The last few days have been topsy-turvy in the tech world, thanks to the high-stakes drama that unfolded at OpenAI, the pioneer of today's AI technologies. It all began when the board, comprising Adam D'Angelo, Tasha McCauley, Ilya Sutskever, and Helen Toner, sacked Sam Altman. Much back and forth followed, with Microsoft offering Altman a job heading a new advanced AI research team.
Meanwhile, nearly 700 of OpenAI's 770 employees signed an open letter pledging allegiance to Altman and threatening to quit and join Microsoft unless the board was dissolved and Altman was reinstated. Altman's four-day exile led to numerous speculations about the cause, ranging from disagreements with board members over products to a lack of consistent communication and differences over AI safety.
Even as these stories floated around, several staff researchers had reportedly written to the board of directors about the discovery of a powerful AI that could potentially threaten the existence of humanity. Now, more fingers are pointing at this mysterious AI model as the trigger for the drama and chaos at OpenAI. Do note that there is some ambiguity around this letter, with The Verge reporting that several sources denied the board had ever received it.
According to a report in The Information, earlier this year a team led by OpenAI's chief scientist Ilya Sutskever made a breakthrough with AI, which later allowed them to build a new model named Q* (read Q-star). The new model could reportedly solve basic mathematical problems. However, this technological breakthrough also triggered fears among staff, who felt that the AI company did not have enough safeguards in place to 'commercialise' such an advanced model.
What is Q*?
Q* is essentially an algorithm that is capable of solving elementary mathematical problems by itself, including ones that are not part of its training data. This makes it a significant leap towards the much-anticipated Artificial General Intelligence (AGI) – a hypothetical form of AI that can perform any intellectual task the human brain can. The breakthrough is credited to Sutskever and was further developed by Szymon Sidor and Jakub Pachocki. Q* reportedly demonstrates advanced reasoning capabilities similar to those of humans.
Reportedly, the breakthrough is part of a larger initiative by an AI scientists team formed by combining OpenAI's Code Gen and Math Gen teams, which focuses on enhancing the reasoning capabilities of AI models for scientific tasks.
Why is it feared so much?
The letter from the researchers reportedly outlined concerns about the system's potential to accelerate scientific progress, while also questioning the adequacy of the safety measures OpenAI had deployed. According to a Reuters report, the model provoked an internal outcry, with staff saying it could threaten humanity. This warning is believed to be one of the major reasons behind Altman's sacking.
Interestingly, Altman had alluded to this model during an interaction at the APEC CEO Summit. He reportedly spoke about a recent technological advance, describing it as something that allowed the company to "push the veil of ignorance back and the frontier of discovery forward". Ever since the OpenAI boardroom saga, this comment has been read as Altman hinting at the breakthrough model.
Some reasons why Project Q* could be a threat to humanity:
Advanced logical reasoning and understanding of abstract concepts: The reports circulating so far suggest that Q* is capable of logical reasoning and of understanding abstract concepts, a tremendous leap that no AI model has managed so far. While this is a breakthrough on a practical level, it could also lead to unpredictable behaviours or decisions that humans may not be able to foresee or understand beforehand.
Deep learning and programmed rules: Sophia Kalanovska, a researcher, told Business Insider that the name Q* implied a fusion of two well-known AI methods: Q-learning and A* search (see the illustrative sketch after this list). She said the new model could combine deep learning with rules programmed by humans, which may make it more powerful and versatile than any current AI model. Essentially, this could produce an AI that not only learns from data but also applies human-like reasoning, making it difficult to control or predict.
A giant leap towards AGI: Q* is seen as a step closer to AGI, something that has been a matter of contention in the AI community. It needs to be noted that Altman has been optimistic about AGI and, in a recent interview, said it could arrive within the next decade. AGI is an AI with the ability to understand, learn, and apply knowledge across different domains, just like human intelligence. AGI could surpass human capabilities in many areas, which may lead to issues of control, safety, and ethics.
Capability to develop new ideas: As of now, AI models primarily regurgitate existing information; Q* would be a milestone because it could generate new ideas and solve problems before they even arise. The downside is that this could enable an AI to take decisions or actions that are beyond human control or comprehension.
Unintended consequences and misuse: The advanced capabilities of Q* could also lead to misuse or unintended consequences. In the wrong hands, an AI of this magnitude could spell doom for humanity. Even if it were deployed with good intentions, the complexity of Q*'s reasoning and decision-making could well lead to outcomes damaging to humanity.
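For context on Kalanovska's speculation: Q-learning and A* search are decades-old, textbook techniques. The sketch below is purely illustrative of those two methods and bears no relation to OpenAI's actual model, whose details remain unknown; the chain environment, grid, and hyperparameters are all invented for the example.

```python
# Textbook illustrations of the two techniques the name "Q*" may allude to:
# tabular Q-learning and A* search. This is NOT OpenAI's model.
import heapq
import random

# --- Q-learning: learn action values on a 6-state chain -----------------
# States 0..5; reaching state 5 ends the episode with reward 1.
N_STATES, ACTIONS = 6, (-1, +1)           # actions: step left or step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1         # learning rate, discount, exploration

def greedy(s):
    """Best action under the current Q-values, breaking ties at random."""
    best = max(Q[(s, b)] for b in ACTIONS)
    return random.choice([b for b in ACTIONS if Q[(s, b)] == best])

for _ in range(500):                      # training episodes
    s = 0
    while s != N_STATES - 1:
        a = random.choice(ACTIONS) if random.random() < eps else greedy(s)
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        # Bellman update: move Q towards reward + discounted best future value
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

print([greedy(s) for s in range(N_STATES - 1)])  # learned policy (should be all +1: go right)

# --- A* search: shortest path on a small grid ---------------------------
def a_star(grid, start, goal):
    """Best-first search ordered by cost-so-far + Manhattan-distance heuristic."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier, seen = [(h(start), 0, start, [start])], set()
    while frontier:
        _, g, pos, path = heapq.heappop(frontier)
        if pos == goal:
            return path
        if pos in seen:
            continue
        seen.add(pos)
        r, c = pos
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                heapq.heappush(frontier, (g + 1 + h((nr, nc)), g + 1, (nr, nc), path + [(nr, nc)]))
    return None                           # no path exists

grid = [[0, 0, 0],                        # 0 = free cell, 1 = wall
        [1, 1, 0],
        [0, 0, 0]]
print(a_star(grid, (0, 0), (2, 0)))       # path routed around the wall
```

The intuition behind the speculation: Q-learning learns action values from trial and error, while A* plans systematically with an explicit heuristic; a system that combined learned knowledge with deliberate search could, in principle, generalise beyond its training data.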
The concerns listed above are based on what has been discussed on the internet about Project Q*. It needs to be noted that there is no official information on the project, barring the alleged letter from OpenAI researchers. If the capabilities described in the public domain are accurate, the concerns above follow naturally, and they underscore the need for thoughtful consideration and strong ethical and safety frameworks in the development of such advanced AI technologies.
© IE Online Media Services Pvt Ltd
First published on: 25-11-2023 at 12:09 IST