Why Won’t OpenAI Say What the Q* Algorithm Is?

Last week, it seemed that OpenAI—the secretive firm behind ChatGPT—had been broken open. The company’s board had suddenly fired CEO Sam Altman, hundreds of employees revolted in protest, Altman was reinstated, and the media dissected the story from every possible angle. Yet the reporting obscured the fact that our view into the most crucial part of the company is still so fundamentally limited: We don’t really know how OpenAI develops its technology, nor do we understand exactly how Altman has directed work on future, more powerful generations of it.

This was made acutely apparent last Wednesday, when Reuters and The Information reported that, prior to Altman’s firing, several staff researchers had raised concerns about a supposedly dangerous breakthrough. At issue was an algorithm called Q* (pronounced “Q-star”), which has allegedly been shown to solve certain grade-school-level math problems that it hasn’t seen before. Although this may sound unimpressive, some researchers within the company reportedly believed that this could be an early sign of the algorithm improving its ability to reason—in other words, using logic to solve novel problems.

Math is often used as a benchmark for this skill; it’s easy for researchers to define a novel problem, and arriving at a solution should in theory require a grasp of abstract concepts as well as step-by-step planning. Reasoning in this way is considered one of the key missing ingredients for smarter, more general-purpose AI systems, or what OpenAI calls “artificial general intelligence.” In the company’s telling, such a theoretical system would be better than humans at most tasks and could lead to existential catastrophe if not properly controlled.

An OpenAI spokesperson didn’t comment on Q* but told me that the researchers’ concerns did not precipitate the board’s actions. Two people familiar with the project, who asked to remain anonymous for fear of repercussions, confirmed to me that OpenAI has indeed been working on the algorithm and has applied it to math problems. But contrary to the worries of some of their colleagues, they expressed skepticism that this could have been considered a breakthrough awesome enough to provoke existential dread. Their doubt highlights one thing that has long been true in AI research: Assessments of AI advances tend to be highly subjective in the moment. It takes a long time for consensus to form about whether a particular algorithm or piece of research was in fact a breakthrough, as more researchers build on the idea and bear out how replicable, effective, and broadly applicable it is.

Take the transformer algorithm, which underpins large language models and ChatGPT. When Google researchers developed the algorithm, in 2017, it was viewed as an important development, but few people predicted that it would become so foundational and consequential to generative AI today. Only once OpenAI supercharged the algorithm with huge amounts of data and computational resources did the rest of the industry follow, using it to push the bounds of image, text, and now even video generation.

In AI research—and, really, in all of science—the rise and fall of ideas is not based on pure meritocracy. Usually, the scientists and companies with the most resources and the biggest loudspeakers exert the greatest influence. Consensus forms around these entities, which effectively means that they determine the direction of AI development. Within the AI industry, power is already consolidated in just a few companies—Meta, Google, OpenAI, Microsoft, and Anthropic. This imperfect process of consensus-building is the best we have, but it is becoming even more limited because the research, once largely performed in the open, now happens in secrecy.

Over the past decade, as Big Tech became aware of the massive commercialization potential of AI technologies, it offered fat compensation packages to poach academics away from universities. Many AI Ph.D. candidates no longer wait to receive their degree before joining a corporate lab; many researchers who do stay in academia receive funding, or even a dual appointment, from the same companies. A lot of AI research now happens within or connected to tech firms that are incentivized to hide away their best advancements, the better to compete with their business rivals.

OpenAI has argued that its secrecy is in part because anything that could accelerate the path to superintelligence should be carefully guarded; not doing so, it says, could pose a threat to humanity. But the company has also openly admitted that secrecy allows it to maintain its competitive advantage. “GPT-4 is not easy to develop,” OpenAI’s chief scientist, Ilya Sutskever, told The Verge in March. “It took pretty much all of OpenAI working together for a very long time to produce this thing. And there are many, many companies who want to do the same thing.”

Since the news of Q* broke, many researchers outside OpenAI have speculated about whether the name is a reference to existing techniques in the field, such as Q-learning, a method for training AI algorithms through trial and error, and A*, an algorithm for searching through a range of options to find the best one. The OpenAI spokesperson would say only that the company is always doing research and working on new ideas. Without additional knowledge, and without an opportunity for other scientists to corroborate Q*’s robustness and relevance over time, all anyone can do, including the researchers who worked on the project, is hypothesize about how big a deal it actually is—and recognize that the term breakthrough was not arrived at via scientific consensus, but assigned by a small group of employees as a matter of their own opinion.
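For readers unfamiliar with those two techniques, here is a minimal, purely illustrative sketch of textbook Q-learning and A* search on a hypothetical 4x4 grid-world task. It is not anything from OpenAI; every name and parameter in it (the grid size, rewards, learning rate, and so on) is an assumption chosen for the example.

```python
# Purely illustrative: textbook Q-learning and A* search on a made-up grid world.
# This is NOT OpenAI's Q*; every detail below is an assumption chosen for the example.
import heapq
import random

GRID_W, GRID_H = 4, 4                          # hypothetical 4x4 grid
START, GOAL = (0, 0), (3, 3)
ACTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1)]   # the four grid moves


def step(state, action):
    """Deterministic transition: move if the target cell stays inside the grid."""
    nx, ny = state[0] + action[0], state[1] + action[1]
    return (nx, ny) if 0 <= nx < GRID_W and 0 <= ny < GRID_H else state


def q_learning(episodes=2000, alpha=0.5, gamma=0.9, epsilon=0.1):
    """Q-learning: learn the value of each (state, action) pair by trial and error."""
    Q = {((x, y), a): 0.0 for x in range(GRID_W) for y in range(GRID_H) for a in ACTIONS}
    for _ in range(episodes):
        state = START
        while state != GOAL:
            # Epsilon-greedy: mostly exploit current estimates, sometimes explore.
            if random.random() < epsilon:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: Q[(state, a)])
            nxt = step(state, action)
            reward = 1.0 if nxt == GOAL else -0.01   # small step cost, bonus at the goal
            best_next = max(Q[(nxt, a)] for a in ACTIONS)
            # Nudge the estimate toward the observed reward plus discounted future value.
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
            state = nxt
    return Q


def a_star(start=START, goal=GOAL):
    """A*: expand options in order of cost so far plus an estimate of cost to go."""
    def h(s):  # Manhattan distance, an admissible heuristic on a grid
        return abs(s[0] - goal[0]) + abs(s[1] - goal[1])

    frontier = [(h(start), 0, start, [start])]   # (priority, cost, state, path)
    seen = set()
    while frontier:
        _, cost, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        if state in seen:
            continue
        seen.add(state)
        for a in ACTIONS:
            nxt = step(state, a)
            if nxt not in seen:
                heapq.heappush(frontier, (cost + 1 + h(nxt), cost + 1, nxt, path + [nxt]))
    return None


if __name__ == "__main__":
    Q = q_learning()
    first_move = max(ACTIONS, key=lambda a: Q[(START, a)])
    print("Q-learning's preferred first move from the start:", first_move)
    print("A* shortest path:", a_star())
```

The point of the pairing is only that Q-learning learns which action is valuable through repeated trial and error, while A* plans a route by searching options guided by an estimate of the remaining cost. The speculation about Q* imagines some combination of learning and search along these lines, but nothing about OpenAI's actual method is publicly known.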
