Pro-tip for anyone naming a new company, especially in an area as fraught as AI: do not settle on an obvious oxymoron. That’s my note to the well-intentioned Ilya Sutskever, formerly OpenAI’s Chief Scientist and cofounder and now launching his own artificial intelligence firm with an eponymous goal and name: Safe Superintelligence.

While Sutskever isn’t a household name like OpenAI co-founder Sam Altman, he is widely recognized as the guy who may have “solved” superintelligence late last year, a breakthrough that sparked a meltdown at the ChatGPT parent and led to the sudden – but not long-lived – ouster of Altman.

After Altman returned, there were reports that a superintelligence, or artificial general intelligence, breakthrough, something that could quickly lead to AI outstripping above-average human intelligence, had so freaked out the OpenAI board and Sutskever that they sought to put the brakes on the whole thing. Altman was likely not on board with that, so out he went until cooler heads prevailed.


In May of this year, Sutskever announced he was leaving OpenAI, news that arrived just days after the company unveiled the eerily powerful GPT-4o (you remember, the one that appeared to abscond with Scarlett Johansson’s voice?). At the time, Altman expressed sadness at his partner’s departure, and Sutskever would say only that he was working on a project “meaningful to him.” No one thought he was about to start throwing clay and selling pottery.

The new company, announced on both X (formerly Twitter) and on a new, spare website, is that passion project in full. It’s a direct response to what clearly left Ilya shaken at OpenAI. On the site, Sutskever explains, Safe Superintelligence “is our mission, our name, and our entire product roadmap, because it is our sole focus. Our team, investors, and business model are all aligned to achieve SSI.”

To achieve this goal, the company will pursue superintelligence and safety in tandem, with an emphasis, it seems, on the former.

I suspect Sutskever is not much of a pop culture, movie, or even sci-fi aficionado. Otherwise, how could he or any of his team avoid snickering when saying the company name out loud? He could be forgiven for missing 2020’s poorly reviewed comedy Superintelligence in which, according to IMDB, “…an all-powerful Superintelligence chooses to study average Carol Peters, the fate of the world hangs in the balance. As the A.I. decides to enslave, save or destroy humanity, it’s up to Carol to prove that people are worth saving.”

While the movie got an abysmal 5.4 rating, it’s not alone in its dire predictions of humans versus superintelligence. The term has been around for well over a decade, and while few would deny its potential, it’s never had the full sheen of hope and promise. I stumbled on a 2014 book by Nick Bostrom, Superintelligence: Paths, Dangers, Strategies. Note that “Dangers” gets a plum second billing. The book description ponders, “Superintelligence asks the questions: What happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us?”

There is scarcely a TV show or movie that sees superintelligent AI differently. When artificial intelligence is smarter than us, all bets are off. It’s the free-floating anxiety of culture, letters, and everyone I know.

Reassure us

Not a day goes by now when I do not have a conversation about AI. It’s not just at work where you’d expect it. It’s with my wife and adult children. It’s at parties and TV shoots. A mixture of excitement and dread is common. No one knows exactly where it’s going and most share a simmering fear that AI will outstrip human intelligence and doom first our careers and then us all. They don’t know the term “superintelligence” but the concept is crystal clear in their minds. It’s not just about AI that’s smarter than us, it’s the potential of superintelligence living in all the devices we carry in our pockets and use on our desktops.

This week, dozens of new laptops arrived with Microsoft’s Copilot+ baked deep inside the silicon. It is not, to be clear, anything approaching superintelligence. In fact, the demos I saw present a quite narrow viewport through which to view AI’s true system-level potential. But as someone noted to me yesterday, if all these AIs get smarter and become aware of each other and of us, especially our foibles, what’s to say they don’t just take control of those systems and our lives?


As someone who covers all this, I can tell you with some certainty that’s not going to happen, at least not in my lifetime.

Even so, I once thought general AI or superintelligence might arrive when I’m a doddering old fool. Now, I predict 18 months.

That’s why I find Sutskever’s company name almost comical. AI development is moving at an exponential pace. Imagine a soapbox racer hurtling downhill, brakeless, with a driver who understands how only 80% of the controls work; you get the idea.

There is no such thing as “Safe Superintelligence.” Responsible Superintelligence is possible and, if I’d been in the room when Sutskever and his team were naming the company, I would’ve suggested it. Ultimately, that’s all any of these AI companies can promise: acting in responsible and, perhaps, humane ways. That may lead to “safer” superintelligence, but full safety is illusory at best.
