Ilya Sutskever, a co-founder of OpenAI, has launched a new company to address a crucial issue in technology: the potential consequences of AI surpassing human intelligence.

This concern is central to Safe Superintelligence Inc.’s (SSI) mission. The company’s website describes building safe superintelligence as “the most critical technical problem of our time.”

Sutskever founded SSI together with OpenAI engineer Daniel Levy and former Y Combinator partner Daniel Gross. The company aims to prioritize safety in AI development as highly as overall capability.

Managing a Powerful Tool

Ilya Sutskever has long considered the potential advantages and challenges of a superintelligent AI. In a 2023 post on OpenAI’s blog, Sutskever and Jan Leike explored the possibility of AI systems surpassing human intelligence.

Ilya Sutskever co-founds Safe Superintelligence Inc.
Credit: Stanford HAI

The post highlighted that there is currently no solution for controlling a potentially superintelligent AI. Current alignment techniques work for existing systems because they rely on human supervision, but that supervision is unlikely to scale to systems far smarter than their overseers; new scientific and technical breakthroughs are needed.
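The supervision bottleneck is easy to see in miniature. The sketch below is a toy illustration with invented data, not OpenAI’s or SSI’s actual method: it fits a tiny reward model from human preference labels, so the learned signal can only be as good as the human judgments behind it.

```python
# Toy sketch (hypothetical data) of why alignment via human supervision has a ceiling:
# a reward model is fit from human preference labels over pairs of responses,
# so its quality is bounded by what human labelers can reliably judge.
import numpy as np

rng = np.random.default_rng(0)

# Toy "responses" represented as feature vectors; in practice these would be
# model outputs embedded in some representation space.
n_pairs, dim = 200, 5
a = rng.normal(size=(n_pairs, dim))
b = rng.normal(size=(n_pairs, dim))

# Hypothetical human labels: 1 if a human judged response `a` better than `b`.
true_pref = rng.normal(size=dim)               # stands in for human judgment
labels = ((a - b) @ true_pref > 0).astype(float)

# Fit a linear reward model on the preference pairs with a logistic
# (Bradley-Terry-style) objective, trained by plain gradient descent.
w = np.zeros(dim)
lr = 0.1
for _ in range(500):
    logits = (a - b) @ w
    probs = 1 / (1 + np.exp(-logits))
    grad = (a - b).T @ (probs - labels) / n_pairs
    w -= lr * grad

accuracy = (((a - b) @ w > 0) == labels.astype(bool)).mean()
print(f"reward model agrees with the human labels on {accuracy:.0%} of pairs")
```

The point of the toy: every bit of the learned reward signal comes from human comparisons, so once a system’s behavior outstrips what humans can evaluate, that signal, and any alignment built on it, stops scaling.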

Sutskever’s departure from OpenAI reflects his desire to focus on achieving safe superintelligent AI through SSI. The company’s website underscores this singular focus, aiming to attract top engineers and researchers dedicated solely to this goal.

In an interview with Bloomberg, Sutskever described engineering safety into the AI system itself as a key approach.

He compared the necessary precautions to nuclear safety, signaling the level of rigor he intends for the technology.

While specifics about SSI’s future plans are scarce, the founders’ commitment to the safe implementation of superintelligent AI tools is evident. SSI is a company worth monitoring in the coming years.
