Ilya Sutskever, OpenAI’s co-founder and former chief scientist, is launching a new AI company that focuses on safety. In a post on Wednesday, Sutskever introduced Safe Superintelligence Inc. (SSI), a startup with a single goal: to develop a safe and powerful AI system.

The announcement outlines SSI as a startup that prioritizes both safety and capabilities, allowing the company to advance its AI system rapidly while keeping safety at the forefront. It highlights the external pressures faced by AI teams at companies like OpenAI, Google, and Microsoft and emphasizes that SSI’s exclusive focus enables it to avoid distractions and delays caused by management or product cycles.

The announcement states, “Our business model means safety, security, and progress are all insulated from short-term commercial pressures,” allowing the company, in its words, to scale in peace. Alongside Sutskever, SSI’s co-founders include Daniel Gross, who previously led AI efforts at Apple, and Daniel Levy, a former member of technical staff at OpenAI.

While OpenAI forges partnerships with companies like Apple and Microsoft, SSI is unlikely to follow suit in the near future. In an interview with Bloomberg, Sutskever explained that SSI’s first product will be safe superintelligence itself, and that the company has no plans to diversify before achieving that goal.
