OpenAI co-founder Ilya Sutskever, who recently left the GPT maker, has announced his new venture: Safe Superintelligence Inc., a company focused on developing a product of the same name, minus the “Inc.”
Currently, the startup seems to consist of a small team of three individuals, a basic HTML webpage, a social media presence, and a defined mission.
The webpage states: “Superintelligence is achievable. Creating safe superintelligence (SSI) is the most crucial technical challenge of our era.”
“We have launched the world’s first dedicated SSI lab, with the singular objective of producing a safe superintelligence.”
“Developing an SSI is the core focus of our mission, our identity, and our entire product strategy. Our team, investors, and business model are all geared towards achieving SSI.”
Although the webpage does not reveal the identity of the investors or the specific business model, it is signed by Sutskever, Daniel Gross (former AI head at Apple), and Daniel Levy, another OpenAI alum.
Starting from its three founders, SSI aims to assemble a lean team of skilled engineers and researchers dedicated solely to SSI, split between Palo Alto, California, and Tel Aviv, Israel.
The company’s vision is outlined as follows: “Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures.”
The emphasis on insulating safety from product cycles is noteworthy, especially given the criticism OpenAI has faced over its handling of AI safety, which prompted it to establish a Safety and Security Committee.
When Sutskever departed OpenAI, he expressed confidence in the organization’s ability to create safe and beneficial artificial general intelligence (AGI). SSI’s mission, however, hints at doubts about OpenAI’s approach, prompting him to pursue a different path.
While SSI has not disclosed specifics regarding its deliverables, timeline, or safety protocols, this launch appears to prioritize visibility over immediate revenue generation.