OpenAI's former chief scientist is launching a new AI startup focused on safe 'superintelligence'

After nearly a decade as chief scientist of the artificial intelligence startup OpenAI, Ilya Sutskever departed in May. Just one month later, he has launched his own AI company — Safe Superintelligence Inc.

“We have started the world’s first straight-shot SSI lab, with one goal and one product: a safe superintelligence,” Sutskever wrote Wednesday on X, formerly Twitter. “We will do it through revolutionary breakthroughs produced by a small cracked team.”

Sutskever, along with former Apple AI director Daniel Gross and ex-OpenAI researcher Daniel Levy, has founded the startup with offices in Tel Aviv and Palo Alto, California.

At OpenAI, Sutskever previously co-led the Superalignment team — alongside Jan Leike, who now works at Anthropic, an AI firm founded by former OpenAI employees — which focused on controlling AI systems to ensure their safety and prevent harm to humanity. The team was disbanded after the pair's departures.

Safe Superintelligence, as its name suggests, will prioritize safety efforts similar to those of Sutskever's prior team.

“SSI is our mission, our name, and our entire product roadmap, because it is our sole focus,” the company’s founders stated in a public letter. “Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures.”

The company also pledges that its singular focus will keep management overhead and product cycles from hindering its work. Several former OpenAI members, including founding member Andrej Karpathy, have left the organization recently, and some former staff members signed an open letter expressing concerns about "serious risks" at OpenAI regarding oversight and transparency.

Sutskever, then an OpenAI board member, took part in the board's attempt in November to remove co-founder and CEO Sam Altman, who was reinstated days later. The board reportedly had reservations about Altman's handling of AI safety, along with allegations of misconduct; ex-board member Helen Toner said Altman's deceit and manipulation created an environment that executives described as "toxic abuse."