OpenAI co-founder and former chief scientist Ilya Sutskever has announced the launch of a new AI firm focusing on developing a “safe superintelligence.”

The firm, called Safe Superintelligence Inc., is co-founded by former OpenAI member Daniel Levy and former Apple AI lead Daniel Gross, as per the announcement on June 19 (source).

Safe Superintelligence Inc. believes that safe superintelligence is achievable and views it as the most crucial technical challenge of our time.

The company’s primary goal is to create a safe superintelligence lab with technology as its main product and safety as its top priority. They stated:

“We are building a focused team of exceptional engineers and researchers dedicated solely to safe superintelligence development.”

The company aims to advance its capabilities swiftly while emphasizing safety. They are determined not to be swayed by management overhead, short-term commercial demands, or product cycles.

“This approach allows us to grow without distractions.”

Safe Superintelligence Inc. noted that investors support their commitment to prioritizing safe development above all else.

In a recent Bloomberg interview (source), Sutskever declined to disclose the company’s financial backers or the amount raised, while Gross mentioned that raising capital will not be an issue.

Safe Superintelligence Inc. will be headquartered in Palo Alto, California, with additional offices in Tel Aviv, Israel.

Launch follows safety concerns at OpenAI

The establishment of Safe Superintelligence comes after a conflict at OpenAI involving Sutskever, who was part of a group that sought to remove CEO Sam Altman in November 2023.

Initial reports, including one from The Atlantic, indicated safety concerns within the company during the dispute. A leaked internal memo suggested that the attempt to remove Altman was linked to a breakdown in communication between him and the board of directors.

Following the incident, Sutskever remained out of the public eye for several months and officially left OpenAI in May without providing reasons. Recent events at OpenAI have brought AI safety concerns to the forefront.

OpenAI employees Jan Leike and Gretchen Krueger also departed, citing safety worries. Additionally, reports from Vox (source) mention at least five other safety-conscious employees leaving since November.

In his interview with Bloomberg, Sutskever
