“Safe superintelligence should have the property that it will not harm humanity at a large scale.”

Keep It Vague

After departing OpenAI amidst controversy, co-founder and former chief scientist Ilya Sutskever is launching his own company to develop “safe” artificial superintelligence.

In a post on X-formerly-Twitter, Sutskever announced the creation of Safe Superintelligence Inc, or SSI for short, after his turbulent exit from OpenAI.

“We will pursue safe superintelligence with a singular focus, goal, and product,” Sutskever explained in a subsequent tweet. “We will achieve this through groundbreaking innovations from a dedicated team.”

Details about Sutskever’s new venture remain scarce, and the co-founder kept his statements deliberately vague in an interview with Bloomberg.

“Fundamentally, safe superintelligence should prioritize not causing harm to humanity on a large scale,” he told the publication. “Following this, we aim for it to be a force for good, guided by core values such as liberty, democracy, and freedom that have historically shaped successful societies.”

While challenges lie ahead, Sutskever appears confident in his pursuit of safe superintelligence.

AI Guys

Although not explicitly stated, Sutskever’s remarks allude to the tumultuous events surrounding his involvement in the dismissal of OpenAI CEO Sam Altman last year.

Speculation holds that concerns over a secretive AI project named Q* helped trigger the rift between Sutskever and Altman, a dispute over safety that now echoes in Sutskever’s latest endeavor.

The path ahead for SSI remains unclear, though co-founder Daniel Gross has expressed confidence in the company’s ability to raise capital.

As SSI joins the growing ranks of OpenAI competitors pursuing advanced AI, its high-profile founders and lofty ambitions may help it stand out in a crowded field.

More on OpenAI: It Turns Out Apple Is Only Paying OpenAI in Exposure

