Former OpenAI Chief Scientist Launches ‘Safe’ Rival AI Lab

Renowned AI researcher Ilya Sutskever, formerly the chief scientist at OpenAI, has launched a new AI research firm focusing on an area he believes was neglected by his previous employer: safety.

The new company, Safe Superintelligence Inc. (SSI), was co-founded by Daniel Gross, a former AI lead at Apple, and Daniel Levy, who previously worked at OpenAI. Its singular focus is building safe superintelligence by advancing safety and capabilities in tandem.

On Wednesday, the new firm tweeted its commitment to pursuing safe superintelligence through groundbreaking innovations from a small, dedicated team.

Sutskever left OpenAI in May 2024 after internal conflict, particularly over how the company weighed safety against commercial priorities. With SSI, he aims to avoid the distractions of management overhead and product cycles common at larger AI companies.

An official statement on the SSI website emphasizes the company’s commitment to keeping long-term safety, security, and progress insulated from short-term commercial pressures.

Unlike OpenAI, SSI will focus solely on research rather than commercializing AI models, with a safe superintelligence as its only product.

By “safe,” Sutskever means safety in the sense of nuclear safety rather than mere “trust and safety.” The company’s stated mission is to keep development aligned with human interests rather than chasing shiny products.

Sutskever envisions SSI’s systems as more versatile and advanced than current models, with the ultimate goal of a safe superintelligence grounded in values like liberty and democracy.

With offices in the United States and Israel, SSI is currently hiring, offering candidates a chance to help solve what it describes as the most important technical problem of our time.

This initiative follows OpenAI’s decision to disband its Superalignment team, which was responsible for long-term AI safety, a move that prompted key departures and criticism.

Former OpenAI safety researchers, including Leopold Aschenbrenner and Jan Leike, voiced concerns over the company’s safety practices before departing, with some continuing safety-focused AI work at rival firms.

Gretchen Krueger, a policy researcher at OpenAI, also left, citing similar concerns about the company’s safety priorities.

Separately, allegations by former board members of a toxic work culture at OpenAI led the company to change its employee policies and practices.

Edited by Ryan Ozawa.