The former OpenAI chief scientist, known for his role in the brief ouster of CEO Sam Altman, is now launching his own AI company alongside a former OpenAI colleague and a former Apple AI executive.
Ilya Sutskever, along with Daniel Levy and Daniel Gross, is founding Safe Superintelligence, a company devoted to a single goal: building safe superintelligence, which the founders describe as the most important technical problem of our time. Artificial superintelligence (ASI) is widely regarded as the next major milestone in AI, referring to systems whose capabilities surpass human-level general intelligence.
Sutskever’s departure from OpenAI exposed deep governance problems within the company, including disputes over the allocation of compute to safety research and the treatment of departing employees. These episodes cast doubt on the company’s commitment to its founding mission of developing AI for the benefit of all humanity.
Sutskever, one of the most prominent figures in AI, has been instrumental in several groundbreaking advances, including the co-creation of AlexNet, the deep convolutional neural network that revolutionized computer vision in 2012. With the launch of Safe Superintelligence, he aims to build ASI with robust safeguards while insulating the company from short-term commercial pressures.