Dr. Ilya Sutskever has a vision. It is not the usual public vision of high-tech and artificial intelligence (AI) entrepreneurs: to develop products that will change the world and bring them to market as quickly as possible. Nor is it the unstated but implied vision of building a business that will make them billionaires, or at least millionaires. It is a different kind of vision, one uncommon in the technology world these days: to build a product—in the case of his new company, artificial general intelligence—that is first and foremost safe, even if it takes longer to reach the market.

He stated in an interview with Bloomberg that his new company's first product will be a safe superintelligence, and that it will not work on anything else until then. Past experience gives reason to be skeptical of such statements. But there is also reason to believe Sutskever. In his previous role as co-founder and chief scientist of OpenAI, he risked everything to lead a move to oust CEO Sam Altman over concerns that the company was not investing enough in the safety of its products and in preventing their risks. The move ultimately failed.

Ilya Sutskever. (Photo: Avigail Uzi)

About a month ago, realizing that nothing at the company was going to change for the better in this regard, he once again risked everything and left his secure position at OpenAI for a new venture—a company called Safe Superintelligence (SSI). SSI aims to develop artificial intelligence in a way that other companies, driven by commercial pressures, cannot. As he put it, the company will be completely insulated from the external pressures of managing a big, complicated product and being stuck in a competitive rat race.
