OpenAI announced on Tuesday that it had commenced training a new flagship artificial intelligence model set to replace the GPT-4 technology powering its widely-used online chatbot, ChatGPT.

The company, based in San Francisco and recognized as a leading A.I. firm, stated in a blog post that the upcoming model was expected to introduce “the next level of capabilities” as OpenAI pursued the development of “artificial general intelligence” (A.G.I.) – a machine capable of doing anything the human brain can do. The new model would serve as the foundation for various A.I. products, including chatbots, digital assistants similar to Apple’s Siri, search engines, and image generators.

OpenAI also revealed plans to establish a Safety and Security Committee to address potential risks associated with the new model and future technologies.

“While we take pride in creating and releasing models that lead the industry in both capabilities and safety, we welcome a thorough discussion at this critical juncture,” the company stated.

OpenAI is aiming to advance A.I. technology faster than its competitors, all while addressing concerns from critics who warn about the growing dangers posed by the technology, including its potential to spread misinformation, replace jobs, and pose threats to humanity. Experts have varying opinions on when tech companies will achieve artificial general intelligence, but companies like OpenAI, Google, Meta, and Microsoft have been steadily enhancing the power of A.I. technologies for over a decade, showcasing significant advancements roughly every two to three years.

OpenAI’s GPT-4, released in March 2023, empowers chatbots and other software applications to answer queries, compose emails, produce written content, and analyze data. An updated version of this technology, unveiled recently and not yet widely available, can also create images and respond in a highly conversational voice.

Shortly after the introduction of the updated version – called GPT-4o – actress Scarlett Johansson remarked that the voice used sounded “eerily similar to mine.” Johansson disclosed that she had declined attempts by OpenAI’s CEO, Sam Altman, to license her voice for the product and had taken legal action to request that OpenAI cease using her voice. OpenAI countered by stating that the voice was not that of Ms. Johansson.

Technologies like GPT-4o acquire their skills by analyzing extensive digital data comprising sounds, images, videos, Wikipedia entries, books, and news articles. The New York Times filed a lawsuit against OpenAI and Microsoft in December, alleging copyright infringement of news content related to A.I. systems.

Training an A.I. model can span months or even years. Following the training phase, A.I. companies typically spend several additional months testing and refining the technology before making it publicly available.

This suggests that OpenAI’s next model may not be ready for another nine months to a year or longer.

As OpenAI works on training its new model, the newly established Safety and Security committee will focus on formulating policies and procedures to ensure the technology’s safety, the company stated. The committee includes Mr. Altman, alongside OpenAI board members Bret Taylor, Adam D’Angelo, and Nicole Seligman. OpenAI anticipates implementing the new policies in late summer or fall.

Earlier this month, OpenAI announced the departure of Ilya Sutskever, a co-founder and prominent figure in its safety initiatives. His departure fueled concerns that OpenAI was not adequately addressing the dangers posed by A.I.

Dr. Sutskever, along with three other board members, moved to oust Mr. Altman from OpenAI in November, citing a lack of trust in his handling of the company’s plans to develop artificial general intelligence for the benefit of humanity. Following a lobbying campaign by Mr. Altman’s supporters, he was reinstated five days later and has since resumed control of the company.

Dr. Sutskever led what OpenAI termed its Superalignment team, which explored methods to ensure that future A.I. models would not cause harm. Like others in the field, he had become increasingly wary of the potential threats posed by A.I.

Jan Leike, who collaborated with Dr. Sutskever on the Superalignment team, resigned from the company this month, leaving uncertainty surrounding the team’s future.

OpenAI has folded its long-term safety research into its broader efforts to ensure the safety of its technologies. This initiative will be overseen by John Schulman, another co-founder, who previously led the team behind ChatGPT. The new safety committee will supervise Dr. Schulman’s research and offer direction on how the company will address technological risks.
