Sam Altman Seems to Imply That OpenAI Is Building God

Ever since becoming CEO of OpenAI in 2019, cofounder Sam Altman has made it the company’s number one mission to build an “artificial general intelligence” (AGI) that is both “safe” and able to benefit “all of humanity.”

And while we haven’t really come to an agreement on what would actually count as AGI, Altman’s own vision remains as lofty as it is vague.

Take this new interview with the Financial Times where Altman dished on the upcoming GPT-5 — and described AGI as a “magic intelligence in the sky,” which sounds an awful lot like he’s implying his company is building a God-like entity.

OpenAI’s own definition of AGI is a “system that outperforms humans at most economically valuable work,” a far more down-to-earth description of what, for Altman, amounts to an omnipotent “superintelligence.”

In an interview with The Atlantic earlier this year, Altman painted a rosy and speculative vision of an AGI-powered future, describing a utopian society in which “robots that use solar power for energy can go and mine and refine all of the minerals that they need,” all without requiring the input of “human labor.”

And Altman isn’t the only one invoking the language of a God-like AI in the sky.

“We’re creating God,” an AI engineer working on large language models told Vanity Fair in September. “We’re creating conscious machines.”

In April, Tesla CEO and OpenAI cofounder Elon Musk — who recently launched his own AI chatbot called Grok, despite having warned for many years about the possibility of an evil AI outsmarting humans and taking over the world — told Fox News that Google founder Larry Page “wanted a sort of digital super-intelligence” which would eventually become “basically a digital god, if you will, as soon as possible.”

“The reason OpenAI exists at all is that Larry Page and I used to be close friends and I would stay at his house in Palo Alto and I would talk to him late in the night about AI safety,” Musk added. “At least my perception was that Larry was not taking AI safety seriously enough.”

Musk ragequit OpenAI in 2018 over disagreements with the company’s direction, a year before Altman was appointed CEO.

The only trouble is that, for someone so dead-set on AGI, Altman still sometimes sounds very hazy on the details.

“The vision is to make AGI, figure out how to make it safe… and figure out the benefits,” he told the FT, a statement that lacks the degree of specificity you’d expect from the head of a company describing its number one goal.

But to keep the ball rolling in the meantime, Altman told the newspaper that OpenAI will likely ask Microsoft for even more money, following a $10 billion investment by the tech giant earlier this year.

“There’s a long way to go, and a lot of compute to build out between here and AGI,” he told the FT, arguing that “training expenses are just huge.”

OpenAI is also conveniently allowing its own board to decide when we’ve reached AGI, according to the company’s website, suggesting there’s clearly plenty of wriggle room when it comes to an already hard-to-pin-down topic.

Whether we’ll all be witness to a divine ascension of technology — or, heck, a robot that can help middle schoolers with their homework — remains unclear at best.

Even Altman seemingly has yet to figure out what the “magic intelligence in the sky” will mean for modern society.

But one thing is for certain: it’ll be an extremely expensive endeavor, and he’s looking for more investment.

More on AGI: Google AI Chief Says There’s a 50% Chance We’ll Hit AGI in Just 5 Years