AGI Pieces from OpenAI: Industry Shock as AGI Achieved in Just 7 Months! Explore GPT, AI Agents, Sora & Search

There’s something that’s emerging right now that we need to talk about.

And yes, it’s quite shocking.

The thing is, with AGI there’s only one rule to remember: don’t panic.

Let’s go down this rabbit hole and see how far it goes. By now, you’ve probably seen OpenAI’s latest release, Sora, the text-to-video AI generation platform. It’s very cool and a lot of people are very impressed with the lifelike images that it can produce. But a lot of people are missing what this thing is, what it represents.

Recently, there’s been a lot of talk about AGI (artificial general intelligence). When Sam Altman started talking about AGI, he was mocked. People said it wasn’t serious to talk about building AGI. But after OpenAI released ChatGPT, people stopped mocking him. As Altman has put it: “We have been misunderstood and badly mocked for a long time. When we started, when we announced the organization at the end of 2015, I remember at the time an eminent AI scientist at a large industrial AI lab was DMing individual reporters, saying, ‘You know, these people aren’t very good, and it’s ridiculous to talk about AGI. I can’t believe you’re giving them the time of day.’ That was the level of pettiness and rancor in the field toward this new group of people saying we’re going to try to build AGI.” So OpenAI and DeepMind were a small collection of folks brave enough to talk about AGI in the face of mockery.

OpenAI doesn’t get mocked as much now. More people believe that maybe we’ll see AGI in the coming decades. But here’s the thing: everyone, including the people who study this stuff, keeps getting thrown by the exponential growth, by the compounding. We’re beginning to hit the second half of the chessboard, and things are about to get a little bit nutty. Let’s dive in.

There’s this expression, “the second half of the chessboard.” The story goes that the inventor of chess presented his brilliant creation to a grateful Indian king, and the king asked what reward he desired. The inventor asked for something very simple: a single grain of rice on the first square, double that amount on the second square, and so on until every square was covered. So it goes 1, 2, 4, 8, 16, and so on, doubling on each square. This might not seem like a lot, just a few grains of rice, but where this exponential growth, this compounding, gets really hairy is in the second half of the chessboard: the total comes out to roughly a thousand times the amount of rice the world produces in a year, far beyond anything the king could have paid. The point of all this is that AI progress is entering the second half of the chessboard. It gets really hard to predict what that looks like and how it will unfold. A lot of people are asking, when will AGI be here? I don’t like that question. Here’s why.
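
If you want to sanity-check the rice math, here’s a quick back-of-the-envelope calculation in Python. The grain weight and annual-production figures are rough assumptions on my part, not numbers from the original story:

```python
# Back-of-the-envelope math for the chessboard story: one grain on square 1,
# doubling on every square after that.
# The grain weight and world-production figures below are rough assumptions.

GRAIN_MG = 25                      # ~25 milligrams per grain of rice (assumption)
WORLD_TONNES_PER_YEAR = 500e6      # ~500 million tonnes of rice per year (assumption)

first_half  = sum(2**i for i in range(32))        # squares 1-32
second_half = sum(2**i for i in range(32, 64))    # squares 33-64
total       = first_half + second_half            # = 2**64 - 1 grains

def tonnes(grains):
    return grains * GRAIN_MG / 1e9                # milligrams -> tonnes

print(f"first half:  {first_half:.2e} grains (~{tonnes(first_half):,.0f} tonnes)")
print(f"second half: {second_half:.2e} grains (~{tonnes(second_half):.2e} tonnes)")
print(f"total: ~{tonnes(total) / WORLD_TONNES_PER_YEAR:,.0f}x the world's annual rice harvest")
```

Under these assumptions, the first half of the board is about a hundred tonnes of rice; the second half is hundreds of billions of tonnes, roughly a thousand years' worth of harvests compressed into 32 squares.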

This is from the blog “Wait But Why,” from 2015. It had an excellent illustration of what AGI’s arrival would look like. Just pretend that sign says AGI, and this is the AGI train station. Everybody is standing around wondering: when will the AGI train get here? Will it be here in 7 months, or 12 months, or 5 years? Because we’re noticing signs that it’s coming. Hey, look, AGI is arriving; we can see it off in the distance. So when is it going to pull into the station so we can greet it and see what it looks like? Well, here’s the thing: it’s coming fast, and now it’s gone. Did you miss it? Here’s another chart that shows how it’s likely to play out. This is the highest intelligence on Earth, plotted roughly from the beginning of time. Slowly, living things get smarter and smarter, and here’s the twist: this is the point where we create something like self-improving AI, and whoosh, it’s a vertical line. Tim Urban is the person who runs “Wait But Why”; I believe he has a book out now. Apparently, Elon Musk is a big fan of the blog and has tweeted about it in the past. Keep in mind, this blog post is from 2015, right around the time OpenAI was opening its doors and being mocked for working on AGI. Back then, the median expert prediction for AGI was 2040, and for ASI (artificial superintelligence) 2060. Here’s Cathie Wood of ARK Invest, along with some of the analysts who work on this for her. This is what they’ve predicted.

So here’s the chart of when experts believe we will achieve artificial general intelligence, and how that has changed. In 2019, the estimate was 50 years away; a year later, 34 years; a year after that, 18 years; and a year after that, 8 years. Based on this, their forecast was that by 2030, about 6 years from now, we’d have AGI. However, progress has been so much faster that there was a forecast error, and assuming it continues this way, we’re going to have AGI by 2026, 2 years from now. All the other papers we’ve covered here say roughly the same thing: the experts keep getting surprised by the rapid acceleration of AI. Seven years ago, Sam Altman was mocked for saying they were building AGI. Now it’s not crazy to say it’s going to be here soon. When exactly? Well, it doesn’t matter. This is us, and the next moment it will have passed whatever metric we choose to measure AGI by. Before the end of the video, I’ll even show you why there’s a chance it has already been achieved; it just hasn’t been distributed.

In my previous video, we went over something Dr. Jim Fan, a senior research scientist at Nvidia, said. The point was that this video generation model is learning physics. Its ability to simulate physics is an emergent property. As we put in more data and scale up the resources, these abilities emerge; they start existing. More data and more compute translate into these digital brains implicitly acquiring new skills. They learn to do things we don’t teach them. Whenever I post a video talking about this, there’s always a vocal minority in the comment section yelling that this can’t possibly be true, that we’re just imagining things, that none of this is happening, and so on. But it is rapidly becoming the view that more and more people hold. OpenAI is talking about it more openly. They’re not talking about Sora as a little video generator or picture generator. They’re talking about it as a world simulator. They’re saying this is a promising path towards building general-purpose simulators of the physical world, and that Sora serves as a foundation for models that can understand and simulate the real world, a capability they believe will be an essential milestone for achieving AGI.

Now let’s pivot to OpenAI for just a second. How is OpenAI building AGI? Is ChatGPT the thing that will become AGI? Is Sora? Well, not quite. Think of AGI, and eventually ASI, the superintelligent form of AGI, as a collection of pieces, each with its own powers and abilities, that when put together become AGI, the big thing. So what are those pieces? We now know Sora, the world simulator. We know ChatGPT, the fastest-growing app of all time, which can now see, speak, and understand, and is in and of itself a powerful form of AI. What are the other pieces? Well, one of them is agents. Sam Altman hinted at this at the developer conference in November. Actually, it was vague enough that I don’t know if he was really hinting at it, but I, and a lot of other people, assumed that’s what he was talking about: autonomous AI agents. These new leaks more or less confirm we were right, because it seems OpenAI is rapidly moving towards developing autonomous AI agents. Sam Altman has privately called the next iteration of ChatGPT a super-smart personal assistant for work. That agent will take over people’s computers and do a lot of their tasks for them. It will be like an operating system that does things for you. So instead of clicking on things or typing something in, you will communicate with the agent in whatever form that takes, voice or typing, and it will then go and do the things that you request.
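
To make “it will then go and do the things that you request” a bit more concrete, here’s a minimal sketch of the plan-then-execute loop these agent systems are generally described as using. The `call_llm` and `execute` functions are hypothetical placeholders, not any particular product’s API:

```python
# Minimal sketch of an autonomous-agent loop: take a high-level goal, have a
# language model break it into subtasks, then carry the subtasks out.
# call_llm() and execute() are hypothetical placeholders, not a real API.

def call_llm(prompt: str) -> str:
    """Placeholder for a call to a language model; returns its text reply."""
    raise NotImplementedError("wire this up to a real model API")

def execute(subtask: str) -> str:
    """Placeholder for doing a subtask with real tools (browser, email, files...)."""
    raise NotImplementedError("wire this up to real tools")

def run_agent(goal: str) -> list[str]:
    # 1. Ask the model to turn the high-level goal into concrete steps.
    plan = call_llm(f"Break this goal into numbered subtasks:\n{goal}")
    subtasks = [line.strip() for line in plan.splitlines() if line.strip()]

    # 2. Carry out each subtask, feeding the result back so the model can adjust.
    results = []
    for subtask in subtasks:
        outcome = execute(subtask)
        results.append(outcome)
        call_llm(f"Subtask: {subtask}\nOutcome: {outcome}\nRevise the remaining plan if needed.")
    return results

# e.g. run_agent("Find three flights to Tokyo next month and draft an email comparing them")
```

The agent projects mentioned below differ mostly in which tools that `execute` step is allowed to touch; the plan, act, observe loop is the common skeleton.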

We’ve tested a few of these AI agents on this channel. You give one a high-level task; it thinks about it, breaks it up into subtasks, and then starts carrying those out. OpenAI isn’t the only one. Google CEO Sundar Pichai has said the latest technology allows it to act more like an agent over time. Other companies are doing the same thing: the Rabbit R1 device is doing something similar, there’s MultiOn, which we’ve covered here, there’s Open Interpreter, and there’s one called Self-Operating Computer, I believe. There are a bunch of projects trying to do this. All right, so we have ChatGPT, we have Sora, we have an agent. What else is there? Well, OpenAI is reportedly developing a web search product in a challenge to Google, though we’re not sure whether that search product will be separate from ChatGPT or part of it. So let’s say search is the left leg of the Forbidden One, er, AGI. What else is there? The massive, massive amount of compute: the AI chips, the GPUs or TPUs or whatever other processors we need to train these AI models and run inference (inference meaning getting the outputs, the predictions, that we’re looking for).

Altman has already drawn a lot of interest from Middle Eastern funds, as well as some of the Chinese money that has flowed into the space. The US government, one of its agencies I believe, canceled and reversed one of the deals tied to chips made by Rain AI (formerly Rain Neuromorphics), the startup based in San Francisco. But it looks like Sam Altman is not giving up, so now he’s asking the Biden Administration for approval. Sam Altman is said to be looking for as much as $7 trillion in funding to build the infrastructure to produce the chips that are needed. So that’s seven trillion dollars in funding. Actually, scratch that: it’s now eight. But the point is, we need a lot of money to build the factories that will produce the number of chips required to power everything we want AGI to do. Now, once we have all the pieces in place, once we have GPT and Sora and DALL·E and search and the autonomous agents and all the chips... apparently, there’s also a music/sound generation model that hasn’t been released yet but is coming soon. I actually got early access to this Sora-plus-audio model, and here are some early results: two golden retrievers podcasting on top of a mountain. Here’s what that sounds like.

So I’ve got to say, not bad. I expect them to start climbing the iTunes podcast rankings very rapidly. But the point I’m trying to make is that this thing we call AGI is likely going to be a multi-part thing, each part with its own effects and strengths, that when combined becomes something able to do most human jobs. For example, coding assistants are already helping coders work faster, replacing some of the work they have to do. Google quietly launched an internal AI model named Goose to help employees write code faster. Leaked documents show that models like GPT will help with coding, and if you’ve played around with Code Interpreter, aka Advanced Data Analysis, you know how much it is going to help with, well, data analysis: a lot of things we used to need specialists for can now be done with a few sentences. These models can go through Excel sheets, organize them how you want, display various charts, and so on. A lot of the coding jobs, the data analysis jobs, the writing jobs: that piece replaces a lot of the people doing that sort of work.
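
To give a sense of what “a few sentences” replaces, here is roughly the kind of script a Code Interpreter-style tool writes behind the scenes when you ask it to summarize a spreadsheet and chart the result. The file name and column names are hypothetical examples, and this assumes pandas, openpyxl, and matplotlib are installed:

```python
# Roughly what an Advanced Data Analysis-style tool generates from a request
# like "summarize revenue by month from this spreadsheet and chart it".
# "sales.xlsx", "month", and "revenue" are hypothetical example names.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_excel("sales.xlsx")                                  # load the uploaded sheet

monthly = df.groupby("month", as_index=False)["revenue"].sum()    # organize it how you want

monthly.plot(x="month", y="revenue", kind="bar", legend=False)    # display a chart
plt.title("Revenue by month")
plt.ylabel("Revenue")
plt.tight_layout()
plt.savefig("revenue_by_month.png")
```

The analyst work here isn’t the typing; it’s knowing which grouping and which chart answer the question, and the model is increasingly doing that part too.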

Next, Sora. Sora produces videos and images. So think about whom that displaces: what kinds of skills, people, and work environments? Take a look at something like this. Say you wanted to shoot something in Tokyo. Until now, you had to fly over there or hire somebody on location to shoot. You would need the actors, the editors, the photographers, the camera operators. And not only that, you also need all the people who make the equipment: the cameras, the lighting, the microphones, the storage disks. For movies, you needed to hire special effects artists to help you produce visually stunning shots like this. If we can generate footage like that, all those jobs are affected. The next piece is agents, and agents are going to be your do-anything assistant: answering emails, scheduling appointments, doing your research online, completing whatever task you need in Excel, anything where you need things actually done in the digital world, and potentially, at some point, over the phone as well. OpenAI has its Whisper model, which is another piece of this: when you talk to it, it transcribes what you’re saying into text so that GPT-4 can understand you. If you wanted your agent to call a restaurant and, for example, make a reservation for you, that would be something like an agent plus GPT: ChatGPT plus something like Whisper to transcribe the other side of the call. And right now, we’d have to use something like Lyrebird to generate the voice, to make it sound like a human being speaking. So as ChatGPT outputs text, it gets synthesized into an AI voice. But again, as I’ve mentioned before, it sounds like OpenAI has that voice model, that audio model, cooking behind the scenes. The preview I gave you earlier? That was just me breathing into my microphone. I hope that was okay.
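
Putting that reservation example together, here’s a rough sketch of the speech-to-text, chat model, text-to-speech loop described above. `transcribe`, `decide_reply`, and `synthesize_speech` are hypothetical stand-ins for a Whisper-style transcriber, a GPT-style chat model, and a voice generator; none of this is a specific OpenAI API:

```python
# Hypothetical sketch of an agent making a phone reservation:
# speech-to-text (Whisper-style) -> chat model (GPT-style) -> text-to-speech.
# transcribe(), decide_reply(), and synthesize_speech() are placeholders.

def transcribe(audio_chunk: bytes) -> str:
    """Placeholder: turn the restaurant host's speech into text."""
    raise NotImplementedError

def decide_reply(history: list[str], heard: str) -> str:
    """Placeholder: ask the chat model what to say next, given the call so far."""
    raise NotImplementedError

def synthesize_speech(text: str) -> bytes:
    """Placeholder: turn the model's reply into a natural-sounding voice."""
    raise NotImplementedError

def handle_call(goal: str, incoming_audio: list[bytes]) -> list[bytes]:
    history = [f"You are on a phone call. Your goal: {goal}"]
    outgoing = []
    for chunk in incoming_audio:
        heard = transcribe(chunk)                  # speech -> text
        reply = decide_reply(history, heard)       # decide what to say
        history += [heard, reply]                  # keep the conversation context
        outgoing.append(synthesize_speech(reply))  # text -> AI voice
    return outgoing

# e.g. handle_call("Book a table for two at 7pm on Friday", audio_from_phone_line)
```

If OpenAI does ship its own audio model, the last step just swaps in; the shape of the pipeline stays the same.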

And of course, the chips, the GPUs that are needed. That’s the last piece, though I don’t know whether you’d think of it as a piece of AGI or just as something that increases the scale; it depends on how you want to think about it. But the point is, when you have all these little pieces on the board together, the thing that emerges, the thing that starts slowly floating through the portal, well, that’s AGI. That’s the thing that will be able to think, to learn, to do, to produce images and voices, to understand all the images and videos you give it, and to understand what you mean when you say things. So when is that coming? When are we going to have it? Well, this is Jimmy Apples. We’ve mentioned him a few times on this channel before, and as I’ve said before, take everything here with a grain of salt. Jimmy Apples is what appears to be an OpenAI insider, somebody who knows quite a bit about what’s happening at OpenAI and who leaks that information in cryptic tweets every once in a while. Again, I’m not endorsing anything; I’m not saying this is true. In fact, if at some point this person turns out to be completely wrong about something, we can dismiss whatever he predicts, and I feel like my job would get a little bit easier. But here’s the problem: he is eerily accurate, which makes it very difficult to dismiss what he’s saying.

On this channel, we’re going to look at everything: the scientific papers, the data, and also the crazies, the conspiracy theorists. This is going to be a full-spectrum AI channel. So let’s get started. Here’s a user, “Yum Idiot” I’m going to say, and they’re saying: after Sora, it became very difficult for me not to connect the dots and come to the astounding conclusion that OpenAI already has AGI. One dot is obviously the existence of Apples and his leaks, Apples being Jimmy Apples. (I’m curious whether the spelling in the post is a typo or a clever play on words, because these are leaks, right? But whatever.) Normal people think Apples is just a legendary leaker, but in the poster’s opinion, he’s a legendary prophet revealing the divine plans of the god emperor that is Sam Altman. Apples scores very well. The March 14th GPT-4 drop: he predicted that. The Gobi and Arrakis codenames: he predicted those too, and they were later confirmed by The Information. Arrakis, by the way, is the desert planet in Dune, the classic science fiction novel that has recently been adapted into film. I’ve actually never read Dune, surprisingly; it’s one of the classic sci-fi books, and I only recently picked it up and am working through it. Then they continue: the Sam Altman firing, he called that too. This was the thing that truly solidified for me that what this guy is saying likely has some grip on reality. That was October 24th, 2023.

Okay, here’s the full two-hour audio podcast of the two golden retrievers podcasting on top of a mountain. I’m kidding. Bye.

Leah Sirama
https://ainewsera.com/
Leah Sirama, a lifelong enthusiast of Artificial Intelligence, has been exploring technology and the digital world since childhood. Known for his creative thinking, he's dedicated to improving AI experiences for everyone, earning respect in the field. His passion, curiosity, and creativity continue to drive progress in AI.