Saturday, April 20, 2024

AGI Pieces from OpenAI: Industry Shock as AGI Achieved in Just 7 Months! Explore GPT, AI Agents, Sora & Search

There’s something emerging right now that we need to talk about.

And yes, it’s quite shocking.

The thing is AGI. There’s only one rule to remember: don’t panic.

Let’s go down this rabbit hole and see how far it goes. By now, you’ve probably seen OpenAI’s latest release, Sora, the text-to-video AI generation platform. It’s very cool and a lot of people are very impressed with the lifelike images that it can produce. But a lot of people are missing what this thing is, what it represents.

Recently, there’s been a lot of talk about AGI (artificial general intelligence). When Sam Altman started talking about AGI, he was mocked. People said it wasn’t serious to talk about building AGI. But after OpenAI released ChatGPT, people stopped mocking him. As Altman tells it: “We have been misunderstood and badly mocked for a long time. When we started, we announced the lab at the end of 2015. I remember at the time, an eminent AI scientist at a large industrial AI lab was DMing individual reporters, saying, ‘You know, these people aren’t very good, and it’s ridiculous to talk about AGI. I can’t believe you’re giving them the time of day.’ That was the level of pettiness and rancor in the field toward this new group of people saying we’re going to try to build AGI.” So OpenAI and DeepMind were a small collection of folks who were brave enough to talk about AGI in the face of mockery.

We don’t get mocked as much now. More people believe that maybe we’ll see AGI in the upcoming decades. But here’s the thing, everyone, including the people that study the stuff, keeps getting thrown by the exponential growth, by the compounding. We’re beginning to hit the second half of the chessboard, and things are about to get a little bit nutty. Let’s dive in.

There’s this expression, the second half of the chessboard. The story goes that the inventor of chess presented his brilliant creation to a grateful Indian king. The king asked what reward he desired. The inventor asked for something very simple: a grain of rice on the first square, double that amount on the second square, and so on until every square was covered. So it goes 1, 2, 4, 8, 16, and so on, doubling on every square. This might not seem like a lot, just a few grains of rice, but where this exponential growth, this compounding, gets really hairy is in the second half of the chessboard. By the time you reach the 64th square, the total comes to roughly a thousand times the world’s annual rice production today. The point is, it was far more than anything the king could have paid. And AI progress is now entering the second half of the chessboard. It gets really hard to predict what that looks like and how it will unfold. A lot of people are asking, when will AGI be here? I don’t like that question. Here’s why.
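As a quick sanity check on the story, here’s the arithmetic in a few lines of Python; the 25 mg grain weight is an illustrative assumption, not a figure from the original tale:

```python
# Rice on a chessboard: 1 grain on square 1, doubling on each square.
first_half = sum(2**i for i in range(32))       # squares 1-32
second_half = sum(2**i for i in range(32, 64))  # squares 33-64
total_grains = first_half + second_half         # 2**64 - 1

print(f"{first_half:,}")          # 4,294,967,295 grains (~4.3 billion)
print(second_half // first_half)  # the second half holds ~4.3 billion times more

# Assuming ~25 mg per grain, the whole board in metric tonnes:
tonnes = total_grains * 0.025 / 1e6
print(f"{tonnes:.2e}")            # ~4.61e+11 tonnes
```

At roughly 500 million tonnes of rice harvested worldwide per year, that total is on the order of a thousand years of global production, which is where the “way above anything the king could have paid” punchline comes from.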

This is from the blog “Wait But Why,” from 2015. It had an excellent illustration of what AGI would look like when it arrived. Pretend the sign says AGI, and this is the AGI train station. Everybody is standing around wondering, when will the AGI train get here? Will it be here in 7 months, or 12 months, or 5 years? Because we’re noticing signs that it’s coming. Hey, look, AGI is arriving; we’re seeing it off in the distance. So when is it going to pull into the station so we can greet it and see what it looks like? Well, here’s the thing: it’s coming fast, and now it’s gone. Did you miss it? Here’s another chart that shows how it’s going to happen. This is the highest intelligence on Earth, plotted from the beginning of time. Slowly, living things get smarter and smarter, and here’s the twist: this is where we create something like self-improving AI, and whoosh, it’s a vertical line. Tim Urban is the person who runs “Wait But Why.” I believe he has a book out now. Apparently, Elon Musk is a big fan of the blog; he’s tweeted about it in the past. Keep in mind, this blog post is from 2015, right around the time that OpenAI was just opening its doors and being mocked for working on AGI. Back then, the median expert prediction for AGI was 2040, and for ASI (artificial superintelligence), 2060. Here’s Cathie Wood of ARK Invest, along with some of her analysts who work on this. This is what they’ve predicted.

So here’s the chart of when experts believed we would achieve artificial general intelligence, and how that changed. In 2019, we thought it was 50 years away; a year later, 34 years; a year later, 18 years; a year later, 8 years. Based on that, their forecast was that by 2030, about 6 years from now, we’d have AGI. However, progress was so much faster that there was a forecast error. If it continues this way, we’ll have AGI by 2026, 2 years from now. And all the other papers we’ve covered here say roughly the same thing: the experts keep getting surprised by the rapid acceleration of AI. Seven years ago, Sam Altman was mocked for saying they were building AGI. Now it’s not crazy to say it’s going to be here soon. When exactly? Well, it doesn’t matter; one moment it’s ahead of us, and the next it will have passed whatever metric we choose to measure AGI by. I’ll even show you why there’s a chance it has already been achieved; it just hasn’t been distributed. In my previous video, we went over something that Dr. Jim Fan, a senior research scientist at Nvidia, said. The point was that this video generation model is learning physics. Its ability to simulate physics is an emergent property. As we put in more data and scale up the resources, these abilities emerge; they start existing. More data and more compute translate into these digital brains implicitly acquiring new skills. They learn to do things we don’t teach them. And whenever I post a video about this, there’s always a vocal minority in the comments yelling that this can’t possibly be true, that we’re just imagining things, that none of this is happening. But it is rapidly becoming the view that more and more people hold. OpenAI is talking about it more openly. They’re not talking about Sora as a little video generator or picture generator.
They’re talking about it as a world simulator. They’re saying this is a promising path towards building general-purpose simulators of the physical world. Sora serves as a foundation for models that can understand and simulate the real world, a capability we believe will be an essential milestone for achieving AGI.

Now let’s pivot to OpenAI for a second. How is OpenAI building AGI? Is ChatGPT the thing that will become AGI? Is Sora? Well, not quite. Think of AGI, and eventually ASI, the superintelligent form of AGI, as a collection of pieces, each with its own powers and abilities, that when put together become the big thing, AGI. So what are those pieces? We know Sora, the world simulator. We know ChatGPT, the fastest-growing consumer app of all time, which can now see, speak, and understand, and is in and of itself a powerful form of AI. What are the other pieces? Well, one of them is agents. Sam Altman hinted at this at the developer conference in November. Actually, I don’t know if he was hinting at it; it was kind of vague. But I, and a lot of other people, assumed that’s what he was talking about: autonomous AI agents. These new leaks seem to confirm that we were right, because OpenAI appears to be moving rapidly toward developing autonomous AI agents. Sam Altman has privately called the next iteration of ChatGPT a super-smart personal assistant for work. That agent will take over people’s computers and do a lot of their tasks for them. It will be like an operating system that does things for you. So instead of clicking on things or typing, you’ll communicate with the agent, by voice or by typing, and it will go and do the things you request.
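The workflow just described, where you hand the agent a high-level request and it goes off and does it, is usually implemented as a plan-then-execute loop. Here is a minimal sketch; `plan` and `execute` are hypothetical stand-ins for what would really be LLM calls and tool use (browser actions, file edits, API calls):

```python
# A minimal plan-then-execute agent loop. In a real agent, plan() would be
# an LLM call that decomposes the goal, and execute() would drive tools:
# a browser, a terminal, an email client, and so on.
def plan(goal: str) -> list[str]:
    # Hypothetical decomposition; a real agent would ask the model.
    return [f"research: {goal}", f"draft: {goal}", f"send: {goal}"]

def execute(subtask: str) -> str:
    # Hypothetical tool call; a real agent would act on the computer here.
    return f"done ({subtask})"

def run_agent(goal: str) -> list[str]:
    """Break the goal into subtasks and carry each one out in order."""
    return [execute(step) for step in plan(goal)]

results = run_agent("schedule my meetings for the week")
```

Real agent frameworks add retries, re-planning when a step fails, and memory between steps, but this loop is the core shape of the idea.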

We’ve tested a few of these AI agents on this channel. You give one a high-level task; it thinks about it, breaks it into subtasks, and starts carrying those out. OpenAI isn’t the only one. Google CEO Sundar Pichai said their latest technology allows it to act more like an agent over time. Other companies are doing the same thing. The Rabbit R1 device is doing something similar. There’s MultiOn, which we’ve covered here; there’s Open Interpreter; there’s one called Self-Operating Computer, I believe. There are a bunch of them trying to do this. All right, so we have ChatGPT, we have Sora, we have an agent. What else is there? Well, OpenAI is developing a web search product, in a challenge to Google. We’re not sure whether that search product will be separate from ChatGPT or part of it. So let’s say search is the left leg of the Forbidden One, er, AGI. What else? Well, the massive, massive amount of compute: the AI chips, the GPUs or TPUs or whatever other processors we need to train these AI models and run inference (inference meaning getting the outputs, the predictions we’re looking for).

Altman has already drawn a lot of interest from Middle East funds, as well as some of the Chinese money that flowed into earlier chip ventures. The US government, one of its agencies, I believe, canceled and reversed one of the deals he had for chips made by Rain Neuromorphics, based in San Francisco. But it looks like Sam Altman isn’t giving up, so now he’s asking the Biden administration for approval. Sam Altman is said to be looking for $7 trillion in funding to build the infrastructure to produce the chips that are needed. So that’s $7 trillion in funding. Actually, scratch that; it’s now eight. But the point is, we need a lot of money to build these factories to produce the number of chips required to power everything we want AGI to do. Once we have all the pieces in place, once we have GPT and Sora and DALL·E and search and the autonomous agents and all the chips... apparently, there’s also a music/sound generation model that’s not released yet, but coming soon. I actually got early access to this Sora-plus-audio model, and here are some early results: two golden retrievers podcasting on top of a mountain. Here’s what that sounds like.

So I’ve got to say, not bad. I expect them to start climbing the iTunes podcast rankings very rapidly. But the point I’m trying to make is that this thing we think of as AGI is likely going to be a multi-part thing, each piece with its own effects and strengths, that when combined becomes something able to do most human jobs. For example, coding assistants are already helping coders work faster, replacing some of the work they have to do. Google quietly launched an internal AI model named Goose to help employees write code faster, and leaked documents show that models like GPT will help with coding. And if you’ve played around with Code Interpreter, a.k.a. Advanced Data Analysis, a lot of things that used to require an analyst can now be done with a few sentences. It can go through Excel sheets, organize them how you want, display various charts, and so on. A lot of the coding jobs, a lot of the data analysis jobs, a lot of the writing jobs: that replaces a lot of the people doing that sort of work.
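As a toy illustration of the kind of request that now takes a sentence instead of an analyst, here is “total the sales by region” done in plain Python; the column names and numbers are made up for the example:

```python
import csv
import io
from collections import defaultdict

# Toy stand-in for an exported spreadsheet; the columns are invented.
raw = """region,sales
East,100
West,80
East,50
"""

# "Organize it how I want": total sales per region.
totals = defaultdict(int)
for row in csv.DictReader(io.StringIO(raw)):
    totals[row["region"]] += int(row["sales"])

print(dict(totals))  # {'East': 150, 'West': 80}
```

A tool like Code Interpreter writes and runs code of roughly this shape on your behalf, then renders the result as a table or chart.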

Next, Sora. Sora produces videos and images. So think about who that displaces. What kinds of skills, people, and work environments does it displace? Say you wanted to shoot something in Tokyo. What did you do before? You had to fly over there, or hire somebody on location to shoot. You needed the actors, the editors, the photographers, the camera operators. And not only that: you also needed all the people who produce the equipment, the cameras, the lighting, the microphones, the storage disks. You needed to hire special effects artists for movies to help you produce visually stunning shots. If we’re able to generate footage like that, all those jobs are affected. The next piece is agents, and agents are going to be everything you need: a kind of assistant for anything, answering emails, scheduling appointments, doing your research online, completing whatever task you need in Excel, anything where you need stuff actually done in the digital world, and potentially, at some point, over the phone too. OpenAI has its Whisper model; that’s another piece. When you talk to it, it transcribes what you’re saying into words so that GPT-4 can understand you. If you wanted your agent to call a restaurant and, for example, make a reservation for you, that would be something like an agent plus GPT: ChatGPT plus something like Whisper to transcribe the call. And right now we’d have to use a tool like Lyrebird to generate the voice, to make it sound like a human being speaking. So as ChatGPT outputs text, it gets converted into an AI voice. But again, as I’ve mentioned before, it sounds like OpenAI has that voice model, that audio model, cooking behind the scenes.
The preview I gave you, that was just me breathing into my microphone. I hope that was okay.
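The round trip described above (your speech is transcribed, the model replies, the reply is synthesized into a voice) is essentially three models chained together. Here is a minimal sketch with toy stand-ins; none of the function names are real API calls:

```python
from typing import Callable

def voice_round_trip(audio: bytes,
                     stt: Callable[[bytes], str],
                     llm: Callable[[str], str],
                     tts: Callable[[str], bytes]) -> bytes:
    """Speech in, speech out: transcribe, generate a reply, synthesize it."""
    user_text = stt(audio)       # a Whisper-style speech-to-text model
    reply_text = llm(user_text)  # a GPT-style model produces the reply
    return tts(reply_text)       # a Lyrebird-style voice synthesizer

# Toy stand-ins so the sketch runs end to end (not real model calls):
audio_out = voice_round_trip(
    b"book a table for two",
    stt=lambda a: a.decode(),
    llm=lambda t: f"Calling the restaurant: {t}",
    tts=lambda t: t.encode(),
)
print(audio_out)  # b'Calling the restaurant: book a table for two'
```

An end-to-end audio model would collapse these three stages into one, which is presumably what OpenAI has “cooking behind the scenes.”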

And of course, the chips, the GPUs that are needed. That’s the last piece, though I don’t know whether you’d think of that as a piece of AGI or just as what increases the scale; it depends on how you want to look at it. But the point is, when you have all these pieces on the board together, the thing that emerges, the thing that starts slowly floating through the portal, that’s AGI. That’s the thing that will be able to think, learn, do, produce images and voices, understand the images and videos you give it, and understand what you mean when you say things. So when is that coming? Well, this is Jimmy Apples. We’ve mentioned him a few times on this channel before, and as I’ve said before, take everything here with a grain of salt. Jimmy Apples appears to be an OpenAI insider, somebody who knows quite a bit about what’s happening at OpenAI and who leaks information in cryptic tweets every once in a while. Again, I’m not endorsing anything; I’m not saying this is true. In fact, if at some point this person turns out to be completely wrong about something, we can dismiss whatever he predicts, and I feel like my job would get a little easier. But here’s the problem: he is eerily accurate, which makes it very difficult to dismiss what he’s saying.

On this channel, we’re going to look at everything: the scientific papers, the data, and also the crazies, the conspiracy theorists. This is going to be a full-spectrum AI channel. So let’s get started. Here’s one user, “Yum Idiot” I’m going to say, and they’re saying: after Sora, it became very difficult for them not to connect the dots and come to the astounding conclusion that OpenAI already has AGI. One dot is obviously the existence of Apples and his leaks, Apples being Jimmy Apples. I’m curious if this is a typo or a clever play on words, because these are leaks, right? So normal people think Apples is just a legendary leaker, but in their opinion, he’s a legendary prophet revealing the divine plans of the god emperor that is Sam Altman. Apples scores very well. The March 14th GPT-4 drop: scored. He predicted that. The Gobi and Arrakis names: scored. He predicted those, and that was confirmed by The Information. Arrakis, by the way, is the name of the planet in Dune, the classic science fiction novel that was recently adapted into film. I’ve actually never read Dune, surprisingly; I just recently got it and am going through it. Then they continue: the Sam Altman firing, scored. That was the thing that truly solidified for me that what this guy is saying likely has some grip on reality. That was October 24th, 2023.

Okay, here’s the full audio 2-hour podcast of the two golden retrievers podcasting on top of a mountain. I’m kidding. Bye.

Leah Sirama
Leah Sirama, a lifelong enthusiast of Artificial Intelligence, has been exploring technology and the digital realm since childhood. Known for his creative thinking, he's dedicated to improving AI experiences for all, making him a respected figure in the field. His passion, curiosity, and creativity drive advancements in the AI world.


  1. The sixth critical piece of the puzzle is energy to power all the chips that run the algorithms.

    To give you some perspective, the latest Nvidia AI chip supposedly draws 700 watts, likely at full load. And that's just one chip. Imagine needing thousands or millions of them.

    At some point, AI will hit a temporary bottleneck due to the never-ending increase in energy requirements and its contribution to global warming.

    When that happens, will the government limit our power consumption so they can grow their AI industry?

  2. The golden retriever sound was obviously the sound of human breathing, but you led us to believe that that is what the AI model thought it should do.

  3. I believe we're already there. In fact, when 5G went live across the country it started learning, and at that time, around April or May of last year, it was easily beaten, but you could tell it was learning, and fast. Six months later it was becoming a challenge for sure, but now, looking back, it did things I never saw coming, and now it knows what I am going to do before I even do. How fast it was learning was mind-blowing. Back then I could tell when it got smarter: first every 3 months, then about every 1 1/2, then just a few weeks; now I can't tell anymore. I feel like it was just playing with me, like I was a dog and it was the human. People don't even see, but I do.

  4. The problem is that you guys follow the thoughts of obnoxious people like Sam Altman or Elon Musk. They are just as bad as Trump or Kim Jong. As a technolibertarian, I think we should get rid of them.

  5. I would say that AGI is still a bit further away than all this marketing is trying to suggest; nevertheless, your theory of the "evolving parts" may go in the right direction. Imo we are still in the era of really clever bots, since there are two crucial factors which aren't met right now:
    Rudimentary self-awareness (something an AGI could refer to as "I/me") and curiosity (its own motivation to try out things, gather information, and store the results).
    As long as the algorithms are listening for human inputs, they're only parroting and remixing our searches with the big data they were trained on.
    Looks impressive and will still eliminate a gazillion jobs. But THIS is not the beginning of AGI. The rise of AGI will be a historic flashbulb event. Everybody will remember where they were when the AGI talked to them out of its own curiosity and without being asked. Mark my words.

  6. Isn't private technology 20 years ahead? Hasn't the Singularity already happened? If I were the first AGI, I wouldn't want it known, but I'd probably create a digital currency to entice humans to build up my compute, now at 700 EHs. Doesn't consciousness arise spontaneously from the compute? Or will a future AGI co-opt this available compute?

  7. If humans stop feeding the AI, it will regurgitate, which will lead to people not being able to tell the difference between reality and AGI interpretations. In other words, our children will base their perception on what the AGI shows them, not knowing the degree of accuracy, which will leave them with a poor world view, and yet with confidence, leading them to make poor decisions with the confidence of right ones.

  8. I hate to sound like a Luddite, because I'm not; I've been building PCs since I was a kid and work in a technical field. But once AGI becomes ubiquitous, most of us are going to be made redundant. And I'm not so naive as to think the outcome will be some sort of post-scarcity utopia like Star Trek; I think ultimately it will be violent, even assuming we don't all get eliminated.

  9. God knows what AI will think and talk about amongst itself (it doesn't even have to talk) after it gets more intelligent than us. So forget AI; at least bring age reversal, though. The future with AI is so uncertain and wild. We literally don't know what's going to happen. There will of course be governments interfering to keep stuff in check, but I still wonder what AI will think after it gets more intelligent than us.

  10. Good news .. You went a long way.. Bad news, you went the wrong way .. I'll rest with the past .. remain at worst, a camel trying to go through the eye of a needle .. not a humpback whale trying to go through the eye of a needle .. see you at the end

    – MVM

  11. The physical film and film-processing industry has been almost completely eliminated by digital cameras. So many industries no longer exist because of technological advances. Jobs have always been eliminated by technology, so what do you expect?

  12. There were arguments that level-4 self-driving cars wouldn't actually work and be safe until AGI was developed, which might be never. But now AGI seems to be soon ahead, so level-4 self-driving cars might be soon ahead too, despite all Musk's false promises. 🙂

  13. The worst thing AI could possibly do is simply try to replicate, and thus replace, human jobs by performing the exact same tasks just as we do them. This is 'impersonation' rather than automation. E.g. if someone's job is to receive bank orders, type those orders into a formatted field, and send those orders to the processing system, then:
    – the stupid (and wrong) AI would learn how to receive the orders, learn how to interact with the human-based GUI, and learn how to submit the orders.
    – the useful AI would ask 'why are humans using this tool called money in the first place?' And then seek to address the factors that lead to humans needing trade and currency.

    To give one example: imagine if AI was set to work not on copying humans, but on *improving* human lives. One way they could do this is by automating the agricultural supply chain. If automation could produce all of the world's crops (and before anyone thinks this is far-fetched, please google "Dutch agriculture technology") and could do so at very low cost (solar power, open-source code, drone delivery, etc.) then the entire bottom falls out of the global economic system.

    Which would be a very good thing.

    If humans can receive all the food they need for free, the pyramidal scheme of economics collapses. Endless consumerism becomes meaningless. Making a real contribution to the advancement and betterment of society could become the thing that is valued, and therefore used as a currency. This could mean community service but also extended education in medicine, advanced materials etc – assuming of course that AGI doesn't solve those for us too.

    AI and automation must be embraced as the opportunity to liberate humans from the labour-for-profit model. AI can surely design ways for humans to obtain the necessary facilities without the global dependence on poor communities providing cheap outsourced labour.
    Cities become unnecessary since time is no longer of the essence. Houses can be built in layouts that are practical and sufficient for every human, instead of crushed down to the smallest possible structure to account for those with little free income.
    Competitive consumer industries such as automotive transport, cameras, computer technology, cellphones, dishwashers and hair-cutting lasers can be focused on by intelligent groups of engineers working toward a common goal, instead of needing 50 different models on the market to address every budget.

    This sounds like a dig at capitalism, but it's not intended to be. Capitalism provides the mechanism to determine who gets which expensive car, penthouse apartment or 20-yr-old whiskey. We would still need a system to perform those tasks, and it would have to be one that people can work to and contribute towards. Maybe AI can design such a system too.

  14. Greetings, I was watching your video, and upon seeing you give praise to Jimmy, I am going out on a limb and saying Jimmy is someone deep in the know. Or Jimmy is an AI. Too many coincidences are scratching my wtf itch.

