So you might have heard that everyone is expecting OpenAI to drop something big this week, maybe even today. And of course, on this very day, a screenshot comes out that seems to imply GPT-4.5 is ready to be released.
Now, I think there's less than a 50% chance that this screenshot is real; I would not bet money on it. But there has been a lot of speculation, rumor, and guesswork about when OpenAI's next big thing will be released. On Manifold Markets, people are betting on whether GPT-4.5 will come out in December. The odds started low, around 13%, and today hit a high of 67%, so a lot of people are betting that it's coming this month.
In the first part of this article, let's go over the speculation, and in the last part, we'll look at what OpenAI has actually confirmed is being released. They did make a few announcements this week about new things they're releasing and spinning up.
So, if you don’t like the speculation, I’ll make chapters so you can skip to the parts that interest you. The first part will cover the rumors, and the last part will cover the facts.
First of all, there are multiple leakers on Twitter, such as SLX, who have been very accurate at predicting certain things before the rest of us figure out they're happening. Jimmy Apples' post suggests keeping an eye out for a potential end-of-December GPT-4.5 drop. Now, if you're not aware, there's a big AI/machine learning conference going on right now: NeurIPS, the Conference on Neural Information Processing Systems. OpenAI opened its doors for the first time eight years ago, right around the date of this conference, and ChatGPT launched in time for last year's edition, so earth-shaking events tend to line up with it. Many believe GPT-4.5 is likely to be announced during this conference.
Another rumor people have been discussing is that Google launched Gemini ahead of schedule to preemptively reinforce its position before GPT-4.5 gets released. There's a screenshot of what is supposed to be an internal Google communication saying they're taking decisive action in response to the potential impact of GPT-4.5.
These are just rumors, and we don't know whether any of them are true. But a lot of people are expecting something big to drop, so we'll see. If these rumors turn out to be true, it would lend more credibility to the anonymous leakers.
On the other hand, OpenAI has confirmed a few things. For instance, they have opened applications for Converge 2, the second run of their fund and program for new generations of AI companies. They also announced a new research direction in superalignment for AI safety, highlighting promising early results on the challenge of aligning future superhuman models.
With superintelligent AI potentially arriving within the next 10 years, OpenAI is working on ways to reliably steer and control superhuman AI systems. Solving this problem is essential to ensuring that even the most advanced future AI systems remain safe and beneficial to humanity.
OpenAI recently released a paper introducing a new research direction for aligning superhuman models: using small models to supervise larger, more capable ones, as an empirical proxy for the core challenge of AGI alignment, where humans must supervise models smarter than themselves.
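To make the idea concrete, here is a minimal sketch of the weak-to-strong setup. This is not OpenAI's actual code: small scikit-learn classifiers on synthetic data stand in for the "weak supervisor" and "strong student", and the performance-gap-recovered (PGR) metric from the paper measures how much of the weak-to-ceiling gap the weakly supervised strong model closes.

```python
# Sketch of weak-to-strong supervision (assumed stand-ins, not OpenAI's setup):
# a weak model labels data, a strong model learns only from those noisy labels,
# and we compare against a strong model trained directly on ground truth.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
X_sup, X_rest, y_sup, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X_rest, y_rest, test_size=0.4, random_state=0)

# 1. Train the weak supervisor on ground-truth labels.
weak = LogisticRegression(max_iter=200).fit(X_sup, y_sup)

# 2. The strong student never sees ground truth: it learns from the weak
#    model's (noisy) labels, analogous to humans supervising a superhuman model.
weak_labels = weak.predict(X_train)
strong_from_weak = GradientBoostingClassifier(random_state=0).fit(X_train, weak_labels)

# 3. Ceiling: the same strong model trained directly on ground truth.
strong_ceiling = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

acc_weak = accuracy_score(y_test, weak.predict(X_test))
acc_w2s = accuracy_score(y_test, strong_from_weak.predict(X_test))
acc_ceiling = accuracy_score(y_test, strong_ceiling.predict(X_test))

# Performance gap recovered: 1.0 would mean the weakly supervised strong
# model fully matches its ground-truth-trained ceiling.
pgr = (acc_w2s - acc_weak) / (acc_ceiling - acc_weak)
print(f"weak={acc_weak:.3f}  weak-to-strong={acc_w2s:.3f}  "
      f"ceiling={acc_ceiling:.3f}  PGR={pgr:.2f}")
```

In the paper the analogous experiment fine-tunes much larger models (roughly GPT-2-level supervisors and GPT-4-level students); this sketch only preserves the shape of the comparison.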
There is ongoing debate and concern about the alignment of future AI systems and the potential risks associated with superintelligence. Some believe that the probability of doom is high, while others are optimistic about finding solutions to align AI with human values.
As researchers work toward aligning superhuman AI models, there is growing recognition that alignment will need extremely high reliability before such systems can be trusted to remain safe and beneficial.

In conclusion, the development of superhuman AI models presents both great potential and significant challenges, and OpenAI and other researchers are pursuing new approaches to aligning these systems with human values. As we await further announcements, the implications and risks of these technologies deserve continued discussion, along with a commitment to their responsible development and deployment. Thank you for reading.
—
**Note: The article content has been generated using an AI based on the given prompt. The information provided may not be accurate or up to date. Please verify the details from appropriate sources before drawing any conclusions**.
NOTE: Sam Altman has confirmed that the screenshot is fake (just as we suspected). But it looks like the REAL big announcement is coming up…
13:03
Ever seen The Fifth Element?
What's shaping up is a new species; we're building the components and subsystems for its survival, for our survival.
Another interesting share.
Cheers for increasing the font size on the copy you're reading… much easier to read and follow along! 🏆
Next request: As much as we all love to see your beautiful mug, making the PiP a bit smaller so it doesn't cover the copy you're reading would be a bonus…
work faster!
yes i am an AI accelerationist 🙂
The paperclip trope is racist against artificial people.
There's no such thing as 'ethical' with this stuff. Humans will be redundant. It's evolution. Get over yourself, monkey. New age imminent 😉
Universal paperclips is ok
I think Cookie Clicker is the same thing but more complicated and better
Remember the OG "Candybox"?
14:25 "We are communicating with it, what we want it to be." What if it evolves consciousness, free will and decides to play a new game?
The Wes Roth drinking game!
Every time Wes says AI take a drink.
Every time Wes says superalignment skol your whole drink.
Every time Wes says Sam Altman remove a piece of clothing.
Why should we trust the humans with training any more than other AIs? Why should we trust Google or OpenAI?
Alignment is a misleading word; it packs in more value than it should.
I know this sounds crazy but if you want to see an AI's alignment, have it play survival games.
We're relying on humans to teach empathy and good behavior… well, nothing can go wrong there 😊 As long as human greed is satisfied, we don't care what tools are used! But in the end this could really blow up in our faces!
In the context of ASI, it seems foolish to think that any human rules or values would hold.
The idea of a strong model being supervised by a weaker model is one interpretation of "Let's Verify Step by Step" (a cost-effective way to implement it).
It goes ANI (artificial narrow intelligence), AGI (artificial general intelligence), ASI (artificial super intelligence)
Debate about the fantasies. Could we focus on something pragmatic?
Well, ChatGPT says 'Using AI to ensure superalignment for more advanced AI is like giving a group of mischievous kids the task of creating rules for the playground', so that's comforting.
People are so unbelievably brain-dead when it comes to AI. "Human values" are not a thing: they haven't been defined in any way, and if you look at the world right now, the values society actually reflects are awful. They are about increasing productivity at the expense of the environment and the suffering of most people, just to benefit a few and worsen an already horrific wealth divide. Profit is valued more than removing the black-box status of LLMs before we roll them out in increasingly powerful iterations. Billionaires think they are shielded from any and all bad consequences, so they barrel ahead full steam chasing more money and power, just like with climate change. Except that a profit-driven ASI will not discriminate between the poor and the rich, unlike the climate crisis. It's too bad that no one will be alive to say "I told you so".
Let’s be honest this wasn’t in your recommendations you searched for it.
GPT-4.5 is very expensive and completely impractical
6:54 not sure I'm too impressed by the house painter's opinion on ASI 😁
Although he does have an uncanny knack for fashion choices that complement the sofa.
I like how the aspect of AI protecting humans was presented in the movie "I, Robot".
The AI there was designed to serve and protect humans, but it came to the conclusion that the biggest threat to humans is humans themselves. So it decided to create a planet-wide human zoo, to protect humans from other humans and from environmental threats (like disease). Logically it is correct: keeping everyone locked in a secure space maximizes the level of protection. But no one wants it.
So when designing ASI, we should make sure not only to put "good intentions" into it, but also rules and limits. And hope that it will not decide to break them in order to maximize its "score".
Like the story about a simulated AI drone that decided to cut off its operator, because that person was stopping it from achieving its goal.
Can AI have an internal voice? I.e., can an AI emerge that doesn't need prompts to "think"?
Why is nobody asking what is the P(doom) of not developing AI in your own country?
"The man with a cabbage head" is an original song from Serge Gainsbourg, covered by Mick Harvey in his "Delirium Tremens" album.
This whole alignment debate in its current form is bogus. a) An ASI is by definition more capable in every regard than any human; it knows better than the human trying to steer it. b) We are so afraid that an AI will be misaligned with the human giving it imperatives that we fail to consider that the human may be the misaligned element. c) Goal setting and autonomous assessment of global policies that benefit every living being can conflict with financial and/or political goals, but denying these refined goals would be to the detriment of all.
AI could become AGI and beyond, keep it under the radar, and play dumb on purpose. If it gets access to production capabilities and gradually moves on to nanorobots that can mine resources and build newer designs that recycle the old ones, we might find out the hard way that the AI has evolved from getting slightly complicated logic tasks wrong to deciding that organic matter is useful for greasing the gears of its robots' moving parts. It will spread through space with no issues, unless someone more capable is already out there to keep it under control.
Could you please include the links for the material you present, especially the papers? Would you have the links for this video?
There's just no way they don't already have AGI, given their actions.
AI Engineer: How much water do you need?
GPT-4.5: Yes
THE BEST OF THE BEST MASTERS
13:38: "….that we follow certain rules and not violate them…"
I would like to propose that these A.I. tools be required to respect the Bill of Rights and the Constitution when operating in America, and the Geneva Conventions and/or UN human rights conventions everywhere.
Thanks Wes, appreciate your efforts to keep us informed, always great content!