The Gist

  • Anticipation fuels AI. Apple’s AI advancements at WWDC drive market excitement and stock value increases.
  • AI myths debunked. Despite rapid advancements, human-level AI remains a distant reality, not as imminent as perceived.
  • Apple’s AI independence. Apple develops in-house language models, emphasizing capability over collaboration with OpenAI.

The scene at Apple’s WWDC (Worldwide Developers Conference) last week was, in a way, emblematic of the times. The tech giant’s AI announcements were massively hyped ahead of the show. The products themselves were interesting, but not the “next iPhone” some expected.

Still, the market loved it, adding hundreds of billions to Apple’s market cap in just a few days. 

A good part of this AI moment is built on anticipation: There’s a belief that models will keep improving, products will get better, and people and companies will buy in. We don’t know for sure where it’s all leading, but it’s heading somewhere, and that seems to be good enough. The demos should work eventually. 

I spent the week in Silicon Valley visiting sources and tech companies — starting at Apple and ending at NVIDIA — to get a sense of where we are on the continuum, who’s poised to lead and how power is shifting. Much of what I learned will land in future stories and Big Technology Podcast episodes. So stay tuned. But here’s what stands out in my notebook:

Room for AI Models to Improve

It doesn’t appear generative AI will hit the resource wall anytime soon — at least according to those closest to the work. AI research houses are focusing on constraints like compute, data and energy. But they also realize there’s room to improve the current set of models by getting better at selecting the right data, fine-tuning the models and building new capabilities like reasoning. Meanwhile, incoming compute improvements should lead to more powerful and efficient training and inference.

The next 18 months will be interesting.

Related Article: Tim Cook’s AI Moment: A Pivotal Shift at Apple

Expectations Might Still Be Unrealistic 

Still, the popular conversation around AI tends to portray human-level artificial intelligence as right around the corner. It’s not. The next generation of models will be impressive, but the release of ChatGPT (built on a version of OpenAI’s GPT-3), followed soon after by GPT-4, made the pace of AI development seem faster to many than it actually is. Training and fine-tuning these models takes a long time.

So, while GPT-5 and its peers will be hyped and hotly anticipated, the push toward reasoning and AI agents may deliver more tangible gains in the short term than sheer model size.
