The BDN Opinion section operates independently and does not set news policies or contribute to reporting or editing articles elsewhere in the newspaper or on bangordailynews.com.
I’m sorry that I didn’t get my column on artificial intelligence in during the recent “AI Safety Summit” at Bletchley Park, the historic World War II decoding center in England. I got distracted by some other stuff that was happening in the Middle East.
The other reason for the delay was that whenever I looked at the video of Rishi Sunak, prime minister of Britain, sitting awestruck at the feet of Elon Musk and saying things like “Given that you are known for being such a brilliant innovator and technologist …,” I would collapse into helpless giggles.
Some people claim that Sunak was pitching for a job with Musk after he likely loses next year’s election and is defenestrated by his own Conservative Party, but that’s unfair. Sunak doesn’t need a post-politics job; his father-in-law owns half of India. He’s just an awkward nerd who apparently wishes that he too were a tech bro.
Anyway, the topic at Bletchley Park was AI. Between U.S. President Joe Biden’s announcement of a U.S. “AI Safety Institute” and Sunak’s “AI Safety Summit” (graced by U.S. Vice President Kamala Harris, King Charles III and Elon Musk), a lot was said about artificial intelligence. Most of it was nonsense.
Harris went for profundity: “Just as AI has the potential to do profound good, it also has the potential to do profound harm.” That’s equally true of drugs, money and sharp knives. She’s still not ready for prime time.
King Charles thought that “The rapid rise of powerful artificial intelligence is no less significant than … the harnessing of fire.” At the risk of committing lèse-majesté, one must reply: No it isn’t, and besides it hasn’t even happened yet.
Musk, never at a loss for words, opined that AI is an “existential threat” because human beings for the first time are faced with something “that is going to be far more intelligent than us.” It was a jamboree of the trite and the portentous.
These deep thinkers were all banging on about existential risk, but that is a contingency that would arise only if the machines were endowed with something called “artificial general intelligence,” that is, cognitive abilities in software comparable or superior to human intelligence.
Such AGI systems would have intellectual capabilities as flexible and comprehensive as those of human beings, but they would be faster and better informed because they could access and process huge amounts of data at incredible speed. They would be a real potential threat, but they don’t exist.
There is not even any evidence that we are closer to creating such software than we were five or 10 years ago. There has been great progress in narrow forms of artificial intelligence, like self-driving vehicles and automated legal systems, but the only threat they pose, if any, is to jobs.
The “large language models” that underlie the chatbots make them expert at choosing the most plausible next word. That may occasionally produce random sentences containing useful new data or ideas, but there is no intellectual activity involved in the process except in the human who recognizes that the output is useful.
There is plenty to worry about in how “smarter” computer programs will destroy jobs (now including highly skilled jobs), and also in how easy it has become to manipulate opinion with deep fakes and the like. But none of that needed a high-profile conference at Bletchley Park.
So why did they all go there and wind up talking about existential threats? Well, one possibility is that the leaders of the tech giants wanted to make sure that they were in on the rulemaking from the start, for there will surely be new rules made about AI over the next few years.
Most of those rules will be about mundane commercial matters, not about threats to human existence. You might feel that it would be inappropriate for the people who will be making money from these commercial activities to be the ones making the rules.
On the other hand, they should certainly be involved in decisions about any existential threats arising from their new technologies, so tactically it makes more sense for them to steer the discussion in that direction. They’re not stupid, you know.