So it’s understandable that the new kid on the block, artificial intelligence, has been having some trouble making its presence felt. Yet the so-called ‘godfather of Artificial Intelligence’, scientist Geoffrey Hinton, who last year was awarded the Nobel Prize in Physics for his work on neural networks, sees a 10% to 20% chance that AI will wipe out humanity in the next three decades.

We will come back to that, but let’s park it for the moment because the near-term risk of an AI crash is more urgent and easier to quantify. This is a financial crash of the sort that usually accompanies an exciting new technology, not an existential crisis, but it is definitely on its way.

When railways were the hot new technology in the United States in the 1850s, for example, there were five different companies building railways between New York and Chicago. They all got built in the end, but most were no longer in the hands of the original investors and a lot of people lost their shirts.

We are probably in the final phase of the AI investment frenzy right now. We’re a generation on from the dot-com bubble of the early 2000s, so most people have forgotten about that one and are ready to throw their money at the next. There are reportedly now more than 200 AI ‘unicorns’ – start-ups ‘valued’ at $1 billion or more – so the end is nigh.

The bitter fact that drives even the industry leaders into this folly is the knowledge that after the great shake-out not all of them will survive. For the moment, therefore, it makes sense for them to invest madly in the servers, data-centres, semiconductor chips and brain-power that will define the last companies standing.

The key measure of investment is ‘capex’ – capital expenditure – and it’s going up like a rocket even from month to month. Microsoft is forecasting about $100 billion in capex for AI in the next fiscal year, Amazon will spend the same, Alphabet (Google) plans $85 billion, and Meta predicts between $66 billion and $72 billion.

Like the $100 million sign-on fees for senior AI researchers being poached from one big tech firm by another, these are symptoms of a bubble about to burst. Lots of people will lose their shirts, but that is just part of the cycle. AI will still be there afterwards, and many uses will be found for it. Unfortunately, most of them will destroy jobs.

The tech giants themselves are eliminating jobs even as they grow their investments. Last year 549 US tech companies shed 150,000 workers, and this year the jobs are disappearing even faster. If that phenomenon spreads across the whole economy – and why wouldn’t it? – we can get to the apocalypse without any need for help from Skynet and the Terminator.

People talk loosely about ‘Artificial General Intelligence’ (AGI) as the Holy Grail, because it would be as nimble and versatile as human intelligence, just smarter – but as tech analyst Benedict Evans says, “We don’t really have a theoretical model of why [current AI models] work so well, and what would have to happen for them to get to AGI.”

“It’s like saying ‘we’re building the Apollo programme but we don’t actually know how gravity works or how far away the Moon is, or how a rocket works, but if we keep on making the rocket bigger maybe we’ll get there.’” So the whole scenario of a super-intelligent computer becoming self-aware and taking over the planet remains far-fetched.

Nevertheless, old-fashioned 2022-style ‘generative’ AI will continue to improve, even if Large Language Models are really just machines that produce human-like text by estimating the likelihood that a particular word will appear next, given the text that has come before.
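That one-line description of a Large Language Model – a machine that estimates which word is likely to come next, given the text so far – can be illustrated with a toy word-pair (bigram) counter. This is purely illustrative, not how a real LLM works internally (those use neural networks trained on vast corpora); the tiny corpus and function names here are invented for the sketch.

```python
from collections import Counter, defaultdict

# Toy illustration: estimate the likelihood of the next word
# from counts of word pairs in a small sample text.
corpus = "the cat sat on the mat and the cat slept".split()

# Count how often each word follows each preceding word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word_probs(prev):
    """Probability of each candidate next word, given the previous word."""
    counts = following[prev]
    total = sum(counts.values())
    return {word: n / total for word, n in counts.items()}

print(next_word_probs("the"))  # 'cat' follows 'the' twice, 'mat' once
```

A real model conditions on far more context than one word and learns the probabilities rather than counting them, but the basic job – scoring likely continuations – is the same.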

Aaron Rosenberg, former head of strategy at Google’s AI unit DeepMind, reckons that no miraculous leaps of innovation are needed. “If you define AGI more narrowly as at least 80th-percentile human-level performance [better than four out of five people] in 80% of economically relevant digital tasks, then I think that’s within reach in the next five years.”

That would enable us to eliminate at least half of the indoor jobs by 2030, but if the change comes that fast it will empower extremists of all sorts and create pre-revolutionary situations almost everywhere. That’s a bit more complicated than the Skynet scenario for global nuclear war, but it’s also a lot more plausible. Slow down.


Author

Gwynne Dyer is an independent journalist whose articles are published in 45 countries.
