AI in 2026

Of all the hot takes on what will happen to AI in 2026, I'm drawn to this one from Philip Ball:

It’s unlikely that 2026 will be a make-or-break time for AI – many aspects of it are here to stay – but there could be turbulence for the industry, particularly if the investment bubble bursts, as many anticipate. Optimistically, a market correction – more properly, an awakening from the hype and fantasy – could recalibrate our perceptions, exposing the bluster of AI CEOs on “superintelligence” and restoring a measured appreciation of how, in specialised applications, AI could be a valuable tool. 

Sounds about right.

* * *

A sentiment I don't agree with, in my local park. It reads "Say 'no' to AI", and I had to use AI to decode what it says underneath: "humans do not need it" and "only oligarchs do".

Personally, I'm saddened that AI appears to have become synonymous with LLMs. Not because of the technology—I'm amazed, in a good way, that so much is possible with the neat tricks of using a lot of data and compute.

No, it's that there's more to AI than LLMs. The failings of LLMs are just that: failings of LLMs, not of AI as a whole. The field will find better ways.

But I understand that's the way the cycle works. When I did my undergraduate degree in AI in the mid-1980s, a bubble burst soon after: an AI winter that closed down opportunities.

Rightly so, as nothing quite worked.

Until things worked again, and here we are.

I don't know what will be next. I like neuro-symbolic AI, open-endedness, and perhaps a little bit of environmental intelligence would be a good idea.

My own interest is more in Cognitive Science ("mind as machine").