You are right, Bhagpuss, that AI will remain useful until it isn’t. You are also completely correct that AI isn’t just generative large language models.

The problem right now is that basically the only AI game in town, the only technique with hundreds of billions of dollars of investment, is the generative LLM. And the current method used by all the model ‘owners’ involves basing each new model on the previous model plus whatever they can scrape from the internet. That ‘whatever they can scrape from the internet’ part is being rapidly polluted with AI-generated material that cannot be reliably distinguished from ‘clean’ sources.
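To make that feedback loop concrete, here is a toy sketch (my own illustration, not anything from the articles and not any real training pipeline): a one-number ‘model’, a plain Gaussian, is repeatedly refitted to samples drawn from its own previous generation. With no fresh human data entering the loop, the fitted spread drifts toward zero and the rare, interesting outputs in the tails disappear, which is the statistical heart of model collapse.

```python
import random
import statistics

# Toy model-collapse sketch (illustrative only): each "generation"
# is a Gaussian fitted to samples drawn from the previous one.
random.seed(1)
mean, stdev = 0.0, 1.0            # generation 0: the clean human data
for gen in range(1, 501):
    # "Scrape the internet": except the scrape is now purely the
    # previous model's own output, with no clean data mixed in.
    samples = [random.gauss(mean, stdev) for _ in range(20)]
    # "Train the next model": refit on that synthetic data.
    mean = statistics.fmean(samples)
    stdev = statistics.stdev(samples)
    if gen % 100 == 0:
        print(f"generation {gen}: stdev = {stdev:.6f}")
```

Run it and the printed spread shrinks by orders of magnitude. The sample size of 20 is arbitrary and just makes the drift visible quickly; the downward direction is the point.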

As the articles I reference say, model collapse seems inevitable unless something changes, and rather quickly. That collapse is likely the ‘tipping point’ you refer to, the point at which the results are so full of falsehoods and made-up garbage that they become effectively useless.

Right now it seems as if the big AI players are afraid to talk about model collapse. This fear makes sense: they are riding high on hundreds of billions of dollars in investment, and talking about a critical problem that currently has no solution might just cause the money train to grind to a halt. The real ‘fix’ might be to come up with AI that actually has some intelligence, that can properly distinguish ‘right’ from ‘wrong’, ‘good’ data from ‘bad’, ‘truth’ from ‘falsehood’. But today’s generative AI has absolutely no such discrimination.

The deeper problem is that the vast majority of people won’t even notice the difference caused by model collapse. They will continue to assume that AI is ‘correct’ as it serves up increasingly wrong but pleasing answers, subtly broken solutions, and completely made-up references that are hard to disprove because they point to other AI-generated falsehoods.