Absolutely, Chris: false confidence in an AI is extremely dangerous, especially when the primary capitalist driver for AI adoption is to replace (not supplement, advise, or enhance) knowledgeable humans.

The thing that particularly bugs me about GPT-5 (the latest model) is that OpenAI and its CEO Sam Altman said quite clearly that a big improvement in the model was that it would no longer be so confident about wrong answers. Instead, they claimed, it would say “I don’t know”.

That is definitely not what I observed in my admittedly trivial testing. Despite my explicitly asking it to be ‘correct’, it confidently presented wrong responses, and it only admitted they were wrong when I then asked it to check its own work. If it was ‘smart’ enough to detect the errors when asked to double-check, why didn’t it double-check before presenting them as truth?

Because, as I’ve said before: current (LLM/generative) AI lacks any sense of ‘truth’ or ‘accuracy’. It’s an insanely complex probability model that, in highly simplified terms, guesses a series of words likely to be a response to the series of words you prompt it with. It is amazing how often it is sort of right, and completely unsurprising how fundamentally unlike intelligence it actually is.
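To make the ‘probability model’ point concrete, here’s a toy sketch in Python. Everything in it is invented for illustration (the tiny vocabulary, the made-up probabilities, the `generate` helper); it has nothing to do with any real LLM’s internals, but it shows the essential shape of the loop: pick whatever word is statistically plausible next, with no notion of correctness anywhere.

```python
import random

# Toy next-word model: for each word, a made-up probability distribution
# over possible next words. A real LLM does something conceptually similar
# over tens of thousands of tokens, with the distribution computed by a
# huge neural network -- but at no point is there a notion of "true" or
# "false", only "likely".
NEXT_WORD_PROBS = {
    "the":       {"capital": 1.0},
    "capital":   {"of": 1.0},
    "of":        {"France": 0.5, "Australia": 0.5},
    "France":    {"is": 1.0},
    "Australia": {"is": 1.0},
    "is":        {"Paris.": 0.7, "Lyon.": 0.3},  # plausible, not checked
}

def generate(start: str, max_words: int = 8) -> str:
    """Sample a continuation word-by-word from the toy distributions."""
    words = [start]
    for _ in range(max_words):
        probs = NEXT_WORD_PROBS.get(words[-1])
        if probs is None:
            break  # no continuation known for this word; stop
        choices, weights = zip(*probs.items())
        # random.choices samples in proportion to the weights: the model
        # emits whatever is *probable*, never checks whether it's *correct*.
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
# Sometimes: "the capital of France is Paris."  (right, by luck)
# Sometimes: "the capital of Australia is Paris."  (confidently wrong)
```

Note that the wrong output is exactly as fluent and exactly as ‘confident’ as the right one, because fluency is the only thing being optimized, and ‘Canberra’ isn’t even in the model’s vocabulary to be chosen. Scale the lookup table up enormously and you get answers that are right far more often, but the loop still never checks a fact.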