AI is interesting to experiment with, Bhagpuss. And the ‘black box’ part of things is definitely true.

As Jeromai indicates, right now I think there is a lot of experimentation and intuition required to craft ‘effective’ prompts. Some of the AIs, at least on the paid tiers, can provide ‘logs’ or scripts showing how they proceeded (including which references they used) based on your prompt. And that can help folks who are more serious about it than I am to get more robust results.

I currently use ChatGPT almost exclusively, as I am paying the monthly fee for it: that means I’m probably missing a lot of factors relevant to the other AIs out there. But I am pretty certain that all of today’s AIs have intentional ‘blocks’ or restrictions on their investigative behaviour. These prevent them from doing the kind of deep relational dives across multiple sources, both ‘public’ and ‘nearly public’ (e.g. services like license, court, or property registries that you have to pay a few bucks each month to peruse), that would likely scare the bejeezus out of most people.

Any professional investigator, particularly anyone working for a government anywhere in the world, likely has access to an AI with all of those restrictions removed. This is why establishing reasonably strict laws regarding how our data can be used (like the GDPR in the European Union), convoluted and frustrating as they may be, is not really ‘optional’. Sadly, no such laws exist across most of the world.