I think you are on the right track, Bhagpuss. AI as it exists today isn’t a net ‘evil’, but it can definitely be used for bad purposes. Distinguishing between the two is ‘hard’, so some people just flat-out reject any use of AI in all its forms. But as you suggest, I think in time we will become more accepting of its use within certain boundaries.
I’m still experimenting, figuring out my own personal “AI tolerance”. I feel like it comes down to intent and possibly a matter of degree with lots of room for grey areas. I’m using ChatGPT enough that I decided to at least temporarily sign up for the ‘paid’ subscription so I can play a bit more without running into the limitations of the ‘free’ offering.
If I purport to write a blog, then I feel I should not use AI to generate the body of my posts. And since I don’t purport my blog represents my skill as a visual artist and I’m not creating anything for sale, I feel it is okay to use AI to generate ‘filler’ or ‘illustrative’ images.
But is it okay to use AI to vet what I write and suggest improvements? How about helping me with ideas for my titles, which is something I often struggle with? Those seem okay, but how do I define that ‘line’ distinctly?
I was reading an article earlier today about the NaNoWriMo writing event and the uproar over the rules being clarified to allow the use of AI. I perceived the intent of the change was to allow AI to act as an ‘editor’ for grammatical and sentence-level improvements, but that very intent was considered almost ‘obscene’ by some of the participating writers. I sort of get their point, but it seems a bit wrong-headed as well. It makes me think even more about ‘right’ and ‘wrong’ with AI use.