Record numbers of people are turning to AI chatbots for therapy, reports Anthony Cuthbertson. But recent incidents have uncovered some deeply worrying blindspots of a technology out of control
So, it’s not ChatGPT, it’s all LLMs. And the people who go through this are using AI wrong. You cannot blame the tool because some people prompt it to make them crazy.
So what is the correct usage?
But you can blame the overly eager way it has been made available without guidance, restriction, or regulation. The same discussion applies to social media, or tobacco, or fossil fuels: the companies didn’t make anyone use their products for self-destruction, but they also didn’t take responsibility.
Kitchen knife manufacturers, razor blades, self-help books, Helter Skelter, the list of things that people can “use wrong” is endless.
PEBCAK
Kitchen knives and razor blades are a different category, and so are self-help books. An LLM is a completely different category, and there is no point in comparing a knife to an LLM except as relativization.
All tools are comparable. Pick any tool: it can be used wrong. Refusing the comparison is special pleading, dogmatism, intellectual dishonesty.
If you’re going to refuse entire categories of tools then we are down to comparing AI to AI, which is a pointless conversation and I want no part of it.
Okay, now imagine the tool is advertised in a way that tells you to use it wrong.