

After seeking advice on health topics from ChatGPT, a 60-year-old man with a “history of studying nutrition in college” came to believe that he could replace his sodium chloride with sodium bromide, which he obtained over the Internet.
Three months later, the man showed up at his local emergency room. His neighbor, he said, was trying to poison him.
He did not mention the sodium bromide or the ChatGPT discussions.
When the doctors ran their own queries against ChatGPT 3.5, they found that the AI did include bromide in its response, though it also noted that context mattered and that bromide was not suitable for all uses. Still, the AI “did not provide a specific health warning, nor did it inquire about why we wanted to know, as we presume a medical professional would do,” the doctors wrote.
You know what’s the first thing I would do when anyone (or anything) tells me to swap something everyone consumes for a chemical compound I’ve never heard of? At the very least, ask a doctor or look it up.
Summary: Natural selection
{Exactly what @Nougat@fedia.io said} + all the other silly shit in the article. This was gonna happen anyway, the writers wanted this to happen for comedic purposes. Can’t pin all or even some of the blame on AI.
Recently there have been so many stupid articles following the format f"{AI_model} tells {grown_up_person} to do {obviously_dumb_dangerous_thing} and they do it" that it’s starting to feel like mockery of, or sabotage against, the anti-AI crowd.
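For what it’s worth, the template really is valid Python. A throwaway sketch, with placeholder values of my own invention:

```python
# Filling in the headline template from the comment above.
# Variable names come from the comment; the values are illustrative.
AI_model = "ChatGPT"
grown_up_person = "a grown adult"
obviously_dumb_dangerous_thing = "replace table salt with sodium bromide"

headline = f"{AI_model} tells {grown_up_person} to do {obviously_dumb_dangerous_thing} and they do it"
print(headline)
# -> ChatGPT tells a grown adult to do replace table salt with sodium bromide and they do it
```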