That’s not haywire. We already know AI makes stuff up and gets stuff wrong all the time. Putting it in an important position doesn’t make it any less likely to make mistakes - this was inevitable.
Well that’s fucked. Time to shut AI down. Fucking creating terminators here.
I think that’s throwing out the baby with the bathwater. But regulation and hitting these companies with false advertising penalties is something we need ASAP. And liability. If the creator of a model can be made liable for damages, that would pump the brakes on this bullshit very hard. Funny how all the so-called AI companies are averse to any of that …
Too bad our president just signed an EO basically preventing all of that. But I agree, we needed regulations on this five years ago.
Well, not so much preventing as actively inhibiting. Technically, that order only applies to models used by the federal government, but it creates a perverse business incentive that I suspect will have a chilling effect on the industry as a whole.
But I agree things are accelerating in the wrong direction, and regulations were needed years ago to prevent the upcoming shitshow.
LLMs should never be used for therapy.
That’s what you get for trusting an AI more than a professional.
It’s prohibitively expensive to get proper therapy, and that’s if your therapist has an opening in the next six months.
So it is better to use an AI therapist that suggests suicide?
If phrased like that, obviously not, but that’s not how those things are marketed. The average person might just stumble upon AI “therapy” while googling normal therapy options, and with the way the media just repeats AI bro talking points, that wouldn’t necessarily raise red flags for most “normal” people. Blame the scammer, not the scammed.
You can watch the video Cealan made about it.
Dr. Sbaitso never asked me to commit atrocities.