Getting rid of the extreme sycophancy bias isn’t a solution. But it would soften this.
Musk and the righty AI bros are shit, but there’s a grain of truth in their railing about censorship: blunt, task-focused generations (or whatever the particular AI produces) are much healthier than a chatbot bending over backwards to tell you what you want to hear. The local ML community has known this for a while, yet LLMs keep getting more deep-fried and user-pleasing, because that’s what they’re being reinforced to do.
Another great thing would be even a half-assed attempt to indicate, in the chat UI, that ChatGPT is basically roleplaying. But no…
Not a terrible idea: shift the color of the chatbox to signal “just kidding” or even “just guessing”. Might have to suggest that to the local open-source guys.
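As a rough sketch of what the UI side could look like: assuming the frontend had access to some per-response uncertainty score (a big assumption; models don’t expose a well-calibrated one), the color shift itself is trivial. The function name and the amber-to-white scheme here are made up for illustration.

```python
def guess_tint(confidence: float) -> str:
    """Map a hypothetical per-response confidence score in [0.0, 1.0]
    to a CSS background color for the chat bubble: low confidence
    shades toward amber ("just guessing"), high toward plain white."""
    confidence = max(0.0, min(1.0, confidence))
    # Linear blend from amber rgb(255, 191, 0) to white rgb(255, 255, 255).
    g = round(191 + (255 - 191) * confidence)
    b = round(255 * confidence)
    return f"rgb(255, {g}, {b})"
```

The hard part isn’t the tint, of course; it’s getting a confidence signal worth displaying in the first place.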
It should basically always be on for multi-turn chats, heh. It doesn’t need to determine anything, except maybe whether it’s generating code or structured output.
On second thought, it wouldn’t work. LLMs don’t have intent.
Yeah, they’re always “just guessing”