I got a little curious about local LLM hosting, I’ll admit, and was playing with a few models to look for obvious censorship etc.
I’ve tended to make some assumptions about Deepseek due to its country of origin, and I thought I’d check that out in particular.
Turns out that Winnie the Pooh thing is just not happening.
😂
Sadly you can’t expect all that much from small language models like the ones you and I can run on our own hardware. The full DeepSeek is pretty great, although it wouldn’t answer your question on Xi either.