I got a little curious about local LLM hosting I’ll admit, and was playing with a few models to look for obvious censorship etc.
I’ve tended to make some assumptions about Deepseek due to its country of origin, and I thought I’d check that out in particular.
Turns out that Winnie the Pooh thing is jut not happening.
😂
The Deepseek distills will do pretty much anything you want in completion mode. Just modify their chat template.
My impression of Chinese models going back to Yi is that they’re more uncensored in English modes in particular, but much more hesitant answering in Chinese characters.
…That being said, Deepseek 14B is basically obsolete now. There are much better models in that size/speed class depending on your hardware, including explicitly uncensored ones.
I’ve got fairly low end hardware, this was just easily installable via gpt4all so I gave it a shot. :)
I will play with some others…
Thanks!
If you mean like a laptop, look for MoE models like Qwen3 A3B. And pay attention to sampling, try low or zero temperature first.
Thank you! I do, and I will do!