Please read the comments in the cross post link. Article isn’t trustworthy.
I fell for it, but haven’t actually used it yet. I’ll stick to DeepSeek; at least I know where they stand, they’re not the ones who can use my data against me, and they probably won’t hand it over to Western governments (like mine).
of course; andy’s gotta send all the queries to the white house somehow.
Yuuuuup I had the same thought. I would never trust Proton after this shit. I cancelled our account
It’s so weird to see a company with such strong and straightforward brand chasing trends in such a sloppy manner.
They are unfocused. Spreading themselves thin over many products instead of focusing on the core that got them to where they are.
It’s a shame to see.
It’s not, though. Some users want this kind of stuff. Proton has users who just want to DeGoogle and aren’t nutters that want to buy groceries with Monero.
Plus, this article is straight-up misinformation. Proton lists the LLM models they use: Mistral, OpenHands, and OLMO, which are open source.
The article reads like AI slop, and seems to have no idea how LLMs actually work. There are no LLMs that process encrypted tokens. The article makes it sound like an impossible thing that doesn’t exist is a reasonable expectation.
There are no LLMs that process encrypted tokens.
Check out homomorphic encryption! AFAIK it’s not used in any LLMs just yet, but work is under way and it’s tantalizingly close.
Weird way to write, “You’re right, no LLMs process encrypted tokens.”
And Meta and OpenAI have DoD contracts, and Palantir uses ChatGPT, so I’m sure they’re working on that as fast as they can.
Doesn’t really matter if the tokens are encrypted if the container is
To be clear, I wasn’t saying you’re wrong. I just like homomorphic encryption a lot and love a chance to tell people about it lmao
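Since I’m already evangelizing: here’s the flavor of it with a toy Paillier setup in Python. Paillier is only additively homomorphic (nowhere near the fully homomorphic encryption you’d need to run an LLM on ciphertext), and the primes here are tiny demo values, but it shows the core trick of computing on data you never decrypt.

```python
# Toy Paillier cryptosystem: additively homomorphic encryption.
# Demo only -- tiny hard-coded primes, no padding, not remotely secure.
import math
import random

p, q = 61, 53                    # toy primes (real keys use ~1024-bit primes or larger)
n = p * q                        # public modulus
n2 = n * n
g = n + 1                        # standard generator choice
lam = math.lcm(p - 1, q - 1)     # private key part
mu = pow(lam, -1, n)             # valid shortcut because g = n + 1

def encrypt(m: int) -> int:
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    x = pow(c, lam, n2)
    return ((x - 1) // n * mu) % n

def add_encrypted(c1: int, c2: int) -> int:
    # Multiplying ciphertexts adds the plaintexts underneath.
    return (c1 * c2) % n2

a, b = encrypt(20), encrypt(22)
print(decrypt(add_encrypted(a, b)))  # 42 -- computed without ever decrypting a or b
```

Multiplying two Paillier ciphertexts adds the plaintexts underneath; FHE schemes generalize that to arbitrary computation, which is exactly why they’re still far too slow for anything LLM-sized.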
Thanks. To go back to the source, https://proton.me/blog/lumo-ai claims:
- No logs
- No data sharing
- Zero-access encryption
- Not used to train AI
- Open
… so I’m not claiming it’s efficient, or even useful, but at least it does seem pretty consistent with the values that someone like me (a Proton customer for years) would expect from the company.
Edit: I can’t find the repository, so AFAICT it’s not actually open source, even though they do list the core of it, namely the models, which are open.
Here’s where they say the models are Mistral, OpenHands, and OLMO. The blog post isn’t the only documentation they have.
Apparently no one knows how LLMs work. I’ll give credit where it’s due: Proton seems to have a setup that’s as good as it can possibly get for anyone who can’t run an LLM locally. Since no one seems to know what that means either, “locally” means you run the LLM on your own machine.
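If anyone wants to try that, here’s a minimal sketch using llama-cpp-python and a quantized GGUF model (the file name below is made up; you download a model yourself, and tools like Ollama do the same job with less code):

```python
# Rough local-inference sketch. Assumes: pip install llama-cpp-python, plus a GGUF
# model file you downloaded yourself -- the path below is just an example.
from llama_cpp import Llama

llm = Llama(model_path="./mistral-7b-instruct.Q4_K_M.gguf", n_ctx=4096)

out = llm("Summarize what zero-access encryption means.", max_tokens=128)
print(out["choices"][0]["text"])  # generated entirely on your own hardware; nothing leaves the machine
```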
All LLMs - all of them without exception - process tokens as cleartext. There is no LLM anywhere that can process encrypted tokens. It’s a fundamental limitation of the architecture.
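To make “cleartext tokens” concrete: a tokenizer just turns your text into integer IDs, and anyone holding the tokenizer turns them straight back into text. (tiktoken is OpenAI’s tokenizer, used here purely as an illustration; the models behind Lumo use different tokenizers, but the principle is identical.)

```python
# pip install tiktoken -- OpenAI's tokenizer, used only to illustrate what tokens are.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
ids = enc.encode("my private prompt about my taxes")
print(ids)              # a list of plain integers -- not ciphertext in any sense
print(enc.decode(ids))  # maps straight back to the original text
```

The model’s weights operate on those IDs, so whatever runs the model sees your text.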
What Proton seems to have is TLS encryption from the text you write to the LLM context window, with Proton having sort of “removed their own access” from that step. The LLM processes and responds, with the TLS tunnel to you as the only way in or out. Proton’s servers process the tokens, and once the conversation is done, it all gets flushed. Conceptually it’s not even that hard to set up. But it does rely on the same level of trust in Proton as any of their other services. But hey, if they keep passing security audits, that’s reasonable trust to have.
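Conceptually it’s roughly this (a made-up sketch of the idea, not Proton’s actual code): keep the conversation only in server memory for the lifetime of the session, never write it to disk, and drop it when the chat ends.

```python
# Hypothetical sketch of "no logs, flush when done" -- not Proton's implementation.
from collections import defaultdict

sessions: dict[str, list[dict]] = defaultdict(list)  # session_id -> message history, RAM only

def handle_message(session_id: str, user_text: str, llm) -> str:
    history = sessions[session_id]
    history.append({"role": "user", "content": user_text})
    reply = llm(history)  # the model necessarily sees cleartext tokens at this step
    history.append({"role": "assistant", "content": reply})
    return reply          # sent back to the client over the TLS tunnel

def end_conversation(session_id: str) -> None:
    sessions.pop(session_id, None)  # flush: no transcript survives the session
```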
My only gripe is that Mistral doesn’t get the level of investment and development the Big 6 AI companies get, so it’s GPT-3.5 level on a good day. Well, that and my second gripe: a “basic, but slightly better” tier isn’t included for paid users; it’s a standalone add-on instead.