• 0 Posts
  • 38 Comments
Joined 6 months ago
Cake day: February 10th, 2025

  • The same way they prevent you from transmitting any other illegal content: they fine you and/or throw you in jail if they know you’re doing it.

    It’s trivially easy to detect encrypted messages just by measuring the entropy of each message. A messaging provider would just turn you in if they detect it.

    You could probably get away with peer-to-peer messaging, but your ISP would be able to detect that you’re using unapproved encryption and then turn you in to the government.
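    To illustrate the entropy argument above, here is a minimal sketch of how a provider might estimate Shannon entropy per byte. English text typically measures around 4–5 bits/byte, while encrypted (or well-compressed) data approaches the maximum of 8, so a simple threshold flags it. Note this is a hypothetical illustration, and that compression produces the same high-entropy signature, so real filtering would see plenty of false positives.

```python
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Estimate bits of entropy per byte from byte frequencies."""
    if not data:
        return 0.0
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

plaintext = b"Meet me at the usual place at noon tomorrow." * 100
ciphertext_like = os.urandom(len(plaintext))  # stand-in for encrypted payload

# Natural-language text clusters well below the 8 bits/byte ceiling;
# encrypted or random data sits close to it.
print(f"plaintext: {shannon_entropy(plaintext):.2f} bits/byte")
print(f"random:    {shannon_entropy(ciphertext_like):.2f} bits/byte")
```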




  • This is just a tech bro pipe dream; there is zero support for this idea in academia. It’s just cynical stock manipulation, that, or Zuck is drinking his own Kool-Aid.

    Language models are not intelligent; they’re basically a search engine that doesn’t know what is true or real. We have no idea how to handle knowledge or first-order logic in neural networks, and language models are so complex that we can’t even examine how they store what information they do have.

    This isn’t a problem that can be solved by increasing the parameters and throwing compute at it, it would require multiple Nobel Prize-tier breakthroughs in the field.

    Anyone throwing money at this expecting intelligence, much less superintelligence, is being taken for a ride.



  • What exactly do you envision as an endgame to that line of thinking?

    Neural network-based machine learning isn’t a technology that’s going to be uninvented. Clearly people are using it and it is effective in a lot of fields (AlphaFold, for example).

    Do you think that you can be rude and abrasive enough that the entire world collectively forgets Transformer-based models exist? What is your idea of a perfect ending here?

    It just doesn’t make sense.

    Yes, generative models suck and their output is bad a lot of the time. That’s a reasonable take.

    Declaring that you’re done with politeness and professionalism when discussing machine learning is an extremist viewpoint. If it’s not a topic that you’re willing to have a good-faith conversation about, what is left? Violence?


  • Just for context, this is the developer that spectacularly crashed out of the kernel team after Greg KH kept rejecting his attempts to submit new features to a release candidate.

    When his PRs were rejected and he was told to submit them in a later version, he turned to social media brigading and attacking the kernel team. He did eventually get a response from Linus though…

    https://www.theregister.com/2025/02/07/linus_torvalds_rust_driver/

    In response to Asahi Linux lead developer Hector Martin’s call for Torvalds to “pipe up with an authoritative answer” to resolve the device driver impasse, and Martin’s defense of “shaming on social media” as a way to counter the hostility of Linux maintainers to Rust code, Torvalds dismissed the approach and took aim at Martin.

    “How about you accept the fact that maybe the problem is you,” said Torvalds.

    It’s completely unsurprising to see this kind of unprofessional post from him, where he confidently repeats the same copy-pasted, uneducated takes and misinformation about generative models.


  • This is how I do it. I’ll see something and think ‘hmm, interesting’ and completely forget the details, but I’ll vaguely remember that something exists, and then I can search for it.

    Language models are pretty good at solving the ‘I think I remember something that does this specific thing but don’t know where to look’ kind of problem (don’t just blindly run LLM-generated commands, kids). Then once you have a lead, traditional searching is much easier.