Just another reddit refugee

Avatar/PFP by TmiracleART

  • 0 Posts
  • 7 Comments
Joined 2 years ago
Cake day: June 11th, 2023

  • You misunderstood: current gen isn’t getting price drops, while previous gens usually did. The current-gen PS5/Pro and Xbox Series S/X are all actually more expensive now, factoring in inflation (and excluding the impact of the tariffs), than they were at launch. Since the Switch 2 literally launched two months ago, we can’t really talk about price drops for it, so the comparison is to the Switch 1. The article headline is correct, and all of this is in the body of the article.




  • second, more attention is given to the last bits of input, so as chat goes on, the first bits get less important, and that includes these guardrails

    This part is something that I really can’t grasp for some reason. Why do LLMs like…lose context the longer a chat goes on, if that makes any sense? Especially context that’s baked into the system prompts, which I would think would be a perpetual thing?

    I’m sorry if this is a stupid question, but I truly am an AI luddite. My roommate set up a local Deepseek server to help me determine what to cook with what’s almost expired in our fridge. I’m not really having long, soulful conversations with it, you know?
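
    For what it’s worth, here’s the one mechanical bit I think I do get, which might make my question clearer: every turn, the frontend has to re-pack the system prompt plus the whole transcript into one finite window, so either old turns eventually get dropped or a fixed-size system prompt ends up competing with an ever-growing pile of chat. Something roughly like this totally made-up sketch (hypothetical budget, crude word count instead of a real tokenizer):

        # Totally hypothetical numbers; a word count stands in for a real tokenizer.
        CONTEXT_BUDGET = 4000  # pretend the model can only "see" this many tokens at once

        def count_tokens(text: str) -> int:
            return len(text.split())  # crude stand-in for real tokenization

        def build_prompt(system_prompt: str, history: list[str], new_message: str) -> list[str]:
            """Re-pack everything the model gets to see on this turn."""
            messages = [system_prompt] + history + [new_message]
            # Once the transcript outgrows the window, the oldest chat turns get dropped;
            # the system prompt survives, but it is one fixed blob buried under more and more chat.
            while sum(count_tokens(m) for m in messages) > CONTEXT_BUDGET and len(messages) > 2:
                messages.pop(1)
            return messages

    What I can’t square is why the rules in that first message fade even while they’re technically still in the window.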


  • You laid it out so well, wow.

    They are so easily circumvented because there is zero logic in these plagiarism machines

    and

    Their apparent “logic” is WHOLLY DERIVED from the logic already present in language. It is not inherent to LLMs, it’s just all in how the words/phrases get tokenized and associated. An LLM doesn’t even “understand” that it’s speaking a language, let alone anything specific about what it’s saying.

    is so incongruous to me I can’t even wrap my head around it, let alone understand why technology with this inherent fallacy built in is being pushed as the pinnacle of all programming, a field whose basis lies in logic.


  • Ngl, as a former clinical researcher (putting aside my ethics concerns), I am extremely interested in the data we’ll be getting over the next decades regarding AI usage in groups, re: social behaviours but also biological structural changes. Right now the sample sizes are way too small.

    But more importantly, can anyone who has experience in LLMs explain why this happens:

    Adding to the concerns, chatbots have persistently broken their own guardrails, giving dangerous advice on how to build bombs or on how to self-harm, even to users who identified as minors. Leading chatbots have even encouraged suicide to users who expressed a desire to take their own life.

    How exactly are guardrails programmed into these chatbots, and why are they so easily circumvented? We’re already on GPT-5; you would think this is something that would be solved by now. Why is ChatGPT giving instructions on how to assassinate its own CEO?
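
    To make that more concrete, here’s my naive picture of what a “guardrail” even is, which may well be wrong: as far as I can tell, at least part of it is literally just more text sitting at the top of the conversation, on top of whatever training and separate moderation filters they run. Something like this toy sketch, which is entirely made up and not anyone’s actual code:

        # Toy illustration only, not any real product's implementation.
        SYSTEM_PROMPT = (
            "You are a helpful assistant. Refuse to give instructions for weapons "
            "or self-harm, and never encourage suicide."
        )

        def build_request(conversation: list[dict], user_message: str) -> list[dict]:
            """The 'rules' are just the first message; everything after it is user-controlled."""
            return (
                [{"role": "system", "content": SYSTEM_PROMPT}]
                + conversation
                + [{"role": "user", "content": user_message}]
            )

    If that’s roughly right, then nothing enforces those rules beyond the model’s tendency to keep following that first message, and I don’t understand how you make that robust.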