• ricecake@sh.itjust.works · 7 days ago

    The name and presentation of that site have a veneer of legitimacy, but it really doesn’t seem credible.

    I warned about this for the past 3 years. The WHO wants universal mental health care and to drug at least a billion of us.

    Do Viruses Exist?

    There’s also a lot of general antivax stuff.

    Now, sharing a lot of… questionable articles… doesn’t make the article in question invalid. It does, however, call into extreme doubt any editorial context the site might be adding.

    https://arxiv.org/pdf/2506.08872

    This is the actual study being referenced. Its conclusions are significantly less severe than this article presents them as, while still conveying “LLMs are not generally the best tool for facilitating education”.

    trade-off highlights an important educational concern: AI tools, while valuable for supporting performance, may unintentionally hinder deep cognitive processing, retention, and authentic engagement with written material. If users rely heavily on AI tools, they may achieve superficial fluency but fail to internalize the knowledge or feel a sense of ownership over it.

    from an educational standpoint, these results suggest that strategic timing of AI tool introduction following initial self-driven effort may enhance engagement and neural integration. The corresponding EEG markers indicate this may be a more neurocognitively optimal sequence than consistent AI tool usage from the outset

    Ultimately, this isn’t saying AI tools cause brain damage or make you stupid. It’s saying that learning via LLM often causes worse retention of the information being learned. It also says that search engines and LLMs can remove certain types of cognitive load that are not conducive to retention, making learning easier and faster in some cases where engagement can be kept high.

    It’s important to be clear and honest about what a study is saying, even if it’s not as unequivocally negative as the venue might appreciate.

    • Optional@lemmy.worldOP · 7 days ago

      It’s important to be clear and honest about what a study is saying, even if it’s not as unequivocally negative as the venue might appreciate.

      Of course. But if we’re talking about presenting nuance, I’d just briefly point to the generation of studies showing that exposure to television reduced cognitive abilities, all of them full of nuance. Every one of those studies was ignored, more appeared showing television advertising had no effect on people (how did those studies get funded, I wonder; well, anyway), nothing happened, and here we are in Libertarian paradise.

      AI is much more affecting, and its adoption isn’t being “offered”, it’s being mandated. I think we can dispense with some of the nuance in headlines and leave that to the researchers looking at the raw data.

      • ricecake@sh.itjust.works · 7 days ago

        Nah, I don’t think we can. You may be okay with hyperbolic lies from an antivax quackery website, but I’m not.

        I think our use of LLMs is overblown and rife with issues, but I don’t think the answer to that is to wrap your concerns in so much obvious bullshit that anyone who takes even a cursory glance will see that it’s bunk. All you do is convey “people who think LLMs and generative AI are worrisome are full of shit”.

        AI is much more affecting

        Gee, if only there were some way to find information that validates those claims and be confident that people haven’t labeled them grossly incorrectly…

        Why are you talking about TV, as an aside? People doing research poorly or ignoring research in the past is irrelevant to whether we should lie to people now.

        • Optional@lemmy.worldOP · 7 days ago

          Why are you talking about TV, as an aside? People doing research poorly or ignoring research in the past is irrelevant to whether we should lie to people now.

          First of all, it’s as relevant as anything can be. Just say you don’t know anything about it. Secondly, who’s lying?

          • ricecake@sh.itjust.works · 6 days ago

            The article you linked to. Most people would call “saying inaccurate things” a form of “lying”.

            Explain why it’s relevant. I get that you’re saying “they said TV was fine and it caused problems”. I don’t see how that’s relevant to “we should say things that aren’t true about AI”.