• rasbora@lemm.ee · 15 hours ago

    Yeah, from the article:

    Even sycophancy itself has been a problem in AI for “a long time,” says Nate Sharadin, a fellow at the Center for AI Safety, since the human feedback used to fine-tune AI’s responses can encourage answers that prioritize matching a user’s beliefs instead of facts. What’s likely happening with those experiencing ecstatic visions through ChatGPT and other models, he speculates, “is that people with existing tendencies toward experiencing various psychological issues,” including what might be recognized as grandiose delusions in a clinical sense, “now have an always-on, human-level conversational partner with whom to co-experience their delusions.”

    • A_norny_mousse@feddit.org · 13 hours ago

      So it’s essentially the same mechanism by which conspiracy nuts embolden each other, to the point that they completely disconnect from reality?

      • rasbora@lemm.ee · 12 hours ago

        That was my takeaway as well. With the added bonus of having your echo chamber tailor-made for you, all the agreeing voices tuned to your personality and saying exactly what you need to hear to maximize the effect.

        It’s eerie. A propaganda machine operating at maximum efficiency. Goebbels would be jealous.