• cerulean_blue@lemmy.ml · 8 months ago

    Why? We all know LLMs are just copy-and-paste of what other people have said online. If it answers “yes” or “no”, it hasn’t formulated an opinion on the matter and it isn’t propaganda; it’s just parroting whatever it’s been trained on, which could be anything and is guaranteed to upset someone with either answer.

    • TheObviousSolution@lemm.ee · 8 months ago

      “which could be anything and is guaranteed to upset someone with either answer.”

      Funny how it only matters with certain answers.

      The answer to “Why?” is that it makes clear the topic itself is being actively censored, which is exactly the possibility the original comment wanted to dismiss. But I can’t force people to see what they don’t want to see.

      “it’s just parroting whatever it’s been trained on”

      If that’s your take on training LLMs, then I hope you aren’t involved in training them. A lot more effort goes into it, including making sure the model isn’t just “parroting” its training data. Post-processing that strips out answers about particular topics is another thing entirely, and that’s what’s happening here (a rough sketch of what that kind of post-filter could look like is below).
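
      To make the distinction concrete, here is a minimal, purely hypothetical sketch of a post-processing filter bolted on after generation. Every name in it (generate, BLOCKED_TOPICS, the canned refusal) is invented for illustration and doesn’t correspond to Gemini’s or any real product’s internals; the point is only that the refusal happens after the model has already answered, not as part of training.

      ```python
      # Hypothetical sketch: a topic filter applied AFTER the model generates text.
      # All names here are made up for illustration.

      BLOCKED_TOPICS = {"gaza"}          # assumed blocklist, not a real config
      REFUSAL = "I'm not able to help with that right now."

      def generate(prompt: str) -> str:
          """Stand-in for whatever the underlying model would have answered."""
          return f"(model's actual answer to: {prompt})"

      def filtered_generate(prompt: str) -> str:
          """Return the model's answer unless the prompt or answer touches a blocked topic."""
          answer = generate(prompt)
          text = (prompt + " " + answer).lower()
          if any(topic in text for topic in BLOCKED_TOPICS):
              # The model may have produced a perfectly reasonable answer;
              # the post-filter simply throws it away and returns a canned line.
              return REFUSAL
          return answer

      if __name__ == "__main__":
          print(filtered_generate("Does Gaza exist?"))    # canned refusal
          print(filtered_generate("Does France exist?"))  # model's answer passes through
      ```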

      Not even being able to answer whether Gaza exists is so lazy that it becomes dystopian. There are plenty of ways an LLM can handle controversial topics, and Google Gemini’s underlying model can as well; it was just censored before it got the chance to do so and be refined further. This is why other LLMs will win out over Google’s: Google doesn’t put in the effort. Good thing other LLMs don’t take your approach to things.