• glimse@lemmy.world · 4 days ago

    Not defending its use here, but the summaries are probably pretty accurate most of the time. LLMs are good at finding patterns and summarizing.

    Wonder what it says about my profile

      • glimse@lemmy.world · 4 days ago

        Undeniably yes? They aren’t perfect by any means, but pattern recognition is how they work lol

        That’s why I said “probably pretty accurate”

        • 0_o7@lemmy.dbzer0.com · 4 days ago

          The point is that the patterns they’re trained on are susceptible to bias; the problem isn’t that they’re bad at pattern recognition.

          This is a US corporation under a government very fond of violence against PoC and of genocide. The system may be showing one thing to mods while sending reports of “antisemitism” to Palantir for saying “stop the Gaza genocide”.

          There’s also a partnership with Google, so if you’re using an Android smartphone they can probably match even more “patterns”: your location, apps, contacts, and social media. Apple just loves sucking up to authoritarian governments, so they’ve got all the bases covered.

          Just stop using Reddit and US-based social media for anything political, or don’t use them at all.

    • loonsun@sh.itjust.works · 4 days ago (edited)

      They do, however, have many issues with doing so, the most important here being the need for prior knowledge to understand context. Deriving semantic meaning from text via the distributional hypothesis that underlies embeddings has a flaw: it can’t use context that isn’t directly present in the text to ascribe meaning.

      For example, if I wrote “God I love pizza” on Reddit every day, you wouldn’t be able to tell without prior context whether I’m an Italian food fiend or I have a cat named Pizza.
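      To make that concrete, here’s a toy Python sketch (all names and data made up; a bag-of-words count stands in for a real embedding model). The point is just that any representation built from the text alone assigns both readings of the post the identical vector:

      ```python
      # Toy sketch: under the distributional hypothesis, a post's meaning
      # is derived only from the words it contains. Two very different
      # real-world situations that produce the same text are therefore
      # indistinguishable to any text-only representation.
      from collections import Counter
      import math

      def bow_vector(text: str) -> Counter:
          """Stand-in 'embedding': a bag-of-words count vector built only from the text."""
          return Counter(text.lower().split())

      def cosine(a: Counter, b: Counter) -> float:
          """Cosine similarity between two sparse count vectors."""
          dot = sum(a[w] * b[w] for w in a)
          norm = math.sqrt(sum(v * v for v in a.values())) \
               * math.sqrt(sum(v * v for v in b.values()))
          return dot / norm if norm else 0.0

      # The same post, written by two users whose real-world contexts differ.
      food_fan_post = "God I love pizza"   # means the dish
      cat_owner_post = "God I love pizza"  # means a cat named Pizza

      print(cosine(bow_vector(food_fan_post), bow_vector(cat_owner_post)))  # -> 1.0

      # The vectors are identical, so a model summarizing the profile can
      # only guess at the out-of-text context (food fiend vs. cat named Pizza).
      ```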

      While LLMs are good pattern recognition machines, they don’t inherently create valid correlations and can’t understand causation.
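      A quick illustration of that last point (numbers invented): two completely unrelated quantities that merely trend together correlate perfectly, and nothing in the correlation itself reveals that neither causes the other:

      ```python
      # Toy sketch: two unrelated upward trends (values made up) correlate
      # perfectly. A pattern matcher sees r ~ 1.0; causation is invisible to it.
      import math

      pizza_posts_per_year = [30 + 1.0 * i for i in range(10)]   # made-up trend
      cat_adoptions        = [100 + 2.5 * i for i in range(10)]  # unrelated made-up trend

      def pearson(xs, ys):
          """Pearson correlation coefficient of two equal-length sequences."""
          n = len(xs)
          mx, my = sum(xs) / n, sum(ys) / n
          cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
          sy = math.sqrt(sum((y - my) ** 2 for y in ys))
          return cov / (sx * sy)

      print(pearson(pizza_posts_per_year, cat_adoptions))  # -> ~1.0 (spurious)
      ```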