• Prove_your_argument@piefed.social
    11 hours ago

    Really the only cost here is the impact on consumer attitudes towards Taco Bell and AI, because the video and news of this are circulating. One error is whatever, but public perception doesn’t typically involve much critical thinking.

    People are still irrationally terrified of all manner of technology even though science backs it up, like vaccines.

    • chonglibloodsport@lemmy.world
      8 hours ago

      What do you mean science backs it up? Science is finding massive social problems with technology all the time. Social media and its negative impacts on mental health (especially for teen and preteen girls), for example. Microplastics everywhere, for another. Climate change anyone?

      • Prove_your_argument@piefed.social
        6 hours ago

        One person commits suicide from LLMs: OH MY GOD BAN ALL LLMS REQUIRE IDS AND REGULATE THEM TO THE GROUND. (Please ignore all cases of suicide for therapy patients. Therapy is always effective and results in positive outcomes, right?)

        One person dies in a car crash with a semi-autonomous L2 car: OH MY GOD BAN ALL SELF DRIVING CARS PEOPLE ARE DYING LEFT AND RIGHT (ignore billions of miles per significant accident for the robot vs hundreds of thousands for humans.)

        Just two examples, and odds are you have your own strong opinion about one or the other. Maybe you feel like you’re losing control with self-driving cars, or maybe you feel like chatbots have encroached on your field of work because you’re a dev and we’ve had countless layoffs after over-hiring during COVID lockdowns. Either way, there are studies and there are kneejerk reactions, and in our world the latter is winning right now.

        • chonglibloodsport@lemmy.world
          6 hours ago

          Sorry dude, but cars are technology too, not just self driving cars. Every death due to cars is a technology death. You can’t escape the reality of tradeoffs.

      • Prove_your_argument@piefed.social
        8 hours ago

        I just don’t agree, man. It won’t do what most people want it to do; it doesn’t at all work like the science-fiction “AI” we classically think of. It’s great at recognizing patterns and helping build models for a specific use case, but when you try to do some really convoluted multilevel thing, it just doesn’t.

        We’ve been using ML for a ton of tools in tech for a long time. CrowdStrike, Darktrace, and Abnormal are all very successful in their domains thanks to ML (aka “AI”).

        OCR has been used for so long and has gotten really fucking good, thanks to ML.

        I don’t think we’re gonna replace humans for thinking, but we can definitely replace them for boring repetitive actions.

        • finitebanjo@lemmy.world
          6 hours ago

          We’re talking about different things. This article is about language models, and so is the discussion.

          If you ask a language model via a prompt to organize patterns and create models, you will get slop that small children would recognize as wrong. It’s garbage.