• LEM1689@lemmy.sdf.org · +7 · 13 hours ago

        A show called The Starlost.

        The Starlost is a Canadian-produced science fiction television series created by writer Harlan Ellison and broadcast in 1973 on CTV in Canada and syndicated to local stations in the United States. The show’s setting is a huge generational colony spacecraft called Earthship Ark, which following an unspecified accident has gone off course.

        https://en.m.wikipedia.org/wiki/The_Starlost

  • xxce2AAb@feddit.dk · +15 · 19 hours ago

    It’s like reading an article about a petrol refining company which, having prior experience with gasoline as a useful and profitable substance, decides to seek venture capital for the development of a petrol-based fire extinguisher. They obtain the funding - presumably because some people with money just want to see the world burn and/or because being rich and having brains are not necessarily strongly correlated - but after the product is developed, tests conclusively prove the project’s early detractors right: the result is surprisingly always more fire, not less. And they “don’t know how to fix it, while still adhering to the vision of a petrol-based fire-extinguisher”.

    • Scubus@sh.itjust.works · +1 · 7 hours ago

      Nah, you could definitely make one. Ensure the petrol is completely aerosolized, so that it burns completely and quickly. Now it just needs to be able to burn the oxygen out of a room faster than it can get in. Or it could use the burning petrol to generate compounds and CO2 to suffocate the fire. Get yourself basically a petrol-powered weedeater and replace the rope with some sort of heat dissipator. As it spins, it shoots the heat elsewhere, somewhere safer.

      • xxce2AAb@feddit.dk · +5 · edited · 5 hours ago

        Those are some interesting and creative suggestions. Now, I’m no weapons engineer, but I believe there’s a term for aerosolized gasoline when deployed to put out a fire, and that term is “thermobaric bomb”.

        Never mind that though, it’ll totally work: Not only is a building that no longer exists not a building on fire, but it’s guaranteed to never catch fire again. Problem permanently solved. If you’re in the market for a job, I’ve been told that Hellfire (“We may not put you out, but we’ll definitely put you down”) Inc. is hiring.

    • SpaceCowboy@lemmy.ca · +4 · 14 hours ago

      The “fight fire with fire” marketing campaign is getting a lot of engagement so we’re releasing the product anyway.

  • PattyMcB@lemmy.world · +11 · 18 hours ago

    As much as I despise all the hype around AI, it’s probably that hype that’s leading vulnerable people to these ends.

  • jaykrown@lemmy.world · +18/−8 · 19 hours ago

    I wonder if it has something to do with this:

    “users who turn to popular chatbots when exhibiting signs of severe crises risk […]”

    Blaming the chatbot doesn’t seem like the smartest perspective; the title is fucking bullshit.

    • Nougat@fedia.io · +15 · 18 hours ago

      There’s a lot of questionable things that people in crisis turn to. Intoxicants, religion, c/tenforward, fascism.

    • Steve@communick.news · +5 · 18 hours ago

      The title makes it sound like it’s all people.
      A better one might be “ChatGPT is failing to help people in crises, and many are dying”

  • womjunru@lemmy.cafe · +11/−23 · 20 hours ago

    So, it’s not ChatGPT, it’s all LLMs. And the people who go through this are using AI wrong. You cannot blame the tool because some people prompt it to make them crazy.

    • pageflight@lemmy.world · +36/−1 · 20 hours ago

      But you can blame the overly eager way it has been made available without guidance, restriction, or regulation. The same discussion applies to social media, or tobacco, or fossil fuels: the companies didn’t make anyone use it for self destruction, but they also didn’t take responsibility.

      • Asswardbackaddict@lemmy.world · +4 · edited · 18 hours ago

        First nuanced argument I’ve seen on this topic. Great point. Just like bottle manufacturers started the litterbug campaign. I think the problem with LLMs has to do with profit motive as well - specifically, broad data sets with conflicting shit, like the water bunny next to general relativity, made for broad appeal. AI gets a lot more useful when you curate it for a specific purpose. Like, I dunno, trying to influence elections or checking consistency between themes.

      • womjunru@lemmy.cafe · +7/−17 · 20 hours ago

        Kitchen knife manufacturers, razor blades, self-help books, Helter Skelter, the list of things that people can “use wrong” is endless.

        PEBCAK

              • womjunru@lemmy.cafe · +5/−4 · 20 hours ago

                I never used the word epidemic. I don’t believe the article uses the word epidemic.

                If we want to talk about things that are more damaging to people, let’s talk about social media. That is exponentially more damaging than AI.

        • svitvojimilioni@lemmy.dbzer0.com · +9/−1 · 20 hours ago

          Kitchen knives and razor blades are a different category, and the same goes for self-help books. An LLM is a completely different category, and there is no point in comparing a knife to an LLM except as relativization.

          • womjunru@lemmy.cafe · +5/−6 · 20 hours ago

            The tools are relative. Pick a tool. It can be used wrong. You are special pleading; that’s dogmatism and intellectual dishonesty.

            If you’re going to refuse entire categories of tools then we are down to comparing AI to AI, which is a pointless conversation and I want no part of it.

            • svitvojimilioni@lemmy.dbzer0.com · +2 · edited · 17 hours ago

              If you’re going to refuse entire categories of tools then we are down to comparing AI to AI, which is a pointless conversation and I want no part of it.

              The point is not to compare, but to analyze how AI affects us, the world around us, and society. By saying “it’s just a tool” or “knives can also be misused” you relativize the discussion, and that rhetoric just contributes to defending OpenAI and other big tech companies, even helping them banalize the issue.

              From what I’ve witnessed, people lose agency, get and believe fake info, everything becomes slop, and people lose jobs and get replaced by more workers who are paid less, etc.

              EDIT: And no, it’s not the same as a knife or a razor or a gun, and it never will be.

              • womjunru@lemmy.cafe · +2 · 16 hours ago

                You could say the same about social media and the entire internet. Would you choose to regulate that?

                I recall in the mid 90s a group of people on the street corner protesting AOL (America OnLine) and saying the internet should be stopped.

                They may have had a point, but the technology wasn’t to blame for the shit that it’s used for.

                The vague way you talk about AI makes me think that you don’t know much about it. What do you use AI for? Is it ChatGPT?

            • Benedict_Espinosa@lemmy.world · +3 · 19 hours ago

              It’s not about banning or refusing AI tools, it’s about making them as safe as possible and regulating their usage.

              Your argument is the equivalent of “guns don’t kill people”, or of blaming drivers for accidents caused by Tesla’s so-called “full self-driving”, which switches itself off right before the crash, leaving the driver responsible as the one who should have paid more attention, even if there was no time left for them to react.

              • womjunru@lemmy.cafe · +4/−1 · 19 hours ago

                So what kind of regulations would be put in place to prevent people from using AI to feed their mania?

                I’m open to the idea, but I think it’s such a broad concept at this point that implementation and regulation would be impossible.

                If you want to go down the guns don’t kill people assumption, fine: social media kills more people and does more damage and should be shut down long before AI. 🤷‍♂️

                • Benedict_Espinosa@lemmy.world · +3 · edited · 19 hours ago

                  Probably the same kind of guardrails that they already have - teaching LLMs to recognise patterns of potentially harmful behaviour. There’s nothing impossible in that. Shutting LLMs down altogether is a straw man and extreme example fallacy, when the discussion is about regulation and guardrails.

                  Discussing the damage LLMs do does not, of course, in any way negate the damage that social media does. These are two different conversations. In the case of social media there’s probably government regulation needed, as it’s clear by now that the companies won’t regulate themselves.

        • CarbonIceDragon@pawb.social · +5 · 20 hours ago

          It isn’t exactly unheard of for regulations to be placed on the design, sale, or labeling of things because of misuse, to be fair. Even assuming the fault for using a tool wrong lies with the user, assigning blame does not actually do anything about the problem. If enough people consistently misuse a thing in a certain way, there can be a general social benefit to trying to stop that type of misuse even if the people misusing it “are the problem”, and since those people clearly aren’t going to start using the thing properly just because someone pointed the finger of blame at them, addressing the problem is likely to take some kind of design or systemic change that makes it more difficult to use the tool in that way.

      • womjunru@lemmy.cafe · +3/−5 · 19 hours ago

        It will give you whatever you want. Just like social media and google searches. The information exists.

        When you tell it to give you information in a way you want to hear it, you’re just talking to yourself at that point.

        People should probably avoid giving it prompts that fuel their mania. However, since mania is totally subjective and the topics range broadly, what does an actual regulation look like?

        What do you use AI for?

        • FartMaster69@lemmy.dbzer0.com · +2 · edited · 15 hours ago

          Yeah, because someone in a manic state can definitely be trusted to carefully analyze their prompts to minimize bias.

          What do you use AI for?

          I don’t.

          • womjunru@lemmy.cafe · +2/−2 · 14 hours ago

            So… you have no clue at all what any of this is. Got it. I’ll bet you blame video games for violence.

              • womjunru@lemmy.cafe · +2/−3 · 14 hours ago

                Tell me more about how you’ve never ever used it and how everything you’re saying is influenced by the media and other anti-AI users’ comments.

                Let’s see what happens when I google UFOs or chemtrails or the deep state or anti-vaccine claims, etc. How much user-created delusional content will I find?

                • FartMaster69@lemmy.dbzer0.com · +1 · 7 hours ago

                  Lmao, I never said I’ve never touched AI; I just don’t use it for anything because it doesn’t do anything useful for me.

                  Yes, delusional content on Google is also a problem. But you have to understand how delusional content tailored uniquely to the individual fed to them by a sycophantic chatbot is several times more dangerous.