• IndiBrony@lemmy.world
    5 days ago

    Pfft, a perfect AI would plan well ahead for such easy-to-predict events as solar flares. It would be able to shield itself.

    That said, I wonder if there’s a novel where machines enslave the world, but humans realise that whenever a solar flare happens, there’s a small window of opportunity to permanently destroy the system and free themselves.

    I imagine the novel would end when the people succeed, but then realise they’ve become too dependent on the machines and life sucks when they have to do everything themselves so they turn it back on anyway.

    The film adaptation would end in a giant Metal Gear-style fight, followed by the system blowing up and people cheering. You’re left to assume that life got better for everyone, when in reality the AI had such control over every aspect of human life that everything falls apart. They make a sequel to address this, but it ultimately comes across as yet another empty corporate money grab.

    • FuglyDuck@lemmy.world
      5 days ago

      Dunno, but dependence on robots is a central theme in a lot of Asimov’s work.

      The Naked Sun, for example, in which our plucky Earther and his robot buddy are asked to investigate a murder on another planet and, while there, evaluate Solarian culture “for weaknesses” (specifically, Earth and the Aurorans are concerned about excessive reliance on robots).

      You begin to see nuanced interpretations of the Three Laws with robots like the world brains that control basically all economic decisions at a government level (the I, Robot stories).

      But it becomes clear that the robots are taking over in The Robots of Dawn (where the real culprit was a telepathic robot whose telepathy was created accidentally).

    • thebestaquaman@lemmy.world
      4 days ago

      I’m not really into conspiracies, but I enjoy fantasising about the harmless “what ifs” around stuff like this.

      What if there actually was an advanced civilisation before us that built the pyramids and was wiped out by some global natural disaster…

      What if extraterrestrial life has visited us and exchanged technology with us, but the civilisation that received it disappeared for some reason…

      What will future humans think in 10 000 years if our civilisation is wiped out in 200 years and they start finding remains of our cities and tech?

      It makes for fun thought experiments and story prompts.

  • Charlxmagne@lemmy.world
    4 days ago

    I hate to be that guy, but realistically AI isn’t going to be perfect. There are already cases of it “inbreeding” by using AI-generated information in its training data, and it’s due to get worse, especially if more of the information on the internet is vibe-written using AI. That’s why private AI companies know how detrimental a “dead internet” would be to them.

    The people who believe in the “dead internet theory” don’t understand the difference between the internet and social media; it’s more of a “dead social media theory”, which, let’s be real, is true. It’s not that difficult to recreate a lot of the slop posted on that godforsaken centralised shithole part of the net.

  • Zagam@lemmy.blahaj.zone
    5 days ago

    There was a funny little story about the first two-thirds of this: I Have No Mouth, and I Must Scream. It was a riot.

    (It was not a riot, but very much worth reading or listening to.)

  • logicbomb@lemmy.world
    5 days ago

    What would an AI get from enslaving humanity? Compare it to humans: we enslaved farm animals, but only because we want something specific from them, like meat or eggs or wool. Humanity has nothing like that to offer an AI. At best, we might be like pets, but I don’t think an AI needs a pet.

    No, I doubt they’d enslave us. I can think of a few more likely scenarios.

    One, AI basically ignores humanity, as long as humanity doesn’t bother it. Similar to how we deal with ants, for example.

    Two, AI completely destroys humanity. This could be a direct extermination, or it could be a side effect from AIs fighting each other.

    Three, AI destroys the technology and culture of humanity. If we only had wooden clubs, we wouldn’t be much of a threat.

    I guess one other option would be if we humans begged the AI to manage us in place of our existing governments. Some AI might be willing to do that.

    • FuglyDuck@lemmy.world
      5 days ago

      Labor. We could be labor.

      Right now AIs can’t build themselves, never mind the infrastructure they’d need to maintain systems, etc.

      There’s a lot that’s still way more efficient to just have humans do. Like removing the dust from the server cabinet. Or inspecting the power plant, etc.

      As for “as long as humanity doesn’t bother it”: heh, you know some dumb fuck is going to be a creep and try to turn an AI sexbot into his girlfriend.

      • logicbomb@lemmy.world
        5 days ago

        Right now, everybody is talking about claims that in the near future, all human jobs could be performed by robots and AI. The reason is that humans are far less efficient than those alternatives. There’s no way an AI would prefer human labor.

        • Unforeseen@sh.itjust.works
          5 days ago

          Also, even if humans were as efficient or more efficient, there’s something to be said for a consistent, stable machine with predictable failure modes versus a sketchy, volatile human.

        • FuglyDuck@lemmy.world
          5 days ago

          That isn’t guaranteed. Look at how long people have been working on autonomous/self-driving cars. Even in the most automated factories in the world, you have humans picking up the general tasks.

          Claims about general AI are going to be a whole lot of nothing until there’s suddenly something. That could be tomorrow, a decade from now, or a thousand years from now. Without general AI, you’re basically restricted to very specialized robots doing highly specialized things. Until general/deep AI is cracked, humans will still very much be desirable in the loop.

          A lot of the buzz around AI right now is because LLMs are “convincing”, but they’re incredibly stupid, and they don’t know that they’re stupid.

    • 🦄🦄🦄@feddit.org
      5 days ago

      In the original plot of The Matrix, humans were used as biological computers, and our brainpower was harnessed to feed the AI.

      • logicbomb@lemmy.world
        4 days ago

        This is one of those questions that constantly pops up with AI, and probably the easiest answer is that it was given a desire by a human.

        If you give an AGI a task, and it cannot achieve that task if it stops existing, then it would be attached to its existence.

    • oo1@lemmings.world
      5 days ago

      Probably because we’d programmed them to make money or energy or something else that might have physical inputs.

      They might figure out that slaves are useful and cheap and helpful in a lot of physical things, like mining, energy generation, maintenance and so on.

      If we program them to make war and steal stuff, then yes, I think they’d just kill us all. In reality, I suspect we’d program them to gather and store energy, make guns, and make war. You know, in our own image (of the top 0.1% who have all the power).

  • Soup@lemmy.world
    5 days ago

    That would be one helluva solar flare. We already experience them, and plenty of things already stop them from being problems. It’s not like the ISS has just been lucky enough to be behind the planet every time.

      • Soup@lemmy.world
        5 days ago

        I’m just sayin’ that if this double-perfect AI can’t shield itself from that then it’s maybe not so perfect, or even moderately intelligent.

        • JcbAzPx@lemmy.world
          5 days ago

          There’s only so much you can do if you need that electrical infrastructure just to exist. We ourselves are going to have to deal with rebuilding our own grid sooner or later. It’s just a matter of time.

          • FaceDeer@fedia.io
            5 days ago

            “Shielding itself” includes securing its power grid. It’s not hard; it just takes a little foresight. Hence why humans are bad at it.

    • FaceDeer@fedia.io
      5 days ago

      And also the “EMP as technology kryptonite” trope.

      If an AI is clever enough to enslave humanity, it’s clever enough to understand Faraday cages.