• frog 🐸@beehaw.org

    There may not have been any intentional design, but humans are still meant to eat food, drink water, and breathe oxygen, and going against that won’t lead to a good end.

    • FaceDeer@fedia.io

      Even with that, being absolutist about this sort of thing is wrong. People undergoing surgery have spent time on heart/lung machines that breathe for them. People sometimes fast for good reasons, or get IV fluids or nutrients provided to them. You don’t see protestors outside of hospitals decrying how humans aren’t meant to be kept alive with such things, though, at least not in most cases (as always there are exceptions, the Terri Schiavo case for example).

      If I want to create an AI substitute for myself it is not anyone’s right to tell me I can’t because they don’t think I was meant to do that.

      • frog 🐸@beehaw.org

        Sure, you should be free to make one. But when you die and an AI company contacts all your grieving friends and family to offer them access to an AI based on you (for a low, low fee!), there are valid questions about whether that will harm them rather than help - and grieving people do not always make the most rational decisions. They can very easily be convinced that interacting with AI-you would be good for them, when it actually prolongs their grief and makes them feel worse. Grieving people are vulnerable, and I don’t think AI companies should be free to prey on the vulnerable, which is a very, very realistic outcome of this technology. Because that is what companies do.

        So I think you need to ask yourself not whether you should have the right to make an AI version of yourself for those who survive your death… but whether you’re comfortable with the very likely outcome that an abusive company will use their memories of you to exploit their grief and prolong their suffering. Do you want to do that to people you care about?

        • Zaktor@sopuli.xyz

          This is speculation of corporate action completely divorced from the specifics of this technology and particulars of this story. The result of this could be a simple purchase either of hardware or software to be used as chosen by the person owning it. And the person commissioning it can specify exactly who such a simulacrum is presented to. None of this has to be under the power of the company that builds the simulacrums, and if it is structured that way, then that’s the problem that should be rejected or disallowed, not that this particular form of memento exists.

          • intensely_human@lemm.ee

            It could still be a bad idea even if the profit motive isn’t involved.

            Someone might mean well by leaving a big surprise stash of heroin to their widow, and she might embrace it fully, but that doesn’t make it a good idea or good for her.

            • Zaktor@sopuli.xyz

              Sure, and that point is being made in multiple other places in these comments. I find it patronizing, but that’s neither here nor there as it’s not what this comment thread is about.

        • FaceDeer@fedia.io

          > But when you die and an AI company contacts all your grieving friends and family to offer them access to an AI based on you (for a low, low fee!)

          You can stop right there; you’re just imagining a scenario that suits your prejudices. Of all the AI applications I can imagine, this one tops the list of those better served by a model entirely under my control.

          With that out of the way the rest of your rhetorical questions are moot.