• Perspectivist@feddit.uk · 13 hours ago

    The path to AGI seems inevitable - not because it’s around the corner, but because of the nature of technological progress itself. Unless one of two things stops us, we’ll get there eventually:

    1. Either there’s something fundamentally unique about how the biological brain processes information - something that cannot, even in principle, be replicated in silicon,

    2. Or we wipe ourselves out before we get the chance.

    Barring those, the outcome is just a matter of time. This argument makes no claim about timelines - only trajectory. Even if we stopped AI research for a thousand years, it's hard to imagine a future where we wouldn't eventually resume it. That's what humans do: improve our technology.

    The article points to cloning as a counterexample, but that isn't a technological dead end - it's a moral boundary. If one thinks we'll hold that line forever, I'd call that naïve. When it comes to AGI, there's no moral firewall strong enough to hold back the drive toward it. Not permanently.

  • rottingleaf@lemmy.world · 8 hours ago

      something that cannot, even in principle, be replicated in silicon

      As if silicon were the only technology we have to build computers.

    • Perspectivist@feddit.uk · 7 hours ago

        Did you genuinely not understand the point I was making, or are you just being pedantic? “Silicon” obviously refers to current computing substrates, not a literal constraint on all future hardware. If you’d prefer I rewrite it as “in non-biological substrates,” I’m happy to oblige - but I have a feeling you already knew that.

          • Perspectivist@feddit.uk · 5 hours ago

            I haven’t claimed that it is. The point is, the only two plausible scenarios I can think of where we don’t eventually reach AGI are: either we destroy ourselves before we get there, or there’s something fundamentally mysterious about the biological computer that is the human brain - something that allows it to process information in a way we simply can’t replicate any other way.

            I don’t think that’s the case, since both the brain and computers are made of matter, and matter obeys the laws of physics. But it’s at least conceivable that there could be more to it.

            • rottingleaf@lemmy.world · 5 hours ago

              I personally think the additional component modern approaches miss (suppose it's energy) is the sheer amount of entropy a human brain receives - plenty of many-times-duplicated sensory signals with pseudo-random fluctuations. I don't know how one can use lots of entropy to replace lots of computation (OK, I know what the Monte Carlo method is, just not how it applies to AI), but superficially this seems to be the path that will be taken at some point.
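              As a toy illustration of the trade-off mentioned above - this is just the standard Monte Carlo idea, not a claim about how it would apply to AI - here is a minimal sketch that estimates π from random samples rather than computing it analytically; the function name and sample count are arbitrary choices for the example:

              ```python
              import random

              def estimate_pi(n_samples: int, seed: int = 42) -> float:
                  """Estimate pi by sampling points uniformly in the unit square
                  and counting the fraction that land inside the quarter circle."""
                  rng = random.Random(seed)  # seeded for reproducibility
                  inside = 0
                  for _ in range(n_samples):
                      x, y = rng.random(), rng.random()
                      if x * x + y * y <= 1.0:
                          inside += 1
                  # Area of quarter circle / area of square = pi/4,
                  # so the hit fraction times 4 approximates pi.
                  return 4.0 * inside / n_samples

              print(estimate_pi(1_000_000))
              ```

              More samples (more "entropy") buy a better estimate without any closed-form computation, which is the sense in which randomness can substitute for deterministic work.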

              On your point - I agree.

              I'd say we might reach AGI soon enough, but it will be impractical to use compared to a human.

              Matching that efficiency is something very far away, because the human brain has undergone, so to speak, an optimization/compression fueled by the energy of evolution since the beginning of life on Earth.