Look, I don’t believe that AGI is possible, or at least not within the next few decades. But I was wondering: if one came to be, how could we differentiate it from a Large Language Model (LLM) that has read every book ever written by humans?

Such an LLM would have “knowledge” of almost every human emotion and moral framework, and could even extrapolate from the past when the situations are slightly changed. It would also be backed by pretty powerful infrastructure, so hallucinations might be eliminated and it could handle several contexts at once.

One might say it also has to have emotions to be considered an AGI, and that’s a valid point. But an LLM is capable of putting on a facade, at least in a conversation, so we might have a hard time telling whether the emotions are genuine or just text churned out by rules and algorithms.

In a purely TEXTUAL context, I feel it would be hard to tell them apart. What are your thoughts on this? BTW this is a shower-thought, so I might be wrong.

  • hansolo@lemmy.today · 21 hours ago

    Your premise is a bit flawed here, though I appreciate where you’re coming from with this.

    I would say it’s probably true that no human has read every book written by humans. And while reading about other people’s experiences is insightful, any person of any intelligence can go through a full and rich life, with lots of introspection and cosmic-level thought, without ever reading about how other people experience the same things. If two young kids were abandoned on an island and grew up there into adults, would they need the entirety of human knowledge to be able to experience love or sadness or envy or joy? Of course not. With or without books makes no difference.

    Knowledge is not intelligence in this sense. An LLM is no more able to understand the data it’s trained on than Excel understands the numbers in a spreadsheet. If I ask an LLM to interpret Moby Dick for me, it will pick the statistically most likely words to put next to each other, based on all the reviews of Moby Dick it was trained on. If you trained an LLM on books but on no critiques of books, it could only summarize the book, because it has never seen what a critique looks like and so can’t put those words in the right order.
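
    To make the “statistically most likely next word” point concrete, here is a toy sketch. This is my own illustration with a made-up two-sentence corpus, nowhere near how a real LLM is built, but it shows text generation as pure word-frequency bookkeeping with zero understanding:

    ```python
    from collections import Counter, defaultdict

    # Tiny made-up "training corpus" (hypothetical, for illustration only).
    corpus = (
        "call me ishmael . moby dick is a whale . "
        "the whale is white . call me anytime ."
    ).split()

    # Count how often each word follows each other word (a bigram table).
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def continue_text(word, length=6):
        # Greedily extend `word` with its most frequent successor each step.
        out = [word]
        for _ in range(length):
            if word not in following:
                break
            word = following[word].most_common(1)[0][0]
            out.append(word)
        return " ".join(out)

    print(continue_text("call"))  # -> "call me ishmael . moby dick is"
    ```

    The table knows nothing about whales or Ishmael; it only records which word tends to follow which. Real LLMs replace the counting with learned probabilities over long contexts, but the “pick likely next words” framing is the same.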

    Also, AGI is not well-defined, but emotions are nowhere in the equation. AGI is about human-level or better intelligence at cognitive tasks, like math, writing, etc. It’s basically taking several narrow AI systems, each specialized in one task, and combining them into a single system. AGI is not “the singularity” or whatever. It’s a commercially viable system that makes money for wealthy people.

    • Brutticus@midwest.social · 21 hours ago

      This is interesting. I’m not supremely well informed on these issues, but I always assumed so-called “AGI” would have emotions, or at least would be “alive.” Is there a term for such a robot? Most fictional robots have intelligence and emotions to the point where we loop back around to it being unethical to exploit them.