Look, I don’t believe AGI is possible, or at least not within the next few decades. But I was thinking: if one came to be, how could we differentiate it from a Large Language Model (LLM) that has read every book ever written by humans?

Such an LLM would have “knowledge” of almost every human emotion and moral, and could even infer from the past when situations are slightly changed. It would also be backed by pretty powerful infrastructure, so hallucinations might be eliminated and it could handle different contexts at the same time.

One might say it also has to have emotions to be considered an AGI, and that’s a valid point. But an LLM is capable of putting on a facade, at least in a conversation. So we might have a hard time telling whether the emotions are genuine or just text churned out by rules and algorithms.

In a purely TEXTUAL context, I feel it would be hard to tell them apart. What are your thoughts on this? BTW this is a shower-thought, so I might be wrong.

  • azimir@lemmy.ml · 20 hours ago

    You’re sitting on the Chinese Room problem, and to some extent the basis of the Turing Test (mostly the Chinese Room).

    https://en.wikipedia.org/wiki/Chinese_room

    Knowledge != Intelligence

    Regurgitating things you’ve read is only interpolative: you can only reply with things you’ve seen before, never new things. Intelligence is extrapolative: you can generate new ideas or material beyond what has been done before.

    So far, the LLM world remains interpolative, even if it reads everything created by others before it.

    • ☆ Yσɠƚԋσʂ ☆@lemmy.ml · 19 hours ago (edited)

      The Chinese room thought experiment is deeply idiotic; it’s frankly incredible to me that people discuss it seriously. Hofstadter does a great teardown of it in I Am a Strange Loop.