Look, I don’t believe that an AGI is possible, or at least not within the next few decades. But I was wondering: if one came to be, how could we differentiate it from a Large Language Model (LLM) that has read every book ever written by humans?

Such an LLM would have the “knowledge” of almost every human emotion and moral, and could even infer from past situations when they are slightly changed. Also, such an LLM would be backed by pretty powerful infrastructure, so hallucinations might be eliminated and it could handle different contexts at the same time.

One might say it also has to have emotions to be considered an AGI, and that’s a valid point. But an LLM is capable of putting on a facade, at least in a conversation. So we might have a hard time telling whether the emotions are genuine or just text churned out by rules and algorithms.

In a pure TEXTUAL context, I feel it would be hard to tell them apart. What are your thoughts on this? BTW this is a shower-thought, so I might be wrong.

  • FriendOfDeSoto@startrek.website · 22 hours ago

    It remains to be seen if reading about all the emotions and morals is the same as feeling them, acting according to them, or just being able to identify them in us meatbags. So even with the sum total of human knowledge at their disposal, this may not matter. We already don’t know how these models actually organize their “knowledge.” We can feed them and we can correct bad outcomes. Beyond that it is a black box, at least right now. So if the spark from a statistical spellchecker (current so-called AI) to actual AI (or AGI) happens, we’ll probably not even notice until it writes us a literary classic of Shakespearean proportions or until it turns us into fuel for its paperclip factory. That’s my assessment anyway.