I see a huge amount of confusion around terminology in discussions about Artificial Intelligence, so here’s my quick attempt to clear some of it up.
Artificial Intelligence is the broadest possible category. It includes everything from the chess opponent on the Atari to hypothetical superintelligent systems piloting spaceships in sci-fi. Both are forms of artificial intelligence - but drastically different.
That chess engine is an example of narrow AI: it may even be superhuman at chess, but it can’t do anything else. In contrast, sci-fi systems like HAL 9000, JARVIS, Ava, Mother, Samantha, Skynet, or GERTY are imagined as generally intelligent - that is, capable of performing a wide range of cognitive tasks across domains. This is called Artificial General Intelligence (AGI).
One common misconception I keep running into is the claim that Large Language Models (LLMs) like ChatGPT are “not AI” or “not intelligent.” That’s simply false. The issue here is mostly about mismatched expectations. LLMs are not generally intelligent - but they are a form of narrow AI. They’re trained to do one thing very well: generate natural-sounding text based on patterns in language. And they do that with remarkable fluency.
What they’re not designed to do is give factual answers. The fact that they often appear to is a side effect - a reflection of how much factual information was present in their training data. But fundamentally, they’re not knowledge databases - they’re statistical pattern machines trained to continue a given prompt with plausible text.
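To make that concrete, here is a deliberately tiny sketch of “continue the prompt with statistically plausible text”. It is a toy word-bigram model, not how real LLMs work internally (they are neural networks over subword tokens trained on enormous corpora), and the corpus and function names are made up purely for illustration:

```python
# Toy sketch: "continue a prompt with plausible text" via word-bigram statistics.
# Real LLMs are neural networks over subword tokens trained on enormous corpora;
# this only illustrates the underlying idea of next-token prediction.
import random
from collections import Counter, defaultdict

# A deliberately tiny "training set" (purely illustrative).
corpus = (
    "the capital of france is paris . "
    "the capital of italy is rome . "
    "the capital of france is known for the eiffel tower ."
).split()

# Count which word tends to follow which word.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def continue_prompt(prompt, length=4):
    """Extend the prompt by sampling each next word from the observed statistics."""
    words = prompt.split()
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        choices, counts = zip(*candidates.items())
        words.append(random.choices(choices, weights=counts)[0])
    return " ".join(words)

print(continue_prompt("the capital of"))
# Might print "the capital of france is paris ." - or just as easily
# "the capital of france is rome .", because nothing in the model "knows" facts;
# it only continues the prompt with statistically plausible words.
```

The only point of the toy example is the last comment: producing text that looks like a factual answer and actually being a knowledge base are two very different things.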
AGI itself has been made up as a marketing term by LLM companies.
Let’s not forget that the official definition of AGI is that it can make 200 billion dollars.
The term AGI was first used in 1997 by Mark Avrum Gubrud in an article titled ‘Nanotechnology and International Security’.
That is true, but the term was only used narrowly and incoherently by scientists. In fact, that one paper used it, and it took ten years before it was picked up again - and then only by a few academic papers. Even the academic community preferred terms like “strong AI” before the current hype.
AGI was not a term that referred to a settled concept: it had no strict, agreed meaning attached to it, and every article that mentioned it had to explain it. It was brought to that level by Google/DeepMind employees two years ago, and once it became a corporate target for OpenAI/Microsoft, every second Medium article started buzzwording around with it.