I see a huge amount of confusion around terminology in discussions about Artificial Intelligence, so here’s my quick attempt to clear some of it up.
Artificial Intelligence is the broadest possible category. It includes everything from the chess opponent on the Atari to hypothetical superintelligent systems piloting spaceships in sci-fi. Both are forms of artificial intelligence - but drastically different.
That chess engine is an example of narrow AI: it may even be superhuman at chess, but it can’t do anything else. In contrast, the sci-fi systems like HAL 9000, JARVIS, Ava, Mother, Samantha, Skynet, or GERTY are imagined as generally intelligent - that is, capable of performing a wide range of cognitive tasks across domains. This is called Artificial General Intelligence (AGI).
One common misconception I keep running into is the claim that Large Language Models (LLMs) like ChatGPT are “not AI” or “not intelligent.” That’s simply false. The issue here is mostly about mismatched expectations. LLMs are not generally intelligent - but they are a form of narrow AI. They’re trained to do one thing very well: generate natural-sounding text based on patterns in language. And they do that with remarkable fluency.
What they’re not designed to do is give factual answers. The fact that they often seem to is a side effect - a reflection of how much factual information was present in their training data. But fundamentally, they’re not knowledge databases - they’re statistical pattern machines trained to continue a given prompt with plausible text.
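To make “statistical pattern machine” concrete, here’s a minimal sketch of the idea scaled down to absurdity: a bigram model over a made-up toy corpus (my example, not how any real LLM is built) that continues a prompt by sampling whichever word tends to follow the current one. Real LLMs use deep neural networks trained on vast corpora, but the objective is the same - produce a plausible continuation, not a true one.

```python
import random
from collections import defaultdict

# Toy corpus - purely illustrative, standing in for an LLM's training data.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
).split()

# Count which word follows which - the "patterns in language".
following = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    following[current].append(nxt)

def continue_prompt(prompt: str, length: int = 8) -> str:
    """Continue a prompt with statistically plausible (not necessarily true) text."""
    words = prompt.split()
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        # Sampling from the list reproduces the observed frequencies,
        # plus a little randomness to vary the results.
        words.append(random.choice(candidates))
    return " ".join(words)

print(continue_prompt("the cat"))  # e.g. "the cat sat on the mat . the dog"
```

Nothing in there knows what a cat is or whether it actually sat anywhere - it only knows what words plausibly come next.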
So… not intelligent - in the sense that when someone without much knowledge of computers or LLMs hears “LLMs are intelligent” and then sees “an LLM tells me X”, they are likely to believe that X is true, and not without reason. That is exactly my main objection to using intelligence-related terms at all. Among knowledgeable people who do know the difference - sure, I’m all for it. But first we need to cut the crap of advertising and hype.
“Intelligent” is itself a highly unspecific term which covers quite a lot of different things.
What you’re thinking of is “reasoning” or “rationalizing”, and LLMs can’t do that at all.
However, what LLMs (and most Machine Learning implementations) can do is “pattern matching”, which is also an element of intelligence: it’s what gives us and most animals the ability to recognize things such as food or predators without actually thinking about it. You just see, say, a cat, and you know without thinking that it’s a cat, even though cats don’t all look the same. In humans, pattern matching is also what’s behind intuition.
PS: Ever since they were invented over three decades ago, Neural Networks and other Machine Learning technologies have been very good at finding patterns in their training data - often better than humans.
The evolution of the technology has added the capability of generating content that follows those patterns, giving us things like LLMs and image generation.
What LLMs have made clear, however, is that patterns alone (plus a little randomness to vary the results) are not enough to generate useful textual content beyond entertainment - and that’s exactly because LLMs can’t rationalize. The original pattern matching, without the content generation, is still widely and very successfully used in everything from OCR to image recognition, as sketched below.
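Here’s a minimal sketch of that pure pattern matching - an OCR-style task with no generation and no reasoning involved. It assumes scikit-learn is installed; the dataset and the choice of a k-nearest-neighbours classifier are mine for illustration, not anyone’s production setup.

```python
# Classifying handwritten digits: the classic OCR-style pattern-matching task.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

digits = load_digits()  # 8x8 grayscale images of the digits 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

# k-nearest neighbours: label a new image by the labels of the training
# images it most resembles - pattern matching, nothing more.
clf = KNeighborsClassifier(n_neighbors=3)
clf.fit(X_train, y_train)

print(f"accuracy: {clf.score(X_test, y_test):.2%}")  # typically ~98-99%
```

No reasoning anywhere, yet it “recognizes” digits about as reliably as a human - which is exactly the point about pattern matching being a real, useful element of intelligence on its own.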
But they are intelligent - just not in the way people tend to think.
There’s nothing inherently wrong with avoiding certain terminology, but I’d caution against deliberately using incorrect terms, because that only opens the door to more confusion. It might help when explaining something one-on-one in private, but in an online discussion with a broad audience, you should be precise with your choice of words. Otherwise, you end up with what looks like disagreement, when in reality it’s just people talking past each other - using the same terms but with completely different interpretations.
Doesn’t that just degenerate into a debate over semantics, though? I.e., what is “intelligence”?
Not having a go, this is a good thread, and useful I think 👍
Yes, and that has always been the debate.
But the short answer is that we don’t really have a good grasp of what intelligence is, so it is all semantics in the end.
They ain’t intelligent
Great point, thank you :)