When I was in college, expert systems were considered AI. Expert systems can be 100% programmed by a human. As long as they’re making decisions that appear intelligent, they’re AI.
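To make that concrete, here's a toy sketch (in Python, with a made-up thermostat domain and made-up thresholds, purely for illustration) of what "100% programmed by a human" looks like. Every decision below is a hand-written rule, yet from the outside the thing appears to "decide":

```python
# A minimal hand-coded "expert system." Every rule here was written by a
# human; nothing is learned. The domain and thresholds are hypothetical.

def advise(temp_c: float, humidity: float, occupied: bool) -> str:
    """Return an action using nothing but human-authored if/then rules."""
    if not occupied:
        return "eco mode"            # nobody home: save energy
    if temp_c > 26 and humidity > 0.6:
        return "cool and dehumidify"
    if temp_c > 26:
        return "cool"
    if temp_c < 18:
        return "heat"
    return "hold"                    # comfortable range: do nothing

print(advise(temp_c=28, humidity=0.7, occupied=True))  # cool and dehumidify
```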
One example of an expert system “AI” is what’s called “game AI.” If a bot in a game appears to act like a real human, that’s considered AI. Or at least it was when I went to college.
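Most classic game AI is the same idea, just themed differently. A hypothetical guard bot, say, might run on a tiny hand-written state machine like this (the states, distances, and health threshold are all invented for illustration):

```python
# A toy sketch of classic "game AI": a guard bot driven by a small
# hand-written state machine. Simple scripted rules that *look* like
# intentional behavior from the player's seat.

def guard_ai(state: str, dist: float, health: float) -> str:
    if health < 0.2:
        return "flee"                  # wounded bots run, which reads as "smart"
    if state == "attack" and dist >= 2:
        return "chase"                 # target moved away, pursue again
    if state == "chase" and dist > 25:
        return "patrol"                # lost sight, return to route
    if state in ("patrol", "chase") and dist < 2:
        return "attack"
    if state == "patrol" and dist < 10:
        return "chase"                 # spotted the player
    return state                       # otherwise keep doing what we're doing

state = "patrol"
for dist in (30, 9, 5, 1.5, 40):
    state = guard_ai(state, dist, health=0.8)
    print(dist, "->", state)           # patrol, chase, chase, attack, chase
```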
AI is kind of like Scotsmen. It’s hard to find a true one, and every time you think you have, the goalposts get moved.
Now, AI is hard, both to make and to define. As for what is sometimes called AGI (artificial general intelligence), I don’t think we’ve come close at this point.
I see the no true Scotsman fallacy as something that doesn’t affect technical experts, for the most part. Like, an anthropologist would probably go with the simplest definition, birthplace, or perhaps go so far as to use heritage. But they wouldn’t get stuck on the convoluted reasoning the fallacy requires.
Similarly, for AI experts, AI is not hard to find. We’ve had AI of one sort or another since the 1950s, I think. You might have it in some of your home appliances.
When it comes to human-level intelligence from an inanimate object, the history is much longer: thousands of years. To me, it’s more a question for philosophers than for engineers. The same questions we’re asking about AI, philosophers have been asking about humans. And just about every time people say modern AI lacks some trait compared to humans, you can find a history of philosophers asking whether humans really exhibit that trait in the first place.
I guess neuroscience is also looking into this question. But the point is, once they can explain exactly why human minds are special, we engineers won’t get stuck on the Scotsman fallacy, because we’ll be too busy copying that behavior into a computer. And then the non-experts will get to have fun inventing another reason that human intelligence is special.
Because that’s the real truth behind the no true Scotsman fallacy, isn’t it? The person has already decided on the answer, and will never admit defeat.
And yet, look in the comments and you will see people literally saying the examples you gave from the 50s aren’t true AI. Granted, those aren’t technical experts.
Even I wouldn’t call myself a technical expert in AI. I studied it in both my bachelor’s and master’s degrees and worked professionally with types of AI, such as decision trees, for years. And I did a little professional work helping data scientists develop NN models, but we’re talking weeks, maybe months.
It’s really neural networks where I haven’t had enough experience. I never developed NN models myself, other than small ones in my personal time, so I’m no expert. But I’ve studied them enough, and been around them enough, that I can talk intelligently about the topic with experts… or at least I could the last time I worked with them, which was around 5 years ago.
And that’s why it’s so depressing to look at these comments you’re talking about. People who vastly oversell their expertise and spread misinformation because it fits their agenda. I also think we need to protect people from generative AI, but I’m not willing to ignore facts or lie to do so.
They are AI, but to be fair, it’s an extraordinarily broad field. Even the venerable A* pathfinding algorithm technically counts as AI.
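For anyone who hasn’t seen it: A* fits in a few dozen lines. This is a toy grid version of my own (the grid, the Manhattan heuristic, and 4-directional movement are just illustrative choices), and it’s exactly the kind of code that ships in games under the “AI” label:

```python
# A* shortest path on a 2D grid of 0 (open) / 1 (wall), 4-connected.
import heapq

def astar(grid, start, goal):
    """Return the path as a list of (row, col), or None if unreachable."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_heap = [(h(start), 0, start)]        # entries are (f = g + h, g, node)
    came_from, best_g = {}, {start: 0}
    while open_heap:
        f, g, node = heapq.heappop(open_heap)
        if node == goal:                       # reconstruct the path backwards
            path = [node]
            while node in came_from:
                node = came_from[node]
                path.append(node)
            return path[::-1]
        if g > best_g.get(node, float("inf")):
            continue                           # stale heap entry, skip it
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    came_from[(nr, nc)] = node
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # routes around the wall via the right column
```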