• 𝕛𝕨𝕞-𝕕𝕖𝕧@lemmy.dbzer0.com · 12 hours ago

    chess engines are, and have always been called, AI. computer vision is, and has always been, AI.

    the only reason you might think they’re not is that during the most recent AI winter, when those technologies experienced a boom, researchers avoided terminology like “AI” when requesting funding and advertising their work, precisely because of people like you, who had recently decided that they’re the arbiters of what is and isn’t intelligence.

    turing once said that if we were to gather the meaning of intelligence from a gallup poll, the result would be patently absurd, and i agree.

    but sure, computer vision and chess engines, the two most prominent use cases for AI and ML technologies, aren’t actual artificial intelligence, because you said so. why? idk. i guess because we can do those things well now, and the moment we as a society understand something well, people start getting offended if you call it intelligence rather than computation. can’t break the “i’m a special and unique snowflake” spell for people, god forbid…

    • hedgehog@ttrpg.network · 8 hours ago

      There’s a whole history of people, both inside and outside the field, shifting the definition of AI to exclude any problem that had been the focus of AI research as soon as it’s solved.

      Bertram Raphael said “AI is a collective name for problems which we do not yet know how to solve properly by computer.”

      Pamela McCorduck wrote “it’s part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, but that’s not thinking” (Page 204 in Machines Who Think).

      In Gödel, Escher, Bach: An Eternal Golden Braid, Douglas Hofstadter named “AI is whatever hasn’t been done yet” Tesler’s Theorem (crediting Larry Tesler).

      https://praxtime.com/2016/06/09/agi-means-talking-computers/ reiterates the “AI is anything we don’t yet understand” point, but also touches on one reason why LLMs are still considered AI: in fiction, talking computers were AI.

      The author also quotes Jeff Hawkins’ book On Intelligence:

      Now we can see the entire picture. Nature first created animals such as reptiles with sophisticated senses and sophisticated but relatively rigid behaviors. It then discovered that by adding a memory system and feeding the sensory stream into it, the animal could remember past experiences. When the animal found itself in the same or a similar situation, the memory would be recalled, leading to a prediction of what was likely to happen next. Thus, intelligence and understanding started as a memory system that fed predictions into the sensory stream. These predictions are the essence of understanding. To know something means that you can make predictions about it. …

      The human cortex is particularly large and therefore has a massive memory capacity. It is constantly predicting what you will see, hear, and feel, mostly in ways you are unconscious of. These predictions are our thoughts, and, when combined with sensory input, they are our perceptions. I call this view of the brain the memory-prediction framework of intelligence.

      If Searle’s Chinese Room contained a similar memory system that could make predictions about what Chinese characters would appear next and what would happen next in the story, we could say with confidence that the room understood Chinese and understood the story. We can now see where Alan Turing went wrong. Prediction, not behavior, is the proof of intelligence.
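
      Just to make that concrete, here’s a toy sketch of the memory-prediction idea in Python (entirely my own illustration, not anything from Hawkins’ book): a system that stores which observation followed which, and “knows” its input exactly to the extent that it can predict what comes next.

          from collections import defaultdict, Counter

          class MemoryPredictor:
              # Toy memory-prediction loop (hypothetical illustration):
              # store observed transitions, then "understand" a stream
              # by predicting its next element from remembered experience.
              def __init__(self):
                  self.memory = defaultdict(Counter)

              def observe(self, stream):
                  # feed the sensory stream into the memory system
                  for prev, nxt in zip(stream, stream[1:]):
                      self.memory[prev][nxt] += 1

              def predict(self, symbol):
                  # recall past experience to predict what likely happens next
                  if symbol not in self.memory:
                      return None  # no memory means no prediction
                  return self.memory[symbol].most_common(1)[0][0]

          predictor = MemoryPredictor()
          predictor.observe("abcabcabd")
          print(predictor.predict("a"))  # 'b', recalled from experience
          print(predictor.predict("b"))  # 'c' (seen twice) beats 'd' (seen once)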

      Another reason LLMs are still considered AI, in my opinion, is that we still don’t understand how they work. By that I of course mean that LLMs have emergent capabilities we don’t understand, not that we don’t understand how the technology itself works.