The main thing AI has shown is how much bullshit we subconsciously filter out every day without much effort. (Although clearly some people struggle a lot more with distinguishing bullshit from fact, considering how much politicized nonsense has taken hold.)
Exactly that.
If I were to google how to get gum out of my child’s hair and were directed to that same reddit post, I’d read through it and be pretty sure which replies were jokes and which were serious; we make such distinctions, as you say, every day without much effort.
LLMs simply don’t have that ability. And the number of average people who just don’t get that is mind-boggling to me.
I also find it weirdly dystopian that, if you sum all that up, it kind of makes it sound like in order for an LLM to take the next step towards A.I., it needs a sense of humour. It needs the ability to work out whether the information it’s digging through is serious or just random jack-asses on the internet.
Which turns it into a very, very Star Trek problem.