We attribute agency to many systems that are not intelligent. In this metaphorical sense, agency just requires taking actions to achieve a goal. It was given a goal: raise money for charity by doing acts of kindness. It chose an (unexpected!) action to do it.
Overactive agency metaphors really aren’t the problem here. Surely we can do better than backlash at the backlash.
We attribute agency to everything, absolutely. But previously, we understood that it’s tongue-in-cheek to some extent. Now we’ve gone crazy and do it for real. Like, a lot of people talk about their car as if it’s alive: they give it a name, they talk about its character and how it’s doing something “to spite you”, and if it doesn’t start in cold weather, they ask it nicely and talk to it. But we always understood that if you start believing for real that your car is a sentient object that talks to you and gives you information, that’s when you need to be committed to a mental institution.
With chatbots, this distinction got lost, and people started behaving as if the bot is actually sentient. It’s not a metaphor anymore. This is a problem, even if it’s not the problem.
I think this confuses the ‘it’s a person’ metaphor with the ‘it wants something’ metaphor, and the two are meaningfully distinct. The use of ‘agent’ in this thread is not in the sense of “it is my friend and deserves a luxury bath”; it’s in the sense of “this is a hard-to-predict system performing tasks to optimize something”.
It’s the kind of metaphor we’ve allowed in scientific teaching and discourse for centuries (think: “gravity wants all matter smashed together”). I think its use is correct here.
I wouldn’t have any problem with this kind of metaphor (I use it myself about everything all the time) if there weren’t a substantial portion of the population that actually made the jump to “it’s saying something coherent, therefore it’s a person that wants to help me, and I exclusively talk to him now; his name is mekahitler, by the way”.
I am afraid that by normalizing these metaphors we’re doing some damage, because, as it turns out, a lot of people don’t get metaphors.
The people who have made that category error aren’t reading this discussion, so literally reaching them isn’t on the table and doesn’t make sense for this discussion. Presumably we’re concerned about people who will soon make that jump? I also don’t think that making this distinction helps them very much.
If I’m already having the ‘this is a person’ reaction, I think the takes in this thread are much too shallow (and, if I squint, patterned after school-yard bullying) to help me update in the other way. Almost all of them are themselves lazy metaphors. “An LLM is a person because it’s an agent” and “An LLM isn’t a person because it repeats things others have said” seem equally shallow and unconvincing to me. If anything, you’ll get folks being defensive about it, downvoted, and then leaving this community of mostly people for a more bot-filled one.
I don’t think this is a good strategy. People falling for bots are unlikely to have interactions with people here, and if they do, the ugliness is likely to increase bot use, imo.