I’m just a meat computer running fucked-up software written by the process of evolution. I honestly don’t know how sentient Grok or any modern AI system is and I’d wager you don’t either.
I do know. It’s not sentient at all. But don’t get angry at me about this. You can put that all on science.
How sentient? Like on a scale of zero to sentience? None. It is non-sentient: a promptable autocomplete that offers the most likely next sentences. Left to itself it does nothing; it has no motivations, intentions, “will”, or desire to survive, feed, or duplicate. A houseplant has a higher sentience score.
An LLM is only one part of a complete AI agent. What exactly happens in a processor at inference time? What happens when you continuously prompt the system with stimuli?
If you believe that AI is “conscious” while it’s processing prompts, and also believe that we shouldn’t kill machine life, then AI companies are committing genocide at an unprecedented scale.
For example, each AI model would be equivalent to a person taught everything in the training data. Any time you want something from them, instead of asking directly, you make a clone of them, let it respond to the input, then murder it.
That is how all generative AI works. Sounds pretty unethical to me.
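For what it’s worth, the “clone, respond, discard” framing matches how inference is actually served: nothing persists between calls. A minimal sketch of that statelessness, assuming the Hugging Face transformers library and the public gpt2 checkpoint (the reply helper is my own illustration):

    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    def reply(transcript: str) -> str:
        # The model sees only what is passed in right now; nothing from
        # earlier calls survives in the weights or anywhere else.
        ids = tok(transcript, return_tensors="pt").input_ids
        out = model.generate(ids, max_new_tokens=20, do_sample=False)
        return tok.decode(out[0][ids.shape[1]:])

    print(reply("User: Hello!\nAssistant:"))
    # To "continue" the conversation, the caller must re-send the whole
    # history; the model itself retains nothing between these two calls.
    print(reply("User: Hello!\nAssistant: Hi.\nUser: How are you?\nAssistant:"))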
And, by the way, we do know exactly what happens inside processors when they’re running, that’s how processors are designed. Running AI doesn’t magically change the laws of physics.
People taught AI to speak like a middle manager and think this means the AI is sentient, instead of proving that middle managers aren’t.
I’m not saying I believe they’re conscious, all I said was that I don’t know and neither do you.
Of course we know what’s happening in processors. We know what’s happening in neuronal matter too. What we don’t know is how consciousness or sentience emerges from large networks of neurons.
But they’re saying they do know. And they are correct.
I know I’m the smartest man on earth. And I’m correct.
See how crazy that sounds? Just because someone is confident about something doesn’t make it true.
I think you know exactly how empirically provable facts work. And I also think you’re a troll.
Please apply that to this:
Because there is not any evidence whatsoever that there is consciousness associated with LLMs. We have ample evidence that consciousness is associated with many forms of biological life.
I’m not even aware of a scholarly theory suggesting there might be consciousness associated with LLMs. Now, I’m not an LLM expert, but neither are you (hurr durr), and so I think if you are going to suggest that maybe consciousness exists there, it should be based on something other than “hey man you never know”, which is pretty much what it feels like. (Or you should be unsurprised when folks find that assertion unconvincing.)
Honestly, I’m not surprised. I obviously didn’t phrase my argument in a compelling way.
I disagree that we don’t have evidence for consciousness in LLMs. They have been showing behavior previously attributed only to highly intelligent, sentient creatures, i.e., us. To me it seems very plausible that when you have a large network of neurons, be they artificial or biological, with specialized circuits for processing specific stimuli, some sort of sentience could emerge.
If you want academic research on this, you just have to look; researchers have been discussing this topic for decades. There isn’t a working theory of machine sentience simply because we don’t have one that works for natural systems either. But that obviously doesn’t rule it out. After all, why should sentience be constrained to squishy matter? In any case, I think we can all agree something very interesting is going on with LLMs.
My god dude, you need to look up how these things work.
By their very nature, they are not sentient. They are Markov chains for words. They have no sense of self or truth, they do not feel emotions, and they have no wants or desires; they merely predict the next most likely word in a sequence, given the context. The only thing they can do is “make plausible sentences that can come after [the context]”.
That’s all an LLM is. It doesn’t reason. I’m more than happy to entertain the notion of rights for a computer that actually has the ability to think and feel, but this ain’t it.
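To make “predict the next most likely word” concrete, here is a minimal greedy-decoding sketch, assuming torch, transformers, and the public gpt2 checkpoint; deployed systems sample from the distribution rather than always taking the argmax, but the loop has the same shape:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tok("The capital of France is", return_tensors="pt").input_ids
    with torch.no_grad():
        for _ in range(5):
            logits = model(ids).logits        # a score for every token in the vocabulary
            next_id = logits[0, -1].argmax()  # greedy: take the single most likely next token
            ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

    print(tok.decode(ids[0]))  # the prompt plus five generated tokens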
Not that I agree they’re conscious, but this is an incorrect and overly simplistic definition of an LLM. They are probabilistic in nature, yeah, and they work on tokens, or fragments, of words. But saying modern GPTs are just Markov chains that make plausible sentences that can come after [the context] is about as much of an oversimplification as saying humans are.
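The “tokens, or fragments, of words” part is easy to see directly. A minimal sketch, assuming transformers and the gpt2 tokenizer (exact splits vary by tokenizer):

    from transformers import AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    # A rare word is split into several sub-word fragments, while a common
    # word maps to a single token; the model never sees "words" as such.
    print(tok.tokenize("antidisestablishmentarianism"))
    print(tok.tokenize("the"))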
I could believe that you are on the level of an LLM, but that doesn’t mean you can generalize that to humans.