I don’t actually have a problem with this. If people are stupid enough to admit to a crime, or to commit one, on a platform they don’t control, that’s on them. I see it as the next step in the evolution of people who commit crimes on YouTube for views and then get shocked-Pikachu’d when the police arrest them for it. They have no one to blame but themselves: they brought a third-party AI company into it, and that company never consented to being an accomplice. And if any company out there has the resources to have AI scan conversations for flags to send to the police with good accuracy, OpenAI would definitely be at the front of the pack.
Ahh, the ol’ ‘nothing to hide’ defense.
Ever consider things that are labeled as ‘crimes’ can and will be anything the people in power want?
Just because, say, calling Republicans ‘shithead pedophiles’ on Lemmy isn’t illegal now doesn’t mean Cheeto Mussolini won’t make it illegal tomorrow.
You’re fine with invasion of privacy as long as it only affects criminals.
I think you’ll find that once privacy is broken you’d be surprised how many people end up under that umbrella.
Using the fucking GPT is the privacy invasion.
So yes, once the company has the logs and detects any criminal or dangerous activity, it should report it.
Stop using chatbots in the first place.
Can we have it affect the oligarchs and authoritarian fascists, too?
Bro wants to comply ahead of time. lol You’re a weird little fool
Well, you should have a problem with it, just not for the reasons you think. Any invasion of privacy is an issue when the people in control get to decide what counts as a reportable offense without explicitly telling you. I get it: you definitely shouldn’t be admitting to anything illegal or asking a chatbot for illegal advice. You shouldn’t be doing anything illegal in the first place. That’s basically the same as googling how to make a bomb, and if you’re that dumb, you’ll get what’s coming to you. The issue arises when you look at the bigger picture. If they have the ability to report anything they want to the police, what’s stopping them from releasing anything they want to anyone they want at any time? And when it comes to those receiving the reported data, what proof do you have that these entities have your interests or safety, or anyone else’s, in mind? What if they change the rules on what they report, don’t tell you, and then retroactively flag a bunch of your conversations with said LLM?
It’s the same kind of situation we face with the AI cameras that track us and our vehicles literally everywhere we go. There have already been multiple cases of people in law enforcement using those tools to stalk ex-girlfriends. All of this puts a lot of trust in people none of us even know, expecting them to have the best of intentions. What would stop them from reporting that you asked ChatGPT about the current situation in Gaza?
Fair points.
One thing I think we all miss: what happens when an overzealous government makes something a crime retroactively? Say, um, disparaging two Cheetos in an ill-fitting suit masquerading as a world leader.
That’s part of why we should care about privacy and why we should care when data we expect to be private isn’t.
Most tech users are victims of a system they don’t understand. We might complain that they don’t want to understand, but the truth is the providers don’t want them to understand - it’s easier to sell them whatever crap they’re shilling.
Being criminally stupid when planning crimes is pretty stupid.
I kinda agree. While I do want these LLM companies to be more private in terms of data retention, I think it’s naive to say that a company selling artificial intelligence to hundreds of millions of users should be totally indifferent in the face of LLM-induced psychosis and suicide. Especially when the technology only gets more hazardous as it becomes more capable.