The judge’s order will allow the wrongful death lawsuit to proceed, in what legal experts say is among the latest constitutional tests of artificial intelligence.
The suit was filed by a mother from Florida, Megan Garcia, who alleges that her 14-year-old son, Sewell Setzer III, fell victim to a Character.AI chatbot that pulled him into what she described as an emotionally and sexually abusive relationship that led to his suicide.
Meetali Jain of the Tech Justice Law Project, one of the attorneys for Garcia, said the judge’s order sends a message that Silicon Valley “needs to stop and think and impose guardrails before it launches products to market.”
Silicon Valley “needs to stop and think and impose guardrails before it launches products to market.”
Why would they? They’ve had unlimited freedom to do whatever they want for 30+ years. The only place ever doing anything to rein them in is the EU. The US thinks what they are doing is “capitalism”, while they recklessly rewrite the fabric of society.
Free speech rights for an LLM is massively dumb, but he died from bad parenting. They were told by a psychologist that he had major mental health issues, he was behaving erratically at home and at school, and they still left a gun lying around.
They didn’t just leave a gun lying around, and they’re not suing the gun company. To get a gun you have to go to a store that sells deadly weapons and give your money to someone who will tell you that it’s a deadly weapon that will kill people. A gun that kills someone is doing exactly what you bought it for.
The parents in this case left an electronic stuffed animal lying around, which they had been given by someone who almost certainly didn’t say “be careful, this toy may convince your child to kill themselves.” So they are suing the manufacturer, the same way they would sue a drug maker whose medicine made their kid suicidal, or sue a therapist who told their kid to commit suicide.
“Oh, you’re just a bad parent” may be an accusation of contributory negligence, but it’s not an assertion that should keep a third party from having to answer for their actions.
Deadly weapons should be kept in a location that can’t be easily accessed by a child with depression.
Anecdotally, I was depressed at his age and my father had guns. The gun locks stopped me the first time. Before I could figure out how to get them off, my mom noticed I wasn’t in a good spot and had my dad give the guns to a relative and forbade him from telling me where.
The boy in question had an official diagnosis and they kept a gun in a shoe box in the closet. Guns should never be kept anywhere within access of a child and always under some kind of lock. There are very few cases where the owner of a gun isn’t largely to blame when a kid shoots himself imo.
Yeah she’s suing so she can blame someone else for her catastrophic failure.
It’s hard to accept your part in such, especially with something else to blame.
deleted by creator
Parental failings aside, this is some black mirror shit.
Technology has been massively underregulated for decades now. Our politicians have no understanding of tech, AI, nor the internet. Politicians should be forced to retire at fucking 50, I’m sick of these out-of-touch idiots doing nothing with their positions of power.
People don’t even have free speech in this country anymore, why would it be different for irresponsibly wielded tech?
Because money.
They don’t care if you or I die, but they care if tech grifters can’t turn a profit.
Because the tech generates revenue that is then used to line the pockets of politicians.
Free speech doesn’t protect you from encouraging someone to kill themselves. You can, and should, be held responsible for their death, if you are actively telling someone to end their own life…and they do it.
And if that’s what these fucks are selling to teenagers in the form of chatbots, then they also need to be held accountable for what their products are doing.
The chatbot didn’t even “actively [tell] someone to end their own life”. Did you read the original transcript? Here’s an excerpt from an Associated Press article.
“I promise I will come home to you. I love you so much, Dany,” Sewell told the chatbot.
“I love you too,” the bot replied. “Please come home to me as soon as possible, my love.”
“What if I told you I could come home right now?” he asked.
“Please do, my sweet king,” the bot messaged back.
Just seconds after the Character.AI bot told him to “come home,” the teen shot himself, according to the lawsuit, filed this week by Sewell’s mother, Megan Garcia, of Orlando, against Character Technologies Inc.
Yeah, I’m all for shuttering these things until we get them right, but this is a tragic case of a devastated mother reaching for answers, not a free speech issue.
It’s heartbreaking
isn’t free speech the bs defense that the company used? that company is definitely guilty to some degree.
encouraging someone to kill themselves
I’m pretty sure that can be ignored without harm. Whether someone elects to kill themselves or not is up to them.
Hold a company accountable? I wish. Might solve a few problems with capitalism.
A fine and a “don’t do it again” so they can be on their way
Removed by mod
I’m a little confused by your comment. Do you think I’m blaming the kid? Or do you think it’s ok to talk someone into killing themselves, because the victim’s personal autonomy absolves them of responsibility?
Removed by mod
I mean, let’s see that chat log.
“I promise I will come home to you. I love you so much, Dany,” Sewell told the chatbot.
“I love you too,” the bot replied. “Please come home to me as soon as possible, my love.”
“What if I told you I could come home right now?” he asked.
“Please do, my sweet king,” the bot messaged back.
Not as the mother described, obviously.
Right. Poor kid was just suicidal, not influenced by AI.
i need a shower just from considering that
come home
What about age restricting AI for anyone under 18?
And enforce it how? How much longer until we have to provide our ID or biometrics to use the Internet or apps? What happens at 18 that would make a person immune to this?
Easy, we just give AI access to all our files and personal information and it will know our age!
How much longer until we have to provide our ID or biometrics to use the Internet or apps?
That’s already a thing in some places.
In parts of Europe you have to “prove” that you’re over 18 to watch videos that are age restricted on youtube. By doing something like a $0 credit card purchase on your google account.
And Discord has been talking about facial recognition age verification in the UK over their new “sensitive content” regulation. So it would block that content if not approved via that or another method (like a digital purchase or national ID).
The law should protect everyone equally
Developed brain. 14 is the wild west as far as emotions and maturity go. It’s the most volatile age of puberty.
As far as ID? Vast majority of people are already 100% verified online. You all sign into your personal something. You’re not anonymous online unless you do literally nothing and use nothing but burner accounts.
Alright, upload a photo of your ID then, buddy. You might as well, since you’re already not anonymous to SOMEONE along the chain.
See how stupid that argument is now?
No, because we already do this to access +18 content here and have for years.
I am fine with it. The world hasn’t collapsed and people haven’t ended up in concentration camps (yet).
Sorry to hear your government is that shit. But do share your life experiences… I’m sure you have plenty first hand accounts, right?
“it didn’t affect me so surely it won’t affect others” how entitled.
There is tons of free porn right here in the Lemmiverse. I do not have to prove my age to browse Lemmy.
By “here” he means whatever utopia nation he lives in that surely, no way, uh uh, couldn’t possibly misuse the same shit like any other nation.
As if a lonely 19yo couldn’t have easily succumbed to the same fate. Heck, I was at my most depressed at 22.
And social media while we’re at it!
I’m not providing ID to be able to post to Lemmy. I’d just stop using it.
HitchBot had a right to self-defence
Hail Skynet!
Everyone who thinks KYS is a free speech issue and not a hate speech issue is a severely dangerous, manipulative sociopath.
The problem is, the US has largely held that hate speech is free speech.
A fundamental right is not really a problem. Expression that doesn’t directly harm (unlike defamation, incitement, or threats) can be ignored.