• TheLeadenSea@sh.itjust.works
    17 days ago

    The suit says the teen’s interaction with the OpenAI product and its outcome was “not a glitch or unforeseen edge case—it was the predictable result of deliberate design choices”

    This is really sad and absolutely should not have been able to happen, but it’s important to remember Hanlon’s Razor in cases like these. What motivation could Sam Altman have to actually encourage the suicide of young people?

    Never attribute to malice that which can be adequately explained by incompetence - or, in this case, greed. Still an evil, to be sure, but a lesser evil than actually wanting the user to die.

    It’s likely this was caused by a combination of training data, instructions that weren’t properly thought through, and rushing driven by greed. In hindsight it’s easy to get angry at the small changes that could have prevented this, but remember that at the time we might have wanted those same changes, yet they weren’t pushed for with such vehemence.

    It’s also important to note that this is unlikely to have happened with an open-weights model that can be tweaked and evaluated by the full international community, rather than by one monolithic, ‘Open’ AI company.

    • zygo_histo_morpheus@programming.dev
      17 days ago

      I think the claim is more that OpenAI made certain design choices not with the goal of driving people to suicide, but with suicide as a possible and, in their eyes, acceptable cost?

  • corroded@lemmy.world
    17 days ago

    Giving a child unrestricted access to the internet is a terrible idea. I’m not trying to downplay the AI issues they brought up, but the parents are largely to blame, too. Parental controls, monitoring software, etc. all exist for a reason.

    • TheLeadenSea@sh.itjust.works
      17 days ago

      I’m sure theoretically good parents could exist, who actually protect rather than indoctrinate their children, but in my experience internet controls are more often used by religious or bigoted parents to block their children from atheist or LGBT+ content and from online communities that could break them out of their bubble. And then the children find ways to see some of that stuff anyway, often far worse than what they’d have encountered if the parents had just fostered trust, so their kids never felt the need to circumvent anything.