Finding is one of the most direct statements from the tech company on how AI can exacerbate mental health issues

More than a million ChatGPT users each week send messages that include “explicit indicators of potential suicidal planning or intent”, according to a blogpost published by OpenAI on Monday. The finding, part of an update on how the chatbot handles sensitive conversations, is one of the most direct statements from the artificial intelligence giant on the scale at which AI can exacerbate mental health issues.

In addition to its estimates of suicidal ideation and related interactions, OpenAI said that about 0.07% of users active in a given week – about 560,000 of its touted 800m weekly users – show “possible signs of mental health emergencies related to psychosis or mania”. The post cautioned that these conversations were difficult to detect or measure, and that this was an initial analysis.

  • slaneesh_is_right@lemmy.org · 1 day ago

    I get the creeps when I see how many women on Hinge “get advice” from ChatGPT: “ChatGPT said I’m smart”, “ChatGPT is my babe”, “if ChatGPT doesn’t like you, neither will I”. It’s straight-up cyber psychosis. Why not ask a Furby that always agrees with you?

    • NoWay@lemmy.world · 1 day ago

      Those first-generation Furbys’ logic was outstanding. It was so good they dialed it back when they remade them. They were creepy as hell and a legit security concern.