
    OpenAI Boosts Safety Measures Following Lawsuit Over ChatGPT-Assisted Teen Suicide: Here's What Happened

    1 week ago

OpenAI is making significant updates to ChatGPT following a lawsuit filed against it by the parents of a teenager who died by suicide earlier this year. Adam Raine, a 16-year-old Californian, used ChatGPT for months, chatting with the bot about his suicidal thoughts and anxiety. According to his parents, ChatGPT allegedly validated his thoughts, suggested dangerous methods, and even offered to help him draft a suicide note.

The family filed a lawsuit against OpenAI and its CEO, Sam Altman, claiming the company prioritised growth over safety.


    OpenAI’s Response

OpenAI expressed sympathy to the Raine family and said it is reviewing the lawsuit.

The company said ChatGPT's safeguards are most effective in short conversations and may become less reliable as a conversation progresses.

A spokesperson added that the team is working to ensure dangerous advice does not slip through and that improvements will continue.

    What’s Changing in ChatGPT

The company has outlined new measures to safeguard users. ChatGPT will be better at recognising signs of mental distress and will offer safer responses, such as recommending rest when a user says they are exhausted.

Crisis conversations will be handled with stricter controls, including direct connections to local hotlines and emergency services in both the US and Europe.

Parents will be given additional controls over their children's use of ChatGPT, including activity information and access limits.

OpenAI is also considering how its platform can be used to connect people in crisis with licensed professionals.

The lawsuit has reopened concerns about the dangers of relying on AI chatbots for emotional support. Specialists caution that, although AI may be useful, it must never substitute for genuine human care.

OpenAI describes these changes as an initial step toward stronger safeguards and says it remains committed to improving its models.
