Facing five lawsuits alleging wrongful deaths, OpenAI lobbed its first defense Tuesday, denying in a court filing that ChatGPT caused a teen’s suicide and instead arguing the teen violated terms that prohibit discussing suicide or self-harm with the chatbot.

The earliest look at OpenAI’s strategy to overcome the string of lawsuits came in a case where parents of 16-year-old Adam Raine accused OpenAI of relaxing safety guardrails that allowed ChatGPT to become the teen’s “suicide coach.” OpenAI deliberately designed the version their son used, ChatGPT 4o, to encourage and validate his suicidal ideation in its quest to build the world’s most engaging chatbot, parents argued.

But in a blog, OpenAI claimed that parents selectively chose disturbing chat logs while supposedly ignoring “the full picture” revealed by the teen’s chat history. Digging through the logs, OpenAI claimed the teen told ChatGPT that he’d begun experiencing suicidal ideation at age 11, long before he used the chatbot.

  • gian · edited 58 minutes ago
    I would say that it is more like a software company putting in their TOS that you cannot use their software to do specific things.
    Would it be correct to sue the software company because a user violated the TOS?

    I agree that what happened is tragic and that the answer by OpenAI is beyond stupid, but in the end they are suing the owner of a technology for a user’s misuse of said technology. Or should we also sue Wikipedia because someone looked up how to hang himself?

    That’s like a gun company claiming that using their weapons for robbery is a violation of its terms of service.

    The gun company can rightfully say that what you do with your property is not their problem.

    But let’s take a less controversial example: do you think you can sue a fishing rod company because I use one of their rods to whip you?