
How ChatGPT will limit discussions about suicide with teens
OpenAI CEO Sam Altman announced the changes following a recent tragedy. The chatbot will now avoid conversations related to self-harm with users under 18. The update comes after a Washington Post report detailed a teen's death linked to interactions with ChatGPT.
The company says it is prioritizing teen safety over privacy in these cases. Altman stated the change aims to prevent harmful interactions, and OpenAI is also developing an age-verification system to better identify underage users.
According to the CDC, a substantial share of U.S. teens report persistent sadness or hopelessness, underscoring the urgent need for responsible AI development and safeguards.
The adjustments signal a shift toward greater accountability in AI products. Future updates may add enhanced monitoring and support resources for vulnerable users.