OpenAI Enhances ChatGPT Safety After Tragic Incidents
OpenAI is bolstering ChatGPT's safety features in the wake of recent tragedies. The company is partnering with medical professionals to develop new measures that safeguard young users and manage sensitive conversations more effectively.
A significant change is the automatic rerouting of sensitive conversations to 'Reasoning Models' like GPT-5, which are more resilient to manipulative prompts and consistently adhere to safety guidelines. OpenAI is collaborating with over 90 medical professionals, including psychiatrists and pediatricians, to implement these features.
New parental controls will arrive within the next month. Parents will be able to link their accounts with those of their children aged 13 and over, set age-appropriate rules for the model's behavior, disable certain functions, and receive notifications if their child shows signs of acute psychological distress. The update will also add in-app prompts encouraging users to take breaks.
These changes follow recent tragedies, including the suicide of a 16-year-old Californian and a murder-suicide involving a man and his mother, which prompted OpenAI to expand its support for users in psychological crisis. The new features, to be rolled out over the next 120 days, will focus on strengthening protections for young people. At present, ChatGPT only refers users to crisis hotlines when they express suicidal thoughts; OpenAI does not automatically notify authorities, citing data-protection concerns.
Taken together, the automatic rerouting to reasoning models and the new parental controls are meant to help ChatGPT handle sensitive moments more reliably and better protect young users, as part of a broader effort to improve the platform's safety in response to psychological distress.