Meta Platforms is launching a new safety feature on Instagram that will alert parents when their teenagers repeatedly search for terms related to self-harm or suicide, a move the company frames as a significant step toward protecting adolescents online.
Part of Instagram's existing parental supervision tools, the feature will send immediate alerts to parents via email, text message, or WhatsApp if the system detects repeated searches for sensitive keywords within a short period.
Meta is also developing a similar system to monitor teenagers' conversations with the company's AI tools, alerting parents if the discussions turn to dangerous topics.
The feature will also give parents a guide, developed with mental health experts, on how to start calm and supportive conversations with their teenagers without making them feel that their privacy has been violated.
The feature launches next week in the United States, the United Kingdom, Canada, and Australia, and Meta says it will roll out gradually to the rest of the world, including the Middle East, later this year.
According to observers, this move comes amid increasing legal pressure on Meta in California and New Mexico courts, where the company is accused of failing to protect minors from algorithms that may promote content harmful to mental health.
Meta CEO Mark Zuckerberg wrote in an earlier post: "Our goal is to empower parents to intervene in a timely manner and provide support before search attempts turn into real-world actions."