Instagram will soon alert some parents if their teen repeatedly searches for terms related to suicide or self-harm within a short period of time. Meta, Instagram's owner, said on Thursday that it is launching the new notifications for parents in the US, UK, Australia and Canada, with other regions to follow later this year.
Thursday’s update builds on the teen accounts introduced in 2024. The new alerts apply to parents who use the platform’s optional parental supervision settings, which require both the teenager and the parent to opt in. Those controls let parents see the accounts their teen follows, set a limit on how long their teen can use the app, and more.
If a teen repeatedly makes suicide-related searches, their parents will receive alerts via in-app notification and by email, text or WhatsApp, depending on the contact information available. In addition to notifying parents of their teen’s searches, the alerts will allow them to “see expert resources designed to help them approach potentially sensitive conversations with their teen,” according to Meta.
[Image: an example of the alerts]
Meta said an alert will be triggered if a teenager searches for phrases that promote suicide or self-harm, as well as phrases suggesting the teen wants to harm themselves. The company said its existing policy is to block searches for self-harm and suicide content and to direct people to resources and helplines.
Instagram’s move is one of many digital safety measures aimed at protecting young people online. It comes as governments put up guardrails of their own: Australia recently implemented the world’s first ban on social media accounts for children under 16, and the United Kingdom is considering similar restrictions. These efforts reflect a broader trend of building protections for young people into both technology and policy, even as debates about privacy, autonomy, and effectiveness continue.