By Mauro Orru and Edith Hancock


Instagram said it would alert parents if their teens repeatedly search for terms related to self-harm or suicide, stepping up efforts to protect minors as several governments weigh social-media bans for younger users.

The app owned by Facebook parent Meta Platforms wrote in a blog post on Thursday that parents who have adopted Instagram's supervision tools to monitor their children's profile activity would start receiving alerts next week.

Alerts will be triggered only when teens search repeatedly within a short period of time for terms like "self-harm," "suicide" or phrases suggesting they want to harm themselves.

The service will initially be available through emails, texts, WhatsApp messages and in-app notifications for parents and guardians in the U.S., Canada, the U.K. and Australia, expanding to other countries later this year.

The move shows that Instagram is ramping up efforts to shield younger users from potentially dangerous content as social-media platforms face renewed criticism from governments and regulators that say they aren't going far enough.

In December, Australia became the first country in the world to enact a social-media ban for under-16s, triggering a lawsuit from Reddit. Australian officials said the decision was needed to protect teens from experiencing harm on social media.

Since then, several countries, including the U.K., France and Spain, have considered banning or have introduced bills to ban social media for certain age groups of minors. Earlier this month, Spanish Prime Minister Pedro Sánchez announced plans to ban social-media access for children under the age of 16 by implementing age-verification systems.

The European Commission, the executive arm of the European Union, has also been piloting an age-verification app since July, which it says enables users to prove that they are over 18 when they attempt to access adult content.

Instagram said it had set a deliberately low threshold, a few searches for specific terms within a short period, to trigger the alerts. The company said it is erring on the side of caution, even if its systems at times notify parents when there is no real risk to their teens.

The company said it would continue to alert emergency services when its systems detect that users might be at imminent risk of physical harm.


Write to Mauro Orru at mauro.orru@wsj.com and Edith Hancock at edith.hancock@wsj.com


(END) Dow Jones Newswires

02-26-26 1252ET