Meta, the parent company of Instagram, has unveiled a series of enhanced safety features aimed at protecting teenage users on its platforms. The move comes amid rising pressure on social media companies to prioritize the mental health and online safety of younger audiences.
Announcing the changes on Wednesday, Meta said it had taken down hundreds of thousands of adult-run accounts that were either posting sexualized comments about or soliciting explicit images from children under 13. According to the company, 135,000 accounts were removed for leaving inappropriate comments, while another 500,000 were taken down for engaging in harmful interactions with minors.
Among the new features is a simplified option that lets teens block and report an account in a single tap. Meta also introduced in-message safety notices reminding teens to stay cautious in private chats and to report anything that makes them uncomfortable. According to the company, teens blocked over one million accounts and reported another million after seeing these alerts.
Meta’s safety push also includes the use of artificial intelligence to detect when users misrepresent their age. The system flags accounts suspected of being underage and automatically converts them into teen accounts, which come with stricter safety settings. Teen profiles are set to private by default, and direct messaging is restricted to people they already follow or have mutual connections with.
These latest safeguards follow actions taken in 2024, when Meta made all new teen accounts private by default as part of its broader child protection strategy.
Despite the improvements, Meta remains under intense legal scrutiny. The company is currently facing lawsuits from dozens of US states, which accuse it of deliberately designing addictive features on Facebook and Instagram that negatively affect children’s mental health.