In 2025, Instagram remains one of the most widely used platforms among teens, but mounting evidence shows that its safety features are failing young users.
Despite Meta’s promises of “Teen Accounts” and stricter protections, research and advocacy groups report that harmful content, predatory accounts, and algorithmic risks still reach underage audiences.
Investigations reveal that features meant to shield teens are either ineffective, easy to bypass, or inconsistently applied. This case study examines how Instagram’s safety systems fall short, and the risks these failures create for youth worldwide.
Methodology
This study draws on a mix of independent audits, academic research, advocacy investigations, and platform disclosures:
- Investigative Reports: Reuters found that only 8 of 47 tested teen safety features worked as intended.