Teens everywhere are familiar with the endless social media scroll—a thumb gliding instinctively across the screen. Whether it’s Instagram, TikTok, or other platforms, social feeds are ingrained in teen life. According to the Pew Research Center (PRC), 74% of teens believe social media helps them feel more connected to their friends and 63% use social media as an outlet for creativity. The PRC also finds that social media gives teens, especially marginalized teens, access to resources and mental health aids otherwise unavailable to them.
Still, social media can be harmful. In the same PRC survey, nearly half of teens said social media had negatively affected them and hurt their sleep. Further, according to the U.S. Surgeon General, social media use has been associated with higher rates of depression, anxiety, and low self-esteem.
In response to these risks, governments have implemented several age-limiting policies for these platforms. Unfortunately, these policies fall short: they struggle to protect vulnerable groups without relying on censorship or the collection of sensitive data.
One example is the United Kingdom’s Online Safety Act. The act aims to limit the spread of both illegal and legal-but-harmful content, such as online bullying, misinformation, propaganda, and recruitment into radicalized groups. While its mission is commendable, the bill offers little evidence to justify its regulations, risks limiting democratic freedoms, grants regulators excessive authority, and suffers from major enforcement flaws. Its ambiguity has also raised concerns about censorship. Because the bill extends to legal but harmful content, it relies on platforms themselves to classify such content and block access to it. That duty is triggered whenever a service “has reasonable grounds to believe that…there is a material risk of the content having, or indirectly having…adverse physical or psychological impact,” as the act puts it. The act, however, never narrows this broad scope with examples or objective criteria.
This vagueness leaves both enforcers and service providers to guess at what counts as a “material risk.” And because platforms face penalties and public pressure, they may act overly cautiously in gray areas like political debate, identity, and health information. Without clear limits, online safety bills can inadvertently censor lawful speech.
The UK is not alone in its attempts at online intervention. Australia has imposed age limits for social media and pushed platforms to expand age checks. YouTube, for instance, now uses AI-backed age estimation based on watch and search history, a policy it has extended to the United States. YouTube had already restricted individual videos on sensitive topics through demonetization and age gates, but age checks now apply across the platform.
To verify their age, users must provide a government-issued ID, a photo of themselves, or credit card information, a system also used on many U.K. platforms and forums. This method has raised concerns about platforms processing, and potentially selling, private information. Age checks depend on IDs, selfies or face scans, credit card details, names, birth dates, or biometric information. With reports of a recent leak exposing over sixteen billion login credentials, including logins for Google, Facebook, and Apple accounts, storing such sensitive information poses a clear risk.
While protecting teens is a noble goal, current restrictions are simply ineffective. Vague “material risk” standards and intrusive age checks invite censorship and the hoarding of sensitive information. Thankfully, the fix is straightforward. Lawmakers must define an objective list of harms with measurable severity. Age checks must preserve privacy, with submitted data deleted within a guaranteed timeframe. Regular transparency reports must be mandated to verify responsible data retention and moderation. Until restrictions meet these basic requirements, the United Kingdom’s Online Safety Act cannot serve as a model elsewhere, and platforms themselves must take responsibility for ensuring both teen safety and civil liberties.