How states can enforce social media age limits without sacrificing privacy

COMMENTARY | For state lawmakers across the nation, the debate isn’t whether to act on children’s online safety, but how to best enforce protections.
Concerns are mounting over social media’s impact on children’s well-being. Today, kids spend an average of 5 to 7 hours per day on screens, and the unrestricted nature of the internet has exposed them to a slew of dangers, including sexually explicit material, drug-related content, cyberbullying and more. Research has also associated excessive social media use among children with mental health challenges.
As of August 2025, 13 U.S. states have passed laws restricting children’s access to social media, with more on the way. While these laws may seem straightforward on the surface, implementation has been murky. How can sites verify who is underage without infringing on users’ privacy or making the sign-in process cumbersome? And how should platforms implement varying restrictions across a patchwork of different states?
A Flawed Age Verification Landscape
When many people think of online age verification, self-declaration methods come to mind. Often, this relies on a single question: “When is your birthday?” Passing these checks can be as easy as clicking a button or two. These checks emerged as a compliance mechanism for the Children’s Online Privacy Protection Act, passed in 1998, and were designed for a different era of the internet, years before the first mainstream social media sites, such as Friendster, Myspace and Facebook, had even launched.
In recent years, more secure methods, like document verification, have emerged. Here, a user uploads a government-issued photo ID, which is then compared against a selfie the user provides. However, this method has its limitations. For one, many children do not have government IDs, making it impossible to verify their age against a trusted source.
Some advocates also warn that document verification may infringe on the rights of adult users and leave them exposed to privacy vulnerabilities. These fears are not unfounded. Take, for example, the recent data breach of Tea, a women’s dating safety app. The breach exposed users’ ID photos and other personal data, leaving them vulnerable to harassment, online stalking and identity theft.
Platforms themselves have also expressed concern about the way many of these state laws are currently written. Bluesky, for example, blocked access for users of all ages in Mississippi, claiming that the state’s ID requirement laws would “fundamentally change how users access Bluesky.” Bluesky also expressed concern about building unique compliance systems for each jurisdiction. This signals a broader challenge: fragmented state policies may unintentionally create a checkerboard internet, complicating compliance for platforms and leaving regulators with uneven enforcement. Geolocation can determine where a user’s access request originates, and therefore which state’s age restrictions apply, but VPNs make this very easy to bypass.
For state leaders, the question becomes: How do you balance the legitimate goal of child safety with the operational realities of enforcement?
A Smarter Path Forward
Age verification tools aren’t failing us; what’s failing is the mindset behind the government’s enforcement and platforms’ implementation. Too often, platforms treat age verification as a regulatory hurdle or a compliance checkbox rather than a core component of user safety. The “move fast and break things” approach may help tech companies scale, but when kids’ safety is at stake, platforms need to be secure by design.
States have an opportunity to shift the conversation by offering implementation guidelines. Any company storing sensitive data, not just banks and government agencies, needs to think and act like an identity verification provider, ensuring that safety is a core feature, not an afterthought. This is a win-win for everyone involved: States can keep their younger constituents safe; older, authorized users can enjoy privacy; and online platforms can safeguard their reputations and avoid costly breaches.
Legislators should also consider emerging technologies as a potential solution. Artificial intelligence-powered age estimation is a good example. This technology uses biometric analysis of a user’s facial features to estimate their age. No sensitive personal documents are required, and the biometric data is discarded immediately after use. It’s a low-friction supplement for platforms looking to strengthen compliance without overburdening users or exposing them to privacy risks.
State leaders are right to act on the risks social media poses to children, but fragmented rules and outdated verification tools create real challenges for both regulators and platforms. By looking toward AI-enabled solutions and promoting a secure-by-design framework, governments can help ensure that child safety laws are effective, privacy-preserving, and scalable. With the right guidance and technologies, state legislators can lead the way.




