Across Florida, teenagers use artificial intelligence (AI) chatbots every day. Some use them for fun, asking which beach is worth the drive. Others rely on them for help with their Common App essays because their school counselor can't meet for weeks. Still others live in areas without accessible mental health resources and turn to AI when no one else is available.
In the vast majority of cases, teenagers aren’t using AI as a replacement for human connection. They’re using it because it’s there when humans aren’t.
Despite this wide usage, lawmakers in Washington could soon fundamentally reshape the relationship Florida's teens have with chatbots. Under the GUARD Act, introduced by Senator Josh Hawley with rare bipartisan support, every AI chatbot would be required to implement age verification for all users, and minors would be prohibited from accessing any AI companion, defined as any chatbot designed to simulate "interpersonal or emotional interaction, friendship, companionship, or therapeutic communication." Violations carry civil penalties of up to $100,000 per offense.
Over the past few years, AI chatbots have been implicated in several horrific episodes of self-harm and violence. Suicides and shootings across the country have left the American public calling on Congress to act. Additionally, the American Psychological Association has raised legitimate concerns about the technology’s effects on adolescent development, specifically the fear that “adolescents’ relationships with AI entities also may displace or interfere with the development of healthy real-world relationships.”
But good intentions are not good legislation. The GUARD Act responds to real concerns by casting the widest possible net, and a bill that tries to protect everyone from everything tends to protect no one from anything.
The bill’s definition of a prohibited AI companion is expansive enough to capture a range of AI applications its authors almost certainly didn’t intend to target. Under current definitions, an AI tutoring tool that helps students improve their grades when teachers aren’t available would be covered. So would a mental health app that a teenager in rural Florida uses because the nearest therapist is forty miles away. As would a communication aid used by an autistic teenager to practice social interactions. The bill carves out chatbots limited to “a narrow specified purpose,” but the line between a narrow purpose and a companion-style interaction is ambiguous, and the safest corporate response is to verify everyone.
In drafting GUARD, the bill's sponsors failed to consider the reality that teenagers are sophisticated users of technology who will always find a way to circumvent restrictions. When Australia and the United Kingdom imposed age-verification mandates on social media, teenagers routed around them within days using VPNs – tools that are free, widely available, and take minutes to install. The mandates didn't stop determined minors. They stopped compliant platforms and lulled parents into a false sense of security.
The GUARD Act also rests on an assumption that children can be protected from technology that will shape the rest of their lives. Today's teenagers will use AI throughout their careers whether Congress acts or not, with AI fluency likely to be a baseline professional skill by the time they enter the workforce. The students who develop the habit of engaging with AI now, testing its limits, identifying its failures, and learning to use it safely and critically, will be equipped for that world. The ones told by federal law that it isn't for them yet will not be. Sheltering a teenager from AI in 2026 is roughly as productive as sheltering them from the internet in 1999.
Congress has convinced itself that the responsible thing to do is to stand between American teenagers and artificial intelligence. It isn't. Teenagers who want access will find it – through VPNs, foreign apps, and platforms with no safety infrastructure at all. The GUARD Act doesn't keep teens off AI. It keeps them off the platforms with crisis protocols, content filters, and disclosure requirements, and pushes them toward ones with none. That isn't protection; it's abandonment dressed up as legislation.
The responsible thing is to ensure that teenagers encounter AI with guidance and the institutional support that regulated platforms currently provide. The GUARD Act would see Washington walk away from that responsibility, leaving America's teens to engage with AI unsupervised. That isn't protecting them – it's abandoning them, and it's a consequence the bill's sponsors have failed to consider.