Senator Josh Hawley is at it again, this time with a draft law that zipped around Capitol Hill and could upend what Americans can say to machines.
The proposal, known as the Guidelines for User Assurance and Responsible Dialogue – the GUARD Act – would ban AI companion chatbots for minors and require all conversational systems to publicly disclose they’re not human.
The saga began when word got out that Hawley’s office was privately circulating the document, drawing sharp attention all over Washington.
You can almost hear the groans from Silicon Valley as companies across the tech industry brace for what could be a significant shift in regulation.
At its core, the bill is simple; its implications are anything but.
Services featuring AI companions would have to include robust age-verification mechanisms to ensure that users under 18 cannot interact with emotionally manipulative or suggestive bots.
There’s also a provision calling for chatbots to tell users loud and clear that they are AI – not once, but consistently throughout interactions.
Hawley’s thinking, according to people familiar with the proposal, stems from multiple hearings in which parents described chilling scenarios involving children and “empathetic” chatbots gone awry.
It sells itself as a line in the sand, though to some critics it feels more like a wall.
And across the country, California has just moved in a similar direction.
Now the state is forcing digital assistants and chatbots to be up front about not being human – a development that’s sending ripples well beyond tech circles.
The new rule, outlined in newly passed state legislation, doesn’t prohibit chatbots for users under 18 but does require full transparency.
It’s a big move, and actually quite a shrewd one. Rather than telling users “no,” it tells them “know” – let people see what they’re getting and decide for themselves.
Not every bill has gone quite as smoothly, however. Gov. Gavin Newsom recently vetoed a more stringent proposal that would have outright barred children from using AI chatbots, arguing it was “overly broad” and might inadvertently block educational or therapeutic tools.
It’s a tricky dance, and the veto reflects it.
Nevertheless, Newsom signed a number of other AI-related bills into law, among them regulations aimed at deepfake pornography and fake news videos – moves that highlight the delicate balancing act states are trying to achieve before Congress acts.
Across the Atlantic, however, the European Union is taking a different tack – not only regulating AI, but also spending big on it.
In November, the European Commission announced a €1 billion plan to supercharge AI development in areas from health to energy, part of a broader effort to reduce reliance on foreign technology.
The unveiling of the Apply AI Initiative, part of Europe’s new push for digital sovereignty, suggests that policymakers there are simultaneously betting on control and growth.
It’s a whole different tone from Hawley’s hardline stance – maybe more carrot, less stick.
The irony is that all these efforts are happening as the International Monetary Fund warns that most countries still lack the ethical or regulatory frameworks to handle the technology’s impact.
In a recent speech at the IMF, Managing Director Kristalina Georgieva got straight to the point: we are building AI faster than we are building the guardrails.
That hits hard. It’s not just a U.S. problem – it’s a global one.
Personally, I get it. Lawmakers are racing to protect kids before the harm spreads, but I can’t stop thinking about how much fear is driving all of this.
But blanket bans punish everyone for the sins of a small minority. Wouldn’t it make more sense to require ethical design standards, independent transparency audits, and user education instead?
We could make chatbots safer without throwing them out the window. (Though there’s something to be said for politicians paying attention before the damage becomes truly irreversible – that in itself would feel new.)
Should this bill gain traction, it could spark the first real national debate on emotional AI.
Perhaps that’s what we need: not another moral panic, but a messy, human conversation about what role machines should actually play in our lives.