India is taking a firm stand against the flood of fake media spreading online.
The government’s Ministry of Electronics and Information Technology has unveiled new draft regulations requiring every piece of AI-generated content—videos, images, or audio—to carry a clear, visible label marking it as synthetic.
As described in a detailed report by Al Arabiya, the move is aimed at curbing the rapid spread of deepfake videos that have recently gone viral across social media.
The proposed rule would make labeling mandatory: a visible label must cover at least ten percent of the visual area, or a disclosure must be announced within the opening seconds of an audio clip, so viewers know right away when something isn’t real.
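What would a ten-percent check actually look like? The draft doesn’t specify how coverage is to be measured, so here’s a minimal sketch under one plain assumption: the label is a rectangle, and coverage is its area as a fraction of the frame.

```python
def label_meets_threshold(frame_w: int, frame_h: int,
                          label_w: int, label_h: int,
                          threshold: float = 0.10) -> bool:
    """Check whether a rectangular label covers at least `threshold`
    (ten percent by default) of the visual frame.

    Assumption: the draft does not define a measurement method, so this
    sketch uses simple bounding-box area as a stand-in.
    """
    return (label_w * label_h) / (frame_w * frame_h) >= threshold

# On a 1920x1080 frame, the label needs at least 207,360 square pixels:
print(label_meets_threshold(1920, 1080, 640, 360))  # True  (~11.1% of the frame)
print(label_meets_threshold(1920, 1080, 480, 270))  # False (~6.3% of the frame)
```

Even this toy version shows why a measurable number matters: a regulator, a platform, and a creator can all run the same arithmetic and get the same answer.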
It’s not just a suggestion; the government means business. Platforms hosting user-generated content will have to ensure creators disclose whether their uploads were made using AI.
If they fail to comply, companies could lose their safe-harbor protection under India’s IT Act, a scenario that, as explained in a Reuters analysis, might send shockwaves through Silicon Valley and beyond.
Recent months have seen a surge in convincingly fake clips of politicians, actors, and influencers. Officials worry that deepfakes could influence elections or inflame public tensions.
A policy overview from The New Indian Express noted that the Ministry is especially alarmed by how quickly such videos go viral before fact-checkers can debunk them.
In a country with over 900 million internet users, a few clicks can spark chaos.
Now, I’ve got to say—it feels like we’ve reached a breaking point. For years, people brushed off deepfakes as a niche tech gimmick.
But when you can’t tell a real leader’s speech from an AI-generated one, that’s no longer science fiction; that’s a trust crisis.
A brief from NDTV Profit explained that the new guidelines will also require platforms to maintain metadata identifying the origin of any synthetic media. No more stripping out the evidence.
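What might that origin metadata look like? The draft doesn’t publish a schema, so every field name below is hypothetical; the sketch just shows the shape of a record a platform could retain alongside a synthetic upload.

```python
import hashlib
import json
from datetime import datetime, timezone

def make_provenance_record(media_bytes: bytes, generator: str,
                           uploader_id: str) -> str:
    """Build a minimal origin-metadata record for a synthetic upload.

    Hypothetical schema: the draft requires platforms to retain origin
    metadata but names no fields, so "generator", "uploader_id",
    "sha256", and "created_at" are illustrative choices only.
    """
    record = {
        "synthetic": True,                                  # the disclosure itself
        "generator": generator,                             # tool that produced the media
        "uploader_id": uploader_id,                         # who uploaded it
        "sha256": hashlib.sha256(media_bytes).hexdigest(),  # ties the record to this exact file
        "created_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record, indent=2)

print(make_provenance_record(b"<video bytes>", "some-generator-model", "user-42"))
```

The hash is the interesting part: it binds the record to one specific file, so a server-side copy of the record still identifies the media even if the file’s embedded metadata gets stripped.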
Critics, however, are worried this could slow innovation. Startups might face extra compliance costs, while major platforms like YouTube or Meta could struggle to monitor billions of uploads.
Still, many experts argue the change is overdue. As one commentary from Hindustan Times put it, “If we can’t see what’s fake, we’ll soon stop believing what’s real.”
And honestly, that hits the nail on the head.
The global picture makes this even more interesting. Other jurisdictions, including the European Union and the United States, are exploring similar labeling requirements.
But according to a comparative piece from Channel News Asia, India’s 10 percent rule is one of the most specific and measurable standards yet proposed.
It could set a precedent that shapes how the world handles synthetic media.
It’s fair to ask, though: how will these labels be enforced? AI can generate millions of pieces of content daily. Detecting every fake is like catching smoke.
Yet, as digital policy researchers noted in Storyboard18’s coverage, the rules will initially apply to “Significant Social Media Intermediaries”: platforms with at least five million registered users in India.
That’s a pragmatic starting point, even if it doesn’t catch everything.
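In code, the applicability test is almost embarrassingly simple, which is rather the point of a bright-line threshold. A sketch, assuming the article’s five-million figure and nothing else:

```python
SSMI_THRESHOLD = 5_000_000  # registered users, per the reported draft

def is_significant_intermediary(registered_users: int) -> bool:
    """Return True if a platform crosses the SSMI user threshold.

    Sketch only: real applicability under the IT Rules involves more
    than this single number; the threshold is simply the bright line
    the article describes.
    """
    return registered_users >= SSMI_THRESHOLD

print(is_significant_intermediary(7_200_000))  # True: rules apply
print(is_significant_intermediary(1_500_000))  # False: below the line
```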
Personally, I think this move signals something bigger—a societal turning point.
The internet, once the wild west of content, is getting its first real sheriff in town.
These labels might not solve everything, but they’ll make people pause, think, and question. Maybe that’s what we’ve been missing all along.
For now, the draft remains open for public feedback. But if it becomes law, every video, meme, or clip created by an algorithm will need to wear its “AI-made” badge loud and proud.
And honestly? That might just be the most human thing we’ve done for the internet in years.


