With the release of Sora 2, the landscape of “deepfakes,” synthetic media designed to deceive the viewer, just got a lot more complicated. For years, experts warned that deepfake technology could be used for mass manipulation and propaganda. Now that AI models can make fabricated videos look eerily real, that time has arrived.
Already, bad actors are flooding social media with videos that portray Black women as angry SNAP recipients with ten kids, or as fraudsters, trading on the same decades-old racist tropes. The production costs are negligible, but the psychological impact is profound. Every time such a clip is watched, shared, commented on — even disbelieved — it sinks a little deeper into the sediment of public imagination.
Democratizing Manipulation
What makes this moment distinct is not only that fakes can be made, but that they can be made instantly and personally. The democratization of deception means that anyone, anywhere, can fabricate a moment that feels authentic: a candidate caught uttering a slur; a protester torching a building; a public servant pocketing cash. The images are plausible enough to inflame, too perfect to ignore. And unlike the propaganda of the past, these aren’t broadcast en masse — they’re algorithmically tailored, microdosed to each of us according to our fears and desires.
Once, seeing was believing. Now, seeing is merely an invitation to argue. The philosopher Jean Baudrillard warned, long before the web existed, that we would one day live in a world of “simulacra” — copies without originals. That day is here. The danger is not only that we’ll mistake the fake for the real, but that, overwhelmed by uncertainty, we’ll begin to believe nothing at all.
This corrosion of trust isn’t an abstract thing. Democracies depend on shared notions of truth, however fragile. Once those erode, governance becomes theater, and the loudest, most convincing performance wins. In that light, the emergence of AI-generated video feels less like an innovation and more like a constitutional crisis. Imagine the next election cycle: a video surfaces of a presidential candidate confessing to a crime. Within minutes it floods social media, gets picked up by local outlets, and spreads across partisan networks. Even if the video is debunked, the damage is irreversible. The lie will have done its work.
The press, once the final arbiter of verification, now competes in a landscape where falsehoods move at the speed of light and truth limps along, fact-checked and half-believed. Reporters must fight not only the fabricators but the fatigue of audiences who have lost faith in what’s real. The phrase “that’s a deepfake” has already become the new “fake news” — a get-out-of-truth-free card for anyone cornered by evidence.
Regulation Is Messy
But perhaps the most devastating casualties are personal. Women, especially, have borne the brunt of deepfake abuse — their faces grafted onto pornographic videos, their reputations shattered overnight. Victims have little legal recourse, because the law, like the rest of us, is still catching up. Is this the fault of the technology itself, or of the society that wields it? Either way, the result is the inevitable expression of a culture that builds faster than it thinks.
The great irony is that the same AI engines capable of fabricating deception can also, in theory, detect it. Researchers are racing to build watermarking systems, digital provenance tools, forensic algorithms that can trace authenticity through pixels and metadata. Yet it’s hard not to feel that the arms race between creation and detection will always favor the former. As one researcher told Nature, “Detection is an uphill battle against an opponent that learns faster than we do.”
Governments are groping for the right response. The European Union has moved toward requiring clear labeling of AI-generated content, while China has mandated watermarks on synthetic media. The United States, predictably, lags behind — torn between fears of censorship and fears of manipulation. One hopeful sign is the Take It Down Act, signed into law last May, which makes it a federal crime to knowingly publish, or threaten to publish, intimate images without a person’s consent, including AI-generated deepfakes.
Meanwhile, platforms like YouTube and X (formerly Twitter) promise moderation and transparency, but their algorithms continue to reward virality, not veracity. In the attention economy, truth is an unprofitable product. This, more than anything, may be the defining feature of our time: not the abundance of lies, but the apathy they induce. Each new fake sows not outrage but resignation. We begin to treat unreality as the background radiation of daily life. It’s the same emotional numbness that accompanies the news of another mass shooting, another melting glacier — a fatigue so deep it feels like acceptance.
And yet, resignation is precisely what the manipulators depend on. The goal is not to convince us of any particular falsehood, but to convince us that truth itself no longer matters. In that sense, the real threat of AI manipulation isn’t disinformation; it’s nihilism. The citizen who believes nothing is easier to rule than the citizen who believes the wrong thing.
What would it take to resist? Some of the answers are technical — watermarking, authentication, provenance chains. Others are cultural: rebuilding media literacy, teaching skepticism without cynicism, reminding ourselves that discernment is not paranoia but citizenship. But the deeper work will be moral. We will have to decide, collectively, that reality is still worth defending — that shared truth is not a relic of the analog age but the foundation of any humane future.
There is still a narrow window to act. The same public indifference that delayed the cleanup of lead and the regulation of tobacco could just as easily delay our response to this invisible toxin. But the stakes are higher now. Once trust is lost, there’s no vaccine to restore it.
When historians write about the early 21st century, they may not remember it as the age of social media or even the age of artificial intelligence. They may call it something darker — the age when reality itself became optional. And they will wonder, as we now wonder about the people who drank from lead pipes, how we could have known so much, and cared so little.