14 July 2025

VEO3 and the Real Threat of AI Video

The world of technology is no stranger to breakthroughs, but every so often, something comes along that truly feels revolutionary. Google’s VEO3 is one such marvel. As an enthusiast of both technology and storytelling, I can’t help but feel a surge of excitement at the possibilities this tool unlocks. VEO3 allows anyone to turn a single photo into a vivid, cinematic video, complete with synchronized dialogue, ambient sounds, and remarkably realistic visuals. The process is seamless and intuitive, lowering the barrier to entry for high-quality video production in ways that were unimaginable just a few years ago.

In only a few weeks since its launch, tens of millions of VEO3-generated videos have flooded social media, sparking viral trends and a wave of user-driven content. It’s not just about speed and accessibility; it’s about democratizing creativity and giving everyone the power to tell their stories in stunning new ways.

Since everyone is raving about VEO3 (ourselves included), I thought it would be productive to voice a note of caution, particularly about the threat it poses to our trust in media.

Perhaps the most profound worry centers on the erosion of public trust in video evidence. For decades, video has served as one of society’s most reliable forms of documentation. Now, with VEO3’s ability to generate hyper-realistic videos that are nearly indistinguishable from genuine footage, that trust is under threat. We are entering an era where the authenticity of any video can be called into question. This phenomenon, sometimes called the “liar’s dividend,” means that as fakes become more convincing, even real videos can be dismissed as fabrications. The implications for journalism and social accountability are immense. When anything can be faked, everything can be doubted, and the very foundation of truth in media begins to crumble.

The situation is further complicated by VEO3’s accessibility. No longer is sophisticated video manipulation the exclusive domain of experts; anyone with a smartphone can now create convincing deepfakes. This democratization of deception makes it far harder for fact-checkers, journalists, and the public to separate truth from fiction. In a world already struggling with misinformation, the ability to fabricate events, fake eyewitness accounts, or stage political incidents with a few taps could have devastating consequences. During sensitive periods such as elections or social unrest, AI-generated videos can go viral before anyone has a chance to debunk them, shaping public opinion in ways that are difficult, if not impossible, to reverse.

These risks are not abstract. In regions with existing ethnic or religious tensions, such as Indonesia, the potential for AI-generated videos to inflame divisions or incite violence is very real. Social fault lines are easily exploited by convincing fake content, and the rapid spread of misinformation can have dire consequences for peace and stability.

While Google has implemented safeguards such as visible and invisible watermarks (via SynthID) and has promised to release public detection tools, many critics argue these measures fall short. Visible watermarks can be cropped or obscured, and detection tools for the invisible ones are not yet widely available or foolproof. Even with some restrictions on prompts, journalists and researchers have demonstrated that provocative or misleading videos can still be created with minimal effort. The technology is evolving faster than the tools designed to keep it in check.
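To make the fragility concern concrete: SynthID’s actual scheme is proprietary and reportedly far more robust than anything shown here, but a minimal toy sketch of a naive invisible watermark illustrates why this is an arms race. Everything in the snippet below, the LSB scheme, the key, the stand-in frame, is an illustrative assumption of mine, not Google’s method.

```python
# Toy illustration (NOT SynthID's algorithm): write a key-derived
# pseudorandom bit into each pixel's least-significant bit, then
# detect by checking how many bits match the expected pattern.
import numpy as np

def embed_watermark(frame: np.ndarray, key: int) -> np.ndarray:
    """Overwrite each pixel's LSB with a key-derived pseudorandom bit."""
    bits = np.random.default_rng(key).integers(0, 2, size=frame.shape, dtype=np.uint8)
    return (frame & 0xFE) | bits  # imperceptible: each pixel changes by at most 1

def detect_watermark(frame: np.ndarray, key: int) -> float:
    """Fraction of LSBs matching the expected pattern: ~1.0 if marked, ~0.5 if not."""
    bits = np.random.default_rng(key).integers(0, 2, size=frame.shape, dtype=np.uint8)
    return float(np.mean((frame & 1) == bits))

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(720, 1280), dtype=np.uint8)  # stand-in video frame
marked = embed_watermark(frame, key=42)

print(detect_watermark(marked, key=42))   # 1.0: watermark fully intact
cropped = marked[100:500, 200:900]        # an everyday edit: crop the frame
print(detect_watermark(cropped, key=42))  # ~0.5: no better than chance
```

A simple crop misaligns the pseudorandom pattern and the mark becomes undetectable, which is exactly why production systems try to embed marks that survive cropping, re-encoding, and filtering, and why reliable public detection tooling matters as much as the watermark itself.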

The legal system, too, is struggling to keep pace with VEO3’s rapid evolution. The use of copyrighted material in AI training, and the ease with which deepfakes of public figures or celebrities can be created, raise unresolved questions about ownership and consent. There are also growing concerns about the spread of non-consensual explicit content, impersonation scams, and other malicious uses that current safeguards may not adequately prevent.

Taken together, these concerns paint a sobering picture. Misinformation, the erosion of trust in media, the limitations of current safeguards, and unresolved legal and ethical questions all demand careful consideration. The excitement around VEO3 is well-deserved, but so too is the caution.

Strong, clear rules are needed to address the new forms of misinformation and manipulation that VEO3 makes possible. Publicly accessible, reliable tools for detecting AI-generated content are essential, not just for journalists and fact-checkers but for everyone. Media literacy efforts must help people understand both the capabilities and the limitations of AI video, fostering a more skeptical and informed public. Ongoing dialogue among artists, technologists, ethicists, and affected communities will be crucial as we learn to use these powerful tools, and to constrain them.