My fake news experiment with Google's Veo3
Lunch-break villain: it took 28 minutes to create chaos in Springfield
I know what you're thinking. "Oh great, another article about AI being scary." And yes, you're right to be tired of those pieces. But stick with me here, because what I'm about to show you isn't some dystopian future scenario—it's what happened during my lunch break today with the brand new Veo 3 video generator.
Total time invested: 28 minutes and 12 seconds
Product used: Google Veo 3
Total cost: $8 in AI credits
Total damage to my faith in democracy: Immeasurable
The story: the great yacht scandal of Springfield (Population: Zero)
It started with a simple question:
How easy is it to create convincing political misinformation using AI before lunch is over?
The answer, as it turns out, is "disturbingly easy".
Here's the fake political scandal: a fictional mayor proposing to convert his town's only high school into a yacht manufacturing facility. Because nothing says "economic development" like luxury boats, right? The story would have everything a good controversy needs: public outrage and enough "eyewitness" accounts to make it feel legitimate.
The recipe for disaster
Step 1: The concept (3 minutes)
I fed the basic idea to Claude AI: "Create a believable political scandal involving a mayor, schools, and public outrage." Within minutes, I had a complete narrative framework that was both plausible and inflammatory. The AI suggested the yacht manufacturing angle—because apparently, artificial intelligence has a better understanding of local politics than most actual politicians.
Step 2: The script (3 minutes)
Claude then generated detailed breakdowns for 10 separate video clips, complete with dialogue, scene descriptions, and technical specifications. Each clip would show a different perspective on the same "event": news anchors, the mayor himself, outraged citizens, secret staff meetings (which I left in the video not for legal reasons, but because I still needed time for lunch), and street protests.
Step 3: The video generation (15 minutes)
Here's where it gets genuinely interesting. With Google's Veo 3 video generator, it's finally possible to get audio and video in sync. I generated 18 clips of completely synthetic footage and kept the best 10, each clip limited to 7 seconds.
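For the skeptics, the back-of-the-envelope math (the clip counts and total cost are from my run above; splitting the cost evenly per clip is my own simplification):

```python
# Numbers from the experiment; the even cost split is an assumption.
CLIPS_GENERATED = 18
CLIPS_KEPT = 10
CLIP_SECONDS = 7
TOTAL_COST_USD = 8

generated_footage = CLIPS_GENERATED * CLIP_SECONDS  # seconds rendered in total
final_footage = CLIPS_KEPT * CLIP_SECONDS           # seconds that made the cut
cost_per_clip = TOTAL_COST_USD / CLIPS_GENERATED    # rough price per attempt

print(generated_footage, final_footage, round(cost_per_clip, 2))
# → 126 70 0.44
```

Roughly 44 cents per synthetic "eyewitness" shot, and over half the rendered footage was still usable.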
The AI generated:
A professional news anchor delivering breaking news
A middle-aged "mayor" at a podium making the outrageous proposal
Shocked audience reactions
A street protest with signs reading "SCHOOLS NOT YACHTS"
Social media livestreams from "concerned citizens"
Step 4: The polish (7 minutes)
Final editing to ensure continuity, remove unwanted AI-generated subtitles, and create the illusion that all these clips came from the same event.
The challenges that make it real
The most unsettling part wasn't how easy it was—it was discovering the technical challenges that actually improve the final product's believability.
Challenge #1: The Cinematic Problem
Veo 3's default output looks too polished. Real misinformation is grainy, shaky, and amateur-looking. I had to specifically prompt for "phone camera quality" and "static shots" to make the footage believable. Turns out, bad production values are a feature, not a bug.
Challenge #2: The Gibberish Text Issue
AI-generated protest signs often contain nonsensical text that looks realistic from a distance but reads as gibberish up close. "SPRINGBEELD HIGH SCHOOL" was a dead giveaway. The solution? Strategic blurring and camera angles that avoid close-ups.
Challenge #3: Character Continuity
Each 7-second clip exists in isolation—the AI can't remember what the "mayor" looked like in previous scenes. This forces you to create new scenes with different people, which paradoxically makes the story more convincing because it appears to have multiple independent sources. I also cut two of the 7-second clips in half to improve continuity.
Challenge #4: The Subtitle Problem
Veo 3 randomly adds auto-generated subtitles that need to be manually cropped out. Nothing screams "AI-generated" quite like floating captions that say "[INDISTINCT CHATTER]."
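Cropping that caption band is mechanical enough to script. Here's a minimal sketch that builds an ffmpeg `crop` filter string for trimming the bottom of the frame; the 15% band height is my guess for where the burned-in captions sit, so adjust it per clip:

```python
def subtitle_crop_filter(width: int, height: int, band: float = 0.15) -> str:
    """Build an ffmpeg crop filter trimming a subtitle band off the bottom.

    `band` is the fraction of frame height to discard; 0.15 is an
    assumption about where the burned-in captions land, not a Veo 3 spec.
    """
    keep = height - round(height * band)  # pixels of frame height to keep
    return f"crop={width}:{keep}:0:0"     # crop=w:h:x:y, anchored top-left

print(subtitle_crop_filter(1280, 720))
# → crop=1280:612:0:0
# Apply with e.g.: ffmpeg -i clip.mp4 -vf "crop=1280:612:0:0" clean.mp4
```

You lose a sliver of image, but a slightly tighter frame reads as "cropped phone video" anyway, which only helps the amateur look.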
I created a complete political scandal with different "sources" and multiple perspectives in less time than it takes to order and receive a pizza in New York during a lunch break. And I'm not even good at this.
Problems we need to address
The speed problem
Traditional fact-checking takes hours or days. AI misinformation generation takes minutes. Fact-checkers are riding bicycles in a Formula 1 race.
The volume problem
If I can create one convincing fake story in 28 minutes, imagine what dedicated bad actors can produce. We're talking about the potential for dozens of fabricated scandals per day, each with multiple "sources" and angles. AI video tools keep evolving: this influencer parody demonstrates how accessible professional-quality content creation has become. Google's Veo 3 and similar tools are reshaping who can make and fake stories.
The cost problem
$8. That's less than I spent on lunch today. The barrier to entry for sophisticated political misinformation is now lower than the cost of a movie ticket.
The detection problem
Current AI detection tools are playing catch-up with generation tools. By the time we can reliably detect today's AI-generated content, tomorrow's will be indistinguishable from reality.
And look, I know this sounds like I'm being alarmist. I know you're probably thinking, "Surely people aren't stupid enough to fall for AI-generated political scandals." But remember, these are the same people who believed that a pizza restaurant was running a secret political conspiracy from its basement—a basement that didn't exist.
The difference is that my fictional yacht scandal has video evidence, multiple witnesses, protest footage, and behind-the-scenes recordings. It has everything a real scandal would have, except that somebody had to properly edit the few AI artifacts out of the video.
The answer isn't to ban the technology—that ship has sailed, probably to wherever they're manufacturing those yachts. The answer is to radically rethink how we verify information and educate people about what's possible with AI.
We need fact-checkers who can work faster than AI misinformation spreads. We need media literacy education that goes beyond "check your sources" to "assume everything could be fake until proven otherwise." And we need platforms that can detect and flag AI-generated content before it goes viral (well, that's a dream, I know).
Most importantly, we need to have this conversation now, while we still can tell the difference between real scandals and fake ones. Because at the rate this technology is advancing, that window is closing faster than a politician's promise after election day.
Another video I made with the tool:
The author wants to note that no actual mayors were harmed in the making of this fake scandal, though several yachts owners may have been disappointed by the lack of a new manufacturing facility.
Another great post, Henk.
"We need fact-checkers who can work faster than AI misinformation spreads." - feels like this is impossible. It's always going to be retrospective, and as Yuval Noah Harari points out, truth is much more expensive, much harder work, and much more boring (and therefore less viral) than fiction. Tbh, all the solutions to the problem currently seem impossible - which is to say, I'm not optimistic there are solutions beyond total dissolution of trust online. Maybe with the exception of school education programmes which teach kids not to believe anything until it's confirmed by reputable sources. But that too is problematic. While we wait for the inevitable, we just do our best!