Visual investigations: Here's how to do it.
"60 to 70% of visual forensics is human, not tech,” says Henk van Ess
Video has become one of the most powerful forms of evidence in modern journalism. At the same time, AI is making that evidence easier to fake and easier to manipulate. The result is a new reality: Journalists must become visual investigators.
The killings of Renee Good and Alex Pretti by federal officers in Minneapolis. Brutal takedowns of lawful citizens and aggressive clashes by Immigration and Customs Enforcement across the U.S. The detention of Tufts University doctoral student Rümeysa Öztürk, snatched off a street by agents. A chilling video of a masked, armed suspect in the kidnapping of Nancy Guthrie caught on a doorbell camera.
I produced a realistic doorbell camera video of a similarly masked assailant in a few minutes using the AI video generator Sora. That’s a problem. Government officials have repeatedly spun false narratives about these events that don’t hold up when real video comes out. That’s another problem.

I was interviewed by Alex Mahadevan for this article. It’s reprinted with permission from The Poynter Institute, a global nonprofit working to address society’s most pressing issues by teaching journalists and journalism, covering the media industry and promoting fact-checking and media literacy.
“It’s tough that the rise of AI-generated images coincides with the rise of the need for video from the spot,” said Anna Boone, digital designer at The Minnesota Star Tribune and a member of the team that analyzed video of the killings of Pretti and Good, in a recent livestream. “Everything is going up together and just creating a mass amount of content.”
The New York Times, Washington Post, CNN and the Star Tribune have tackled both issues with visual investigations — using tools and techniques to verify authenticity, debunk AI-generated content and analyze videos frame by frame. Every newsroom now needs this capability.
The Star Tribune’s work on the Good and Pretti shootings is a case in point. A team of visual journalists deconstructed bystander footage frame by frame, directly contradicting claims made by members of the Trump administration about both killings.
Their process was meticulous. Video journalist Amanda Anderson organized incoming clips on an editing timeline, using audio markers — the visible waveform spikes created by each gunshot — to sync footage from different angles. (The whole Star Tribune livestream is well worth your time, and a good investment in building visual investigation expertise.)
The team also knew what not to publish. In the Good case, they debated whether they could report that the officer who shot her, Jonathan Ross, left the scene. They could see his haircut across multiple videos but lost sight of him for a few frames between clips, so they held back.
That discipline makes visual investigations credible enough to challenge official narratives.
“It was quite clear that there needed to be work out there that was more analytical than the deluge of video that we were seeing on social media,” said Star Tribune graphics reporter Jake Steinberg.
The good news: News outlets don’t need a six-figure investigation budget or a team of 20 to start doing this work.
Archive everything
The first thing any reporter should do when they encounter newsworthy video is save it — immediately and in its original form.
“I think it was pretty clear to many of us, because of the fast-paced nature of social media, the importance of just saving video, saving copies of the videos that we wanted to explore more to our own devices, just in case things got deleted or in case things got removed from social media by whomever,” Anderson said. “Just the need to collect information as quickly and as much of it as we could.”
Social media posts get deleted. Platforms compress and strip metadata. Government agencies have restricted access to evidence in both of the Minneapolis shootings. If you don’t archive the original file, you may lose it.
Free tools for archiving include the Wayback Machine, Archive.today and MediaVault, which logs and timestamps every page a journalist visits during an investigation. For video specifically, download the original file whenever possible rather than relying on screen recordings, which degrade quality and strip metadata.
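For newsrooms that want to script this step, here is a minimal sketch of one way to grab an original file and fingerprint it, assuming the open-source downloader yt-dlp is installed; the URL and output paths are placeholders, and platform terms of service still apply.

```python
# Minimal archiving sketch: download the original file with yt-dlp
# and record a SHA-256 hash so you can later show the copy is unaltered.
# Assumes `pip install yt-dlp`; the URL below is a placeholder.
import hashlib
from yt_dlp import YoutubeDL

url = "https://example.com/watch?v=PLACEHOLDER"  # hypothetical post URL

opts = {
    "format": "best",                     # prefer a single original file
    "outtmpl": "archive/%(id)s.%(ext)s",  # keep originals in one folder
    "writeinfojson": True,                # save uploader, date, description
}
with YoutubeDL(opts) as ydl:
    info = ydl.extract_info(url, download=True)
    path = ydl.prepare_filename(info)

sha256 = hashlib.sha256(open(path, "rb").read()).hexdigest()
print(f"Archived {path} (SHA-256: {sha256})")
```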
Learn to geolocate and edit videos
“The best advice I can give is to simply do,” said Malachy Browne, enterprise director of the visual investigations team at The New York Times. “Follow a breaking news event where visuals are being shared on social media sites. Learn how to geolocate (determine where something happened) and chronolocate (and when it happened) and learn how to use the tools on Bellingcat’s investigative toolkit. Where was this video taken, when, by whom, why, what’s happening and how do you know?”
Geolocation is the bedrock skill for visual investigations. A few tools to get started:
Google Earth Pro lets reporters compare terrain, buildings and landmarks in a video against satellite imagery, including historical imagery to check whether a scene matches a claimed date.
Google Street View is invaluable for matching street-level details, such as storefronts, road markings and utility poles.
SunCalc uses shadow angles and the sun’s position to help verify the time of day a photo or video was taken. (A sketch of the underlying calculation follows this list.)
Bellingcat’s OpenStreetMap search tool helps narrow down a location when you have limited visual clues — a church near a bridge near a gas station, for example.
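To see what a SunCalc-style check actually computes, here is a rough sketch using the open-source astral library (my choice for illustration, not a tool the reporters cited); the coordinates, time and object height are placeholders.

```python
# SunCalc-style verification: compute the sun's position at a claimed
# place and time, then compare the implied shadow length and direction
# against what the footage shows. Assumes `pip install astral`.
from datetime import datetime, timezone
from math import radians, tan

from astral import Observer
from astral.sun import azimuth, elevation

obs = Observer(latitude=44.9778, longitude=-93.2650)        # Minneapolis
when = datetime(2026, 1, 15, 20, 30, tzinfo=timezone.utc)   # claimed time

el = elevation(obs, when)  # sun's angle above the horizon, in degrees
az = azimuth(obs, when)    # compass direction of the sun, in degrees

if el > 0:
    # An object of height h casts a shadow of roughly h / tan(elevation).
    shadow = 1.8 / tan(radians(el))
    print(f"elevation={el:.1f}, azimuth={az:.1f}; "
          f"a 1.8 m object casts a ~{shadow:.1f} m shadow")
else:
    print("Sun below the horizon at the claimed time; "
          "daylight footage would be suspect.")
```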
AI itself is now being used for geolocation, too. A recent Bellingcat test of 24 large language models found that Google’s AI Mode outperformed all other models in identifying locations from photos. But the researchers cautioned that AI geolocation still requires human verification.
“Visual forensics techniques often overlap with open-source intelligence techniques, so take opportunities to learn those when you can,” said PolitiFact reporter Loreben Tuquero, a member of Poynter’s AI Innovation Lab. “Knowing advanced search techniques for social media platforms can help you identify who originally posted an image or a video, and whether their posting history contains dubious content.”
Editing tools help, too. Programs like CapCut and Adobe Premiere Pro can slow down video, isolate frames and analyze audio markers — the same techniques Star Tribune visual journalists used in the Minneapolis shootings investigation.
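The audio-syncing step can also be scripted. Below is a minimal sketch that aligns two clips by cross-correlating their waveforms, assuming the audio has already been exported to WAV (for example with ffmpeg) and the soundfile, NumPy and SciPy packages are installed; the filenames are placeholders.

```python
# Rough sketch of syncing two clips by their audio, the way waveform
# spikes (e.g., gunshots) let you align footage from different angles.
# Assumes `pip install soundfile scipy numpy`.
import numpy as np
import soundfile as sf
from scipy.signal import correlate

a, rate = sf.read("angle_one.wav")
b, rate_b = sf.read("angle_two.wav")
assert rate == rate_b, "resample clips to a common rate first"

# Collapse to mono so the correlation compares overall loudness spikes.
a = a.mean(axis=1) if a.ndim > 1 else a
b = b.mean(axis=1) if b.ndim > 1 else b

# The peak of the cross-correlation gives the lag (in samples) to shift
# clip two so its waveform lines up with clip one.
lag = np.argmax(correlate(a, b, mode="full")) - (len(b) - 1)
print(f"Shift clip two by {lag / rate:+.2f}s to line up with clip one")
```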
Start with the right questions
“So, my biggest message for local newsrooms: 60 to 70% of visual forensics is human, not tech,” said Henk van Ess, an international expert in open-source intelligence and AI-powered investigations who wrote the Global Investigative Journalism Network’s guide to detecting AI-generated content. “You don’t need a Ph.D. in AI. You need reporters who look closely and ask the right questions.”
Van Ess said that before reaching for a tool, reporters should begin with a set of questions. He identified several categories of detection that journalists can learn to spot.
Provenance: Where did this first appear? Can you trace the original upload?
Time and date: Do the weather, lighting and shadows match the claimed time and place?
Location: Are landmarks, signage and street layouts consistent with the claimed location?
Technology: Are devices and infrastructure consistent with the time and place? Is the video quality consistent with the camera that allegedly captured it?
Behavioral patterns: Do people move and interact naturally, or does something feel scripted?
Physics: Do reflections, shadows and fine details — fingers, teeth, text — hold up?
InVid, TinEye and Google reverse image search can help you track down the original source of an image to answer many of these questions.
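One idea underneath these reverse-search tools is perceptual hashing: visually similar images produce similar hashes even after recompression or cropping. Here is a rough sketch with the open-source ImageHash library (an illustration of the concept, not necessarily how TinEye works internally); the filenames are placeholders.

```python
# Perceptual hashes stay similar across re-uploads and recompression,
# so a small Hamming distance between two frames suggests they come
# from the same source image. Assumes `pip install ImageHash Pillow`.
import imagehash
from PIL import Image

original = imagehash.phash(Image.open("keyframe.png"))
candidate = imagehash.phash(Image.open("social_media_copy.png"))

distance = original - candidate  # Hamming distance between 64-bit hashes
print(f"Hamming distance: {distance} (small values suggest a match)")
```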
Then comes what Van Ess calls a gut check.
Does the timing, framing or narrative feel too convenient? He calls this the “production quality paradox” — when content looks too polished or arrives too neatly to fit a narrative, that itself is a signal.
Develop an eye for AI
During the Star Tribune livestream, Steinberg noted that the Department of Homeland Security used AI to alter a photo of agents arresting a woman in Minneapolis to make it look like she was crying.
When the government released a photo of the gun it said was recovered from Pretti, “there were immediate, sort of innate questions about, you know, is this real? Is this verifiable?” Steinberg said. “To have to ask it of visual evidence, which we’re so used to being able to rely on, is kind of a new thing we have to do.”
Several tools can help reporters check whether content was generated by AI:
Hive Moderation: A free browser extension that scans images, video, audio and text for AI generation. It returns confidence scores and identifies the likely AI model that produced the content, such as Midjourney, DALL-E or Stable Diffusion. It’s a strong first-pass tool for deadline work.
Google’s SynthID: A watermarking system embedded in content generated by Google’s own AI tools, including Gemini and Veo. One quick method: Upload an image to a Gemini chat and simply ask whether it was created with Google AI.
Van Ess’s Image Whisperer: Runs parallel analysis using large language models and Google Vision processing.
No single tool is definitive. Van Ess recommends running content through multiple detectors and cross-referencing results, always paired with human-centered analysis.
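In practice, that cross-referencing can be as simple as collecting each detector's confidence score and refusing to call a verdict when they disagree. A hypothetical sketch follows; the detector names, scores and thresholds are illustrative, not output from any real tool.

```python
# Sketch of van Ess's advice: pool several detectors' confidence scores
# and flag disagreement for human review rather than trusting one number.
from statistics import mean

def triage(scores: dict[str, float], disagreement: float = 0.4) -> str:
    """scores maps detector name -> AI-generation confidence in [0, 1]."""
    values = list(scores.values())
    if max(values) - min(values) > disagreement:
        return "detectors disagree; escalate to human analysis"
    if mean(values) > 0.7:
        return "multiple detectors flag likely AI generation"
    return "no strong AI signal; still verify provenance by hand"

# Illustrative scores, not real output from any tool:
print(triage({"hive": 0.92, "image_whisperer": 0.88}))
print(triage({"hive": 0.95, "image_whisperer": 0.20}))
```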
Lean on experts
“Cultivate expert sources,” Tuquero said. “When a potentially AI-generated image or video doesn’t show any telltale signs, I reach out to people with the advanced expertise and resources needed to identify sophisticated AI fakes. These include academics and researchers who develop AI detection methods.”
Building a network of AI researchers, forensic analysts and open-source investigators takes time, but it pays off on deadline.
Publications such as Indicator, the digital outlet from fact-checking and digital investigation veterans Craig Silverman and Alexios Mantzarlis, can help. Indicator provides regular reporting on the tools, techniques and threat actors behind digital deception, along with monthly workshops for paid subscribers. Its resources page includes tools and a regularly updated academic library on AI-generated deceptive content.
“It feels like old school reporting may be the very best answer to ever more convincing deepfaked videos,” Mantzarlis said.