Exposing AI manipulation in fast fashion
Luigi Mangione: from alleged CEO killer to unwitting style icon
It began today with a question from Graham Fraser, senior technology reporter at BBC News. 'I was wondering—could I get your expert view on the image, and its origin?'
I am in the middle of a busy two-week stint in Berlin, working all day with barely any time to breathe, let alone investigate photos. But something about this image made me stop. The face staring back at me belonged to Luigi Mangione, the 26-year-old accused of murdering UnitedHealthcare CEO Brian Thompson.
Here is a man awaiting trial on federal murder charges, facing a possible death penalty, and now apparently moonlighting as a budget fashion model? The shirt he was "wearing" had been selling for $11.69, with multiple sizes already sold out:
The absurdity of it all—America's most notorious murder suspect of the moment helping move inventory for a Chinese fast-fashion giant—would have been comedy gold if it weren't so disturbing. The product description read: 'Men's New Spring/Summer Short Sleeve Blue Ditsy Floral White Shirt, Pastoral Style Gentleman Shirt For Everyday Wear, Family Matching Mommy And Me.'
A pastoral gentleman shirt. For the man facing federal murder charges. This wasn't just inappropriate; it was impossible. Mangione sits in Brooklyn's Metropolitan Detention Center. He couldn't have posed for fashion shoots between federal murder charges and potential execution.
Granted, just last month Brazilian prisoners held actual fashion shows inside maximum-security prisons, with supermodels from São Paulo Fashion Week strutting catwalks while convicted killers applauded their own crochet creations. But at least those inmates chose to become fashion designers.
My first thought was the obvious one: Mangione never chose to become a model. Someone made that choice for him, pixel by pixel. The real challenge was proving it with the kind of technical precision that would leave no room for doubt.
Introducing Image Whisperer
By coincidence, I released a new tool, called Image Whisperer, to help you check photos.
Think of it as a lie detector for photos: instead of measuring heartbeats, it measures pixels, hopefully nudges you in the right direction, and lets you research further. Now, here's the irony: I'm using AI to detect AI. It's like asking a magician to spot another magician's tricks.
And no, it's not perfect; no detection tool is. As AI gets better at creating images, detection gets harder. It's an arms race. But there's something poetic about turning AI against itself, using its own intelligence to expose its deceptions. Think of it as fighting fire with fire. Time for the acid test:
The tool thought for a few seconds and then came back with the following verdict:
The image is a product listing from SHEIN, a Chinese online fast fashion retailer. The model being used is Luigi Mangione.
I ran some other tests, and they all had the same outcome: the picture is fake.
How does this work?
Think of it like this: Every artist has a signature style. Monet painted with soft, dreamy brushstrokes. Van Gogh used thick, swirling paint. Even when they didn't sign their work, experts can identify who painted what by looking at these unique patterns.
AI models work the same way. Midjourney, DALL-E, Stable Diffusion—each one has its own 'brushstroke,' its own way of creating images. These aren't visible to the naked eye, but they're there in the data, hidden in how the pixels are arranged.
My tool uses a massive database of these digital fingerprints. It's like having samples of every forger's handwriting. When I feed it a suspicious image, it compares the patterns in that image to thousands of known AI signatures.
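To make that less abstract, here is a deliberately simplified sketch in Python of what such a comparison could look like. It is not the actual Image Whisperer code; the "fingerprint" here is just a crude high-frequency noise signature, and the reference fingerprints for Midjourney, DALL-E and the rest are assumed to have been built beforehand from labelled sample images.

```python
import numpy as np
from PIL import Image
from scipy.ndimage import median_filter

def noise_fingerprint(path: str, size: int = 256) -> np.ndarray:
    """Crude stand-in for a generator 'fingerprint': the frequency spectrum
    of the image's high-frequency residual, normalised to unit length."""
    img = np.asarray(Image.open(path).convert("L").resize((size, size)), dtype=np.float32)
    residual = img - median_filter(img, size=3)    # strip content, keep the fine-grained noise
    spectrum = np.abs(np.fft.fft2(residual))       # how that noise is distributed across frequencies
    vec = spectrum.flatten()
    return vec / (np.linalg.norm(vec) + 1e-9)

def rank_generators(path: str, known: dict[str, np.ndarray]) -> list[tuple[str, float]]:
    """Score a suspect image against known generator fingerprints (cosine similarity)."""
    fp = noise_fingerprint(path)
    scores = {name: float(fp @ ref) for name, ref in known.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical usage: 'known' maps generator names to averaged fingerprints
# built from images you already know came from each model.
# known = {"midjourney": mj_fp, "dalle": dalle_fp, "stable_diffusion": sd_fp}
# print(rank_generators("suspect_listing.jpg", known))
```

Real detectors use far richer features and learned classifiers, but the principle is the same: reduce the image to a signature and ask which known signature it most resembles.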
Looking at the results, the tool was 98% certain this was AI-generated, with an 81% probability it came specifically from Midjourney. The 9% face manipulation score suggested someone had taken Mangione's real face—probably from those courtroom photos plastered across every news site—and merged it with an AI-generated body wearing that floral shirt.
It's basically a digital Frankenstein: a real person's face stitched onto a synthetic body. The seams are invisible to us, but to a trained AI detection system, they might as well be glowing neon signs.
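For the "seams" specifically, one classic trick (again a simplified illustration, not a claim about how any particular detector works) is error-level analysis: re-save the JPEG at a fixed quality and measure how strongly each region disagrees with its recompressed copy. A face pasted in from another photo has a different compression history, so it often stands out from the body around it:

```python
import io
import numpy as np
from PIL import Image, ImageChops

def error_level_map(path: str, quality: int = 90) -> np.ndarray:
    """Error-level analysis: per-pixel difference between the image and a
    freshly recompressed copy of itself."""
    original = Image.open(path).convert("RGB")
    buf = io.BytesIO()
    original.save(buf, format="JPEG", quality=quality)
    recompressed = Image.open(buf).convert("RGB")
    diff = ImageChops.difference(original, recompressed)
    return np.asarray(diff, dtype=np.float32).mean(axis=2)   # greyscale error map

def splice_score(ela: np.ndarray, box: tuple[int, int, int, int]) -> float:
    """Ratio of mean error inside a suspect region (e.g. the face) to the rest
    of the frame; values far from 1.0 hint at a different compression history."""
    top, left, bottom, right = box
    mask = np.zeros(ela.shape, dtype=bool)
    mask[top:bottom, left:right] = True
    return float(ela[mask].mean() / (ela[~mask].mean() + 1e-9))

# Hypothetical usage, with a made-up bounding box for the face:
# ela = error_level_map("suspect_listing.jpg")
# print(splice_score(ela, (40, 60, 180, 160)))
```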
The verdict was clear: Luigi Mangione never wore that shirt. He never posed for Shein. Someone—or rather, some AI—sent him to work in fast fashion without his knowledge, consent, or ability to collect a paycheck.
I searched extensively for the original floral shirt pattern elsewhere online—on other retailers, in fabric catalogs, anywhere—but found nothing. This absence of any real product source further confirms what the detection tool already revealed.
Shein pulled the listing within hours of the BBC's inquiry; I also comment on the case in their report.
It probably won't be the last time someone exploits Mangione's image, or that of anyone else whose face dominates the news cycle.
The dark irony is that Mangione has become something of a folk hero to some, with supporters raising thousands for his legal defense and flooding social media with memes.
His face has value precisely because it's controversial, and AI makes it easy to exploit that value without permission. To prove how easy this exploitation has become, I fed Mangione's image to Grok's new Image to Video feature. Within seconds, it generated a video of him modeling a bright white shirt. The technology doesn't care about consent, context, or consequences. It just creates.
We don't have to surrender our shared reality. We can fight AI with AI, and more importantly, we can use our brains. A federal detainee modeling 'Family Matching Mommy and Me' shirts? Come on.
In the Gospel of John, Jesus said 'the truth will set you free.' Ironically, the truth would literally set Mangione free from his unwitting modeling career. But in the age of one-click shopping, who has time for truth?