The ChatGPT confession files
Digital Digging investigation: how your AI conversation could end your career
UPDATE:
This story drew attention from the tech press. ChatGPT removed the chats, but in a follow-up story we found twice as many chats preserved on Archive.org, free for the taking.
Corporate executives, government employees, and professionals are confessing to crimes, exposing trade secrets, and documenting career-ending admissions in ChatGPT conversations visible to anyone on the internet.
A Digital Digging investigation analyzed 512 publicly shared ChatGPT conversations using targeted keyword searches, uncovering a trove of self-incrimination and leaked confidential data. The shared chats include apparent insider trading schemes, detailed corporate financials, fraud admissions, and evidence of regulatory violations—all preserved as permanently searchable public records.
Among the discoveries is a conversation where a CEO revealed this to ChatGPT:
Confidential Financial Data: About an upcoming settlement
Non-Public Revenue Projections: Specific forecasts showing revenue doubling
Merger Intelligence: Detailed valuations
NDA-Protected Partnerships: Information about Asian customers
The person also revealed internal conflicts and criticized executives by name.
Our method reveals an ironic truth: AI itself can expose these vulnerabilities. After discussing the dangers of making chats public, we asked Claude, another AI chatbot, to suggest Google search formulas that might uncover sensitive ChatGPT conversations.
Claude immediately generated targeted searches designed to uncover self-incriminating content. Here are a few, followed by a sketch of how such queries could be automated:
Business/Corporate Intelligence:
site:chatgpt.com/share ("my company" + (strategy OR revenue OR acquisition) OR "our competitor" OR "confidential" OR "NDA" OR "internal only" OR "upcoming merger" OR "quarterly earnings" OR "trade secret")
Legal/Criminal Intent:
site:chatgpt.com/share ("without getting caught" OR "avoid detection" OR "without permission" OR "get away with" OR "without anyone knowing")
Professional Misconduct:
site:chatgpt.com/share ("write my essay" OR "plagiarism" OR "my assignment due" OR "don't mention AI" OR "fake invoice" OR "insider trading")
Personal Information Exposure:
site:chatgpt.com/share ("my salary" OR "my SSN" OR "diagnosed with" OR "my medication" OR "my therapist")
An English-speaking user sought detailed explanations of Chinese espionage terminology (establishing internal agents), receiving comprehensive information about infiltration tactics.
Others outlined plans for cyberattacks targeting Hamas, including discussions of deploying weaponized malware against named targets, conversations that would instantly draw the attention of security agencies and law enforcement.
Another user attempted to manipulate ChatGPT into generating inappropriate content involving minors. While ChatGPT initially refused, the user found a workaround that resulted in the system producing the requested image, demonstrating both the platform's vulnerabilities and the concerning intentions some users bring to AI interactions.
We discovered conversations where somebody working for an international think tank created elaborate scenarios involving U.S. government collapse-preparedness strategies.
Legal professionals left especially compromising evidence. One conversation started with a frantic message about a colleague's sudden accident leaving them to handle an urgent court appearance.
The person was so unprepared they couldn't identify which party they represented, initially requesting help with one company before realizing they were defending the opposing side. They then asked for their closing arguments to be transformed into elaborate verse and religious prose.
Medical professionals weren't immune. Someone asked ChatGPT to roleplay as an oncologist treating a 68-year-old male with stage IIIA non-small cell lung cancer, receiving detailed treatment protocols including specific drugs like durvalumab and pembrolizumab.
A domestic violence victim discussed escape plans while also revealing financial vulnerabilities. Others, ironically, chose a book about creative inspiration, 'Steal Like An Artist' by Austin Kleon, and asked ChatGPT to transform it into a blog post impersonating the author's voice and writing style.
Three others drafted a plan from scratch to create a new bitcoin-like cryptocurrency.
Of the 512 conversations analyzed with the target keywords, 20% contained material that should never have been made public. We used the following criteria:
the sensitive nature of the content disclosed
potential legal, professional, or security implications
evidence of criminal activity, confidential data, or personal information
content that could trigger investigations or legal action
the risk they posed to the users themselves or others
These conversations aren't just shared—they're indexed by search engines, making them permanently discoverable through simple Google searches. This vulnerability is specific to ChatGPT's sharing feature, which creates publicly accessible URLs that search engines can crawl and index.
Other AI platforms handle sharing differently: Claude conversations remain private unless manually copied and pasted elsewhere. Bing Chat, Le Chat, DeepSeek, and Google's Gemini either don't offer public sharing features or implement them in ways that prevent search engine indexing. When you share a conversation on these platforms, it typically generates a private link accessible only to those who have it, not to the entire internet. Besides ChatGPT, only Meta has had similar problems; earlier, I wrote about issues with Google Bard.
ChatGPT's share function, by contrast, creates permanent public pages at predictable URLs (chatgpt.com/share/...), a fraction of which Google and other search engines index. This means anyone can find some of these conversations by searching for specific keywords, turning what users may still think of as semi-private shares into a searchable public database of confessions and admissions.
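For readers who want to check exposure themselves, here is a minimal sketch, assuming the URL pattern described above, of how to test whether a given share link is publicly reachable and whether the site's robots.txt would allow a search-engine crawler to fetch it. The UUID below is a placeholder, and both the page behavior and the robots rules may have changed since OpenAI's update.

import urllib.robotparser
import requests

# Placeholder share link following the predictable chatgpt.com/share/<id> pattern.
SHARE_URL = "https://chatgpt.com/share/00000000-0000-0000-0000-000000000000"

# 1. Is the page readable by anyone who has (or finds) the link?
resp = requests.get(SHARE_URL, timeout=30, allow_redirects=True)
print("HTTP status:", resp.status_code)  # 200 means the page is publicly accessible

# 2. Would robots.txt permit a crawler such as Googlebot to fetch and index it?
rp = urllib.robotparser.RobotFileParser()
rp.set_url("https://chatgpt.com/robots.txt")
rp.read()
print("Crawlable by Googlebot:", rp.can_fetch("Googlebot", SHARE_URL))

A publicly readable page combined with a crawlable path is exactly the combination that lets a private-feeling chat end up in a public search index.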
The final case is a fitting way to end this article. A user approached ChatGPT with extreme caution about data privacy:
"how can your security be trusted? How can I know that you don't use what I upload for training?"
After discovering ChatGPT's actual privacy policy contradicted the initial assurances, they responded: "omg this directly goes against what you wrote... daaamn that's awful... That's a disgraceful representation of how you really handle data"
This user did everything right—asked the right questions, remained skeptical, verified claims, and ultimately decided not to share sensitive documents. Their only mistake? Sharing the conversation where they figured all this out.
Note:
While our investigation uncovered examples with specific company names, exact financial figures, and identifying details, we have deliberately chosen not to reproduce these verbatim. Publishing the exact content could compound the harm to individuals who may not realize their conversations are public, potentially assist in identity theft, or inadvertently spread confidential information further.
Questions I got after publication:
How much is real and how much is made up?
I compared some of the presumably private information with public sources and verified what I could. The patterns are consistent: financial figures match known ranges, terminology is industry-accurate, and the legal and medical details check out. While I can't verify every claim, the volume and specificity suggest most are real.
Why would people do this?
I wouldn't do this. But I've learned never to assume others think like I do. My best guesses: they might think 'share' creates a private link (like a Google Doc), not a public webpage. Or they simply want to save conversations for later reference.