Daily AI News

That Private ChatGPT Conversation? A Judge Just Ruled It Can Be Used Against You

4 min read


You told ChatGPT things you wouldn’t tell your best friend.

Your business strategy. Your legal questions. That question you typed at 2 AM because you were too embarrassed to actually Google it.

You assumed it was private. Maybe you even deleted the chat.

It doesn’t matter. A federal judge just ruled that your AI conversations can be seized, read, and used against you in court. And deleting them doesn’t mean they’re gone.

The Case That Changed Everything


In February 2026, U.S. District Judge Jed Rakoff in Manhattan dropped a ruling that sent shockwaves through the legal world.

Bradley Heppner, former CEO of GWG Holdings and founder of Beneficent, was facing federal securities and wire fraud charges. He’d been using Anthropic’s Claude to prepare reports about his case — intending to share them with his defense team.

His lawyers argued the AI-generated documents should stay confidential. Attorney-client privilege. Standard stuff.

Prosecutors disagreed. Their argument was simple: no lawyer was involved in those chatbot sessions. And machines don’t qualify for privilege.

Judge Rakoff sided with prosecutors. He ordered Heppner to hand over 31 Claude-generated documents. His reasoning was blunt: no attorney-client relationship exists “or could exist, between an AI user and a platform such as Claude.”

Your AI is not your lawyer. It never was.

Same Day, Opposite Ruling


Here’s where it gets confusing. On the exact same day as Rakoff’s ruling, a Michigan judge reached the opposite conclusion.

U.S. Magistrate Judge Anthony Patti ruled that a woman suing her former employer did not have to surrender her ChatGPT conversations. He treated her AI chats as personal “work-product” — essentially her own notes for the case. His reasoning: chatbots are “tools, not persons.”

Two federal judges. Same day. Opposite conclusions.

A month later, a Colorado court in Morgan v. V2X sided with the Michigan approach — protecting a self-represented plaintiff’s AI work product. But it added a twist: the plaintiff had to disclose which AI tool he used, and was barred from feeding confidential discovery materials into platforms that allow data training.

The pattern emerging: if you’re a represented party who used a consumer chatbot on your own, you’re exposed. If you’re representing yourself in a civil case, you might have more cover. That distinction is now one of the sharpest fault lines in US evidence law.

It Gets Worse: The Krafton Case

There’s another case that shows just how exposed you are. Korean video game publisher Krafton’s CEO Changhan Kim used ChatGPT to plan how to avoid paying promised earnout payments to a company Krafton had acquired. He used the chatbot to craft a strategy for ousting the executives who were owed the money.

He deleted the conversations.

The court recovered them anyway. A judge reviewed the chats and reinstated the executives Kim had tried to push out. His own AI conversations became the evidence that undid his plan.

Deleting your chats doesn’t delete the data. If a court compels OpenAI or Anthropic to produce your conversations, the companies can comply — deleted or not.

What The Privacy Policies Actually Say

This is the part most people skip. Both OpenAI’s and Anthropic’s privacy policies explicitly state that user data can be shared with third parties and that users should not expect privacy in their chatbot inputs.

Read that again. The companies themselves tell you it’s not private. Nobody reads the terms.

Law Firms Are Scrambling


More than a dozen major US law firms have published client advisories since Rakoff’s ruling. The advice ranges from cautious to alarming:

Sher Tremonte added a clause to client contracts stating that sharing privileged communications with an AI platform could waive attorney-client privilege entirely.

O’Melveny & Myers told clients to use only enterprise-grade AI systems with contractual confidentiality protections — while acknowledging that even enterprise AI remains untested in court.

Debevoise & Plimpton went tactical: if a lawyer directs you to use an AI tool, type “I am doing this research at the direction of counsel for X litigation” into the prompt itself. The idea is to invoke the Kovel doctrine, which can extend privilege to non-lawyers working as an attorney’s agent.

The Numbers

Documents Heppner was ordered to surrender: 31
Law firms issuing AI chat warnings: 12+
AI citation sanctions in Q1 2026: $145,000+
Conflicting federal rulings: 3 (NY, MI, CO)
Privacy policies allowing data sharing: both OpenAI and Anthropic

What You Should Actually Do

The legal advice from attorneys across the country boils down to one old-fashioned rule: do not discuss your case with anyone except your lawyer. That now includes the chatbot on your screen.

More specifically: don’t use consumer AI (ChatGPT, Claude, Gemini) for anything related to legal matters, active disputes, or sensitive business decisions unless your lawyer explicitly directs you to and you’re using enterprise-grade tools with contractual protections.

And if you think deleting the chat protects you — it doesn’t. The data exists on servers you don’t control, governed by privacy policies you didn’t read, accessible to courts you can’t avoid.

If this interested you:

Is ChatGPT Sharing Your Data With the Military?

10 Minutes of AI Made People Dumber — and They Didn’t Even Notice

OpenAI Burned $1M/Day on Sora — and Still Failed


Follow Synvoya for daily AI news summaries — quick reads, no fluff.

Tags: AI, Privacy, Legal, ChatGPT, Claude, Court