It's Not the Output. It's the Input.
What United States v. Heppner means for in-house teams using AI in the US, UK, and EU
Field Note #6
What you'll learn:
What United States v. Heppner actually held and what the judge left open
Why the waiver risk is about what you feed in, not what comes out
How to structure an in-house setup that takes the doctrine seriously
Heppner: Claude’s use by a defendant in a criminal case
A few weeks ago, United States v. Heppner from the Southern District of New York was all over LinkedIn because it addressed privilege and AI use in a criminal case. Most of the coverage missed the point. In Heppner, Judge Rakoff ruled that privilege did not protect the defendant's conversations with consumer Claude. The defense raised three privilege arguments, including attorney-client privilege and work product; all three failed. But the holding is narrower than the coverage suggests.
Heppner used Claude himself, without his attorney's direction, under Anthropic's consumer privacy policy. That policy explicitly permitted disclosure to government authorities and use of inputs for model training. That is the fact pattern the court ruled on: consumer Claude, used unilaterally by a defendant, under a policy that said Anthropic could share what you told it. That's a specific and limited set of facts. The opinion is useful not for its holding but for what it left open.
What the judge left open
Judge Rakoff wrote that had counsel directed Heppner to use Claude, the analysis might look different. Claude might arguably function in a manner akin to a professional operating as a lawyer's agent within the protection of privilege. He cited United States v. Kovel, 296 F.2d 918 (2d Cir. 1961).
Kovel is the foundational case for extending privilege to necessary third parties. The Second Circuit held that an accountant employed by a law firm didn't break privilege because he was functionally necessary to the legal work - translating complex financials so the lawyer could give legal advice. Courts have applied the same logic to e-discovery vendors, outside consultants, and cloud storage providers.
Rakoff didn't close the door on applying it to AI. He described the conditions under which it stays open: counsel direction plus confidentiality. He reserved the enterprise scenario rather than ruling on it.
That's dictum, not a holding. It's not binding anywhere. But it's the most direct judicial statement we have on where the argument lives. And it maps cleanly onto the enterprise deployment framework: with a proper DPA, no-training commitments, a controlled environment, and use as a tool under attorney supervision, privilege may be maintained.
Footnote 3: the part nobody is talking about
The most important line in the opinion is in a footnote, which addresses the waiver question directly: even if certain information Heppner input into Claude was privileged, he waived the privilege by sharing that information with Claude and Anthropic, just as if he had shared it with any other third party.
The waiver isn't caused by the AI output. It's caused by the input.
The mechanism of waiver is disclosure to Anthropic as a corporate third party, under a privacy policy permitting further disclosure. This reframes the whole question. Not "is my AI-assisted analysis privileged?" The right question is: what did I put into the tool to produce it, and was that itself privileged?
An enterprise agreement with a proper DPA and no-disclosure commitments addresses this mechanism directly. You're no longer sharing with a third party under a policy permitting onward disclosure. You're sharing with a processor bound by confidentiality obligations, operating under counsel direction, within a controlled environment. That's a materially different legal posture.
Where Cowork changes the analysis
Everything above applies to AI tools that process documents and generate outputs. Claude Cowork is a different category.
Cowork uses computer control. It takes screenshots of your screen to understand how to navigate apps and complete tasks. It can see anything visible on your screen or in the apps you've granted access to: privileged documents, client communications, open browser tabs, anything.
I went to Anthropic's support pages to understand what this means in practice. Cowork activity is not captured in audit logs, the Compliance API, or data exports. Anthropic explicitly warns against using it for legal documents or contracts and flags it as unsuitable for regulated workloads. Anthropic is telling you not to use this on privileged materials.
Now apply the Heppner framework to Cowork. The court was explicit: sharing information with Anthropic as a corporate third party, under a privacy policy permitting disclosure, is a waiver event. An enterprise agreement with a proper DPA addresses that mechanism.
But Cowork operates without audit logs. That's a separate problem. It doesn't cause waiver — but it means you cannot prove who saw what and when. A significant part of what makes cloud storage defensible in privilege disputes is the ability to demonstrate controlled access. Cowork doesn't give you that. If you ever have to defend privilege over materials a Cowork session touched, that gap is genuinely hard to argue around.
What the UK and EU picture looks like right now
The UK
There is no direct UK equivalent of Heppner yet, but two things happened recently that point in the same direction.
In Munir v Secretary of State for the Home Department [2026] UKUT 81 (IAC), the Upper Tribunal expressed the view that uploading confidential client material to a public AI platform could amount to placing it in the public domain, thereby breaching confidentiality and waiving privilege. That's not a holding on the facts, but it's the clearest judicial signal English courts have given on the question.
The tribunal's approach is also consistent with HMCTS guidance for judicial office holders on AI, released in October 2025, which stated that you should treat all public AI tools as being capable of making public anything entered into them.
The doctrinal framework is different but the practical outcome is similar. Under English law, legal advice generated by an AI system and provided directly to a non-lawyer is not capable of being privileged — the AI system is not a lawyer, and it would be for Parliament to extend the scope of legal advice privilege beyond legal professionals. Rakoff reached the same conclusion via different doctrine.
The Kovel analogy exists in English law too. If counsel directed use of the AI system, it might arguably function in a manner akin to a highly trained professional acting as a lawyer's agent within the protection of privilege. But it hasn't been tested there either.
One important distinction: confidentiality and privacy are not treated as interchangeable under English law, which means the analysis of what breaks privilege under a UK framework isn't identical to the US one, even if the conclusion looks the same.
There will almost certainly be a similar case in England and Wales in due course. Right now Munir gives an indication of judicial thinking, but most questions remain open.
The EU
There's no pan-European doctrine equivalent to attorney-client privilege or legal professional privilege (LPP). Privilege is fragmented by national law, and legal professional secrecy rules vary significantly across member states. The EU AI Act doesn't directly address privilege. No European court has issued anything close to Heppner yet.
What the EU does add is GDPR. Feeding privileged materials into a consumer AI tool raises data protection issues that overlap with but are separate from the privilege question. A proper DPA with data processing restrictions matters both for privilege analysis and for GDPR compliance — which gives the enterprise/no-training-commitments argument double weight in EU jurisdictions.
The US, UK, and EU are converging on the same practical conclusion through different frameworks: consumer public AI tools break confidentiality, which breaks privilege. The enterprise deployment argument is the same across jurisdictions. No court anywhere has tested it yet, but the fact pattern you're trying to stay on the right side of is the same everywhere.
What a defensible setup looks like
This is where I'll stop theorizing (reminder here that I am a California attorney and the above does not constitute legal advice) and tell you what I'm actually doing.
At Worksome, all documents and folders carry data classification labels as part of our SOC2 compliance infrastructure. That structure does double duty here.
My current setup for Cowork:
Privileged and sensitive materials stay in folders Cowork cannot access, classified and excluded by design, not by habit
Sensitive apps are closed before running computer control
Cowork is used on operational work: scheduling, file organization, drafting — not on anything that touches client data, our IP, or PII
That's the minimum. It's not a complete answer to the privilege question. It's a risk management posture built on what the doctrine currently supports and what the tools currently allow.
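To make "excluded by design, not by habit" concrete, here is a minimal sketch of a pre-flight check you could run before a Cowork session. Everything in it is hypothetical: the `.label` sidecar-file convention, the label names, and the `find_restricted_files` helper are stand-ins for whatever your actual classification tooling (SOC 2 or otherwise) produces, not anything Anthropic or Worksome ships.

```python
from pathlib import Path

# Hypothetical label values; substitute your own classification scheme.
RESTRICTED_LABELS = {"privileged", "confidential", "pii"}

def find_restricted_files(workspace: Path) -> list[Path]:
    """Scan a Cowork-accessible workspace for files that should not be there.

    Assumes each classified file has a sidecar '<name>.label' file whose
    text content is its data classification. Returns the classified files
    (not the sidecars) whose label is in RESTRICTED_LABELS.
    """
    flagged = []
    for path in sorted(workspace.rglob("*.label")):
        if not path.is_file():
            continue
        label = path.read_text().strip().lower()
        if label in RESTRICTED_LABELS:
            # 'memo.docx.label' -> 'memo.docx', the file actually at risk
            flagged.append(path.with_suffix(""))
    return flagged

if __name__ == "__main__":
    hits = find_restricted_files(Path("./cowork-workspace"))
    for f in hits:
        print(f"restricted material in workspace: {f}")
```

The point of a script like this is the posture it encodes: access control for the agent is verified mechanically before each session, rather than relied on as a habit.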
The enterprise/DPA argument may hold; so may the Kovel extension. Neither has been tested in the scenario that actually matters: a well-documented, counsel-directed, enterprise-deployed AI agent used by an in-house legal team. That case hasn't been decided yet. When it is, you want to be on the right side of the fact pattern.
This is not legal advice. I'm a GC thinking out loud about tools I'm actually using. If you're making decisions about privilege and AI in your organization, talk to someone who can look at your specific setup and is qualified in your jurisdiction.
Speak soon!
Here’s some Sade to take you into Easter Week: