Can You Use ChatGPT to Talk About Your Legal Case? A Federal Court Weighs In.

If you are under investigation, facing a lawsuit, or thinking about hiring a lawyer, you have probably wondered whether you can use ChatGPT, Claude, or another AI chatbot to help you think through your case. A federal judge has now weighed in on that question, and the answer is one that anyone using AI to work through a legal problem should understand.

In short: at least one federal court has held that conversations with a public AI chatbot may not be private or privileged, and may be discoverable in litigation. The law in this area is still developing, and other courts may reach different conclusions, but the ruling is significant enough that it should change how you think about AI and your legal case.

The Ruling

In February 2026, Judge Jed Rakoff of the U.S. District Court for the Southern District of New York decided United States v. Heppner. It appears to be the first federal ruling in the country directly addressing whether conversations with a generative AI chatbot are protected by attorney-client privilege or the work product doctrine.

The facts were straightforward. A former executive facing federal fraud charges used a popular consumer AI chatbot (Claude) to work through aspects of his case. He typed in facts. He typed in legal theories. He typed in information his own lawyers had shared with him. He generated roughly thirty documents analyzing his potential defense, and he later shared them with his legal team.

When the FBI arrested him, agents seized those documents from his home. His lawyers asserted attorney-client privilege and work product protection. The court rejected both claims on the specific facts of that case, holding that the documents were not privileged and could be reviewed by the government.

It is worth emphasizing that Heppner is one trial court decision, on a particular set of facts, involving a specific consumer AI product and a specific privacy policy. It is not binding outside the Southern District of New York, and other courts considering similar issues may analyze them differently. The law on AI and privilege is genuinely unsettled, and clients and lawyers should expect it to evolve.

The Court’s Reasoning on Attorney-Client Privilege

Attorney-client privilege protects confidential communications between a client and a lawyer made for the purpose of obtaining legal advice. The court in Heppner applied the standard three-part privilege test and concluded that the AI-generated documents in that case did not satisfy at least two, and possibly all three, of its elements.

First, the court reasoned that the AI chatbot is not an attorney. Privilege traditionally requires communication with a licensed attorney who owes fiduciary duties to the client. The court observed that a chatbot has no law license, no duty of loyalty, and is not subject to professional discipline. On that view, communications with an AI tool are not communications with counsel, and the privilege does not attach.

Second, the court found that the communications were not confidential on the facts before it. The defendant had used the consumer version of the platform, whose privacy policy at the time permitted the provider to collect user inputs and outputs, use them to train the model, and disclose them to third parties — including governmental authorities — without a subpoena. The court concluded that, under that policy, the defendant could not have had a reasonable expectation of confidentiality. This element of the ruling is closely tied to the specific terms of service of one consumer product, and the analysis could come out differently for a platform with different contractual protections.

Third, the court was not persuaded the defendant was seeking legal advice from the AI itself. The chatbot in question expressly disclaims providing legal advice, and the defendant had used it on his own initiative rather than at his attorneys’ direction. The court noted, in dicta, that the analysis might have been different if counsel had directed the use.

Three Things to Be Aware Of

Risk 1: Using AI to “organize your thoughts” before meeting with your lawyer.

The defendant in Heppner made essentially this argument, and the court was not persuaded. Once facts about a case are typed into a chatbot, they exist as a written record that may be subject to discovery, regardless of the user’s subjective intent at the time. A future court could see this differently, but Heppner suggests this is a real risk worth taking seriously.

Risk 2: Assuming that sharing an AI conversation with your attorney makes it privileged after the fact.

Generally speaking, materials that are not privileged at the time they are created do not become privileged simply because they are later forwarded to counsel. This is a longstanding principle of privilege law, not unique to AI, and it is one of the grounds the court relied on in declining to protect the documents at issue. There may be circumstances where a different analysis applies, but clients should not assume that forwarding an AI conversation to their lawyer will retroactively cloak it in privilege.

Risk 3: Typing your lawyer’s advice into an AI tool.

This may be the most consequential risk of all. When confidential information from your attorney is shared with a third-party platform, it can potentially waive privilege over the underlying attorney-client communication itself. The court in Heppner noted this concern directly. Whether a particular disclosure results in waiver will depend on the facts and on the platform involved, but the safer course is to assume that confidential attorney communications should not be pasted into any AI tool without first consulting your lawyer.

Enterprise AI Tools: A Potentially Different Analysis

One important nuance from Heppner is that the court was specifically considering a defendant’s independent use of a consumer version of an AI chatbot, without direction from his attorneys. In particular, the court suggested that if a lawyer directs a client to use an AI tool as part of the legal representation, the AI might arguably function like a non-lawyer agent of counsel — a category that has historically received privilege protection under the Kovel doctrine.

This is not a guarantee. No appellate court has yet endorsed an enterprise-AI exception, and the dictum in Heppner is hedged. But it suggests that two factors may matter in a future case:

Attorney direction. AI use that is initiated, designed, and supervised by counsel as part of the representation may be treated differently from AI use a client undertakes on their own. Documenting that direction in the file is potentially important.

The platform and its contractual terms. Enterprise-tier AI products — for example, Microsoft 365 Copilot deployed through a firm tenant, Claude for Enterprise, ChatGPT Enterprise, or legal-specific tools like Lexis+ AI or Westlaw’s CoCounsel — typically operate under contracts that prohibit the provider from training on user data and impose meaningful confidentiality and disclosure restrictions. Some are also available with a Business Associate Agreement under HIPAA. These contractual protections may matter to a court’s confidentiality analysis. The consumer versions of the same products generally are not governed by the same terms.

Even with both factors in place, privilege is not assured. The law is developing, and a court could reach a different conclusion. But these are the conditions under which an argument for protection is most likely to be available, and any decision to use AI in connection with a legal matter is one that should be made by counsel, not by the client alone.

Practical Suggestions

With all of the appropriate caveats about a developing area of law, the following are reasonable, conservative steps for anyone facing a legal matter:

Talk to a licensed attorney. An attorney-client conversation remains the most reliably protected channel for discussing a legal problem.

Be cautious about typing case-related information into public AI chatbots. Heppner suggests that doing so may create discoverable records and may, in some circumstances, affect privilege over attorney communications. Until the law develops further, the cautious course is to avoid it.

If you have already used AI to discuss your case, tell your attorney. Your lawyer needs to know what exists so they can evaluate any potential exposure and plan accordingly. This is not a moment for embarrassment — it is information your attorney needs.

Avoid sharing your attorney’s advice with third parties, including AI tools. Whether this results in waiver in a particular case will depend on the facts, but the safer course is to keep privileged communications confidential.

If you believe AI assistance would be useful to your case, ask your attorney. There may be a workflow available — potentially using an enterprise-grade tool under your lawyer’s direction — that allows for AI assistance while preserving the strongest possible privilege argument. That decision should be made by counsel based on the specific circumstances of your matter.

The Bottom Line

AI chatbots are powerful and increasingly useful tools for many tasks. Whether and how they fit into the discussion of a legal matter is a question the courts are only beginning to work through. United States v. Heppner is an early and significant data point, but it is one ruling, on one set of facts, and the law in this area will continue to develop. Other courts may reach different conclusions, and the analysis may turn on factors specific to the AI platform, the way it is used, and the role of counsel.

For now, the cautious approach is to treat conversations with public AI chatbots about legal matters as potentially discoverable and not necessarily privileged, and to involve a licensed attorney early in any decision about whether and how AI should play a role in your case. The privilege that protects communications with your lawyer is one of the most valuable protections in the legal system, and it is worth being careful with.

If you have questions about how AI use may affect a legal matter you are involved in, the right next step is a conversation with an attorney who can evaluate your specific situation.


This article is for general informational purposes only and does not constitute legal advice. The law concerning AI and attorney-client privilege is developing, and the principles discussed here may evolve as additional courts address these issues. If you have a legal question, please contact a licensed attorney to discuss your specific situation.

Gentry Locke Attorneys