In a decision that impacts both attorneys and non-attorneys using artificial intelligence tools to analyze legal matters, Judge Jed S. Rakoff of the U.S. District Court for the Southern District of New York recently ruled in U.S. v. Heppner that a criminal defendant’s communications about legal strategy with Anthropic’s large language model (LLM), Claude, were not protected against government inspection under attorney-client privilege or the work-product doctrine.
The government sought to examine over 30 communications between criminal defendant Benjamin Heppner, who is charged with securities fraud and related offenses, and the Claude AI platform. Without the involvement of any lawyers, Heppner used Claude to outline his potential legal defense strategy. Heppner claimed that the resulting documents were protected against disclosure to the government because they included information he had learned from counsel, were created to facilitate obtaining legal advice from counsel, and were later shared with counsel.
Judge Rakoff rejected that argument, holding that the government could examine the communications. He explained that attorney-client privilege applies only to communications (i) between a client and an attorney, (ii) intended to be confidential, and (iii) made for the purpose of obtaining or providing legal advice, and he held that Heppner’s communications with Claude satisfied none of these conditions. First, Claude is not an attorney, so Heppner’s communications with Claude were not communications between client and attorney. Second, Heppner could not reasonably have expected confidentiality, as Anthropic’s then-effective privacy policy made clear that it collected user data and could disclose that data to third parties, including governmental regulatory authorities. Third, the communications were not made for the purpose of obtaining legal advice, because Heppner did not use Claude at the direction of any lawyer. Judge Rakoff further observed that “even if certain information that Heppner input into Claude was privileged, he waived the privilege by sharing that information with Claude and Anthropic.”
Separately, Judge Rakoff concluded that Heppner’s use of Claude was not protected by the work-product doctrine, which he construed to apply only to materials prepared by or at the behest of counsel in anticipation of (or for use in) litigation. Because Heppner’s use of Claude was not at counsel’s direction, he found it was not subject to work-product protection.
The lesson of the ruling is straightforward: when Heppner, a non-lawyer, independently developed legal strategy through a public AI model whose terms of service expressly disavowed confidentiality, his communications were discoverable in litigation. Although the law governing the use of AI tools for traditionally privileged work remains unsettled, companies may consider the following measures to mitigate the risks identified in Heppner and to strengthen their arguments that AI interactions in furtherance of legal work are privileged:
1. For any legal work, use secure, private AI tools with enterprise agreements and policies that safeguard confidentiality.
2. Ensure that lawyers direct and supervise the use of AI for legal purposes.
3. Protect the confidentiality of AI prompts and outputs related to legal work.
4. Ensure that employees do not share any privileged information or communications with a public AI platform, including by educating employees about the attendant risk of privilege waiver.
Many emerging applications of AI in the legal field raise novel challenges that courts will still need to address post-Heppner. Courts may continue to look to existing privilege law for parallels when analyzing such questions. Companies and their counsel should be attentive to the risks of proceeding in these unresolved areas, which may include: