If you've started using AI tools in your practice — or you're thinking about it — you've probably heard the phrase "zero data retention" thrown around. You may have also wondered whether it actually matters, and whether using Claude or ChatGPT runs afoul of Model Rule 1.6.
The short answer: it depends on how you've set things up, and most attorneys are running a configuration they haven't fully thought through. Here's what you actually need to know.
What Rule 1.6 Requires in the AI Context
Model Rule 1.6 requires lawyers to make "reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation." That language — "reasonable efforts" — is doing a lot of work, and the ABA gave it significant texture in Formal Opinion 512 (July 2024).
Opinion 512 makes clear that before inputting client information into any AI tool, you must evaluate the risks that the information will be disclosed to or accessed by others. That evaluation is fact-specific: it depends on the sensitivity of the matter, the client, the task, and which AI tool you're using. There's no blanket answer. But there are better and worse configurations — and understanding the difference is itself part of your competence obligation under Rule 1.1.
The Three Retention Tiers You Need to Understand
Not all AI usage is equal from a confidentiality standpoint. With Anthropic's Claude as an example, there are three meaningfully different configurations:
| Configuration | Retention | Used for Training? | Rule 1.6 Risk |
|---|---|---|---|
| Consumer plans (Free, Pro, Max) | 30 days default; up to 5 years if opted in | Yes, by default | High |
| Commercial API (standard, no addendum) | 7 days | Never | Moderate |
| Commercial API + ZDR addendum | Immediately discarded | Never | Lowest available |
**Common misconception:** Using the Anthropic API, Claude Desktop, or Claude Code does not automatically give you ZDR. ZDR is a separate contractual arrangement that must be specifically negotiated with Anthropic's sales team. Without the signed addendum, you're on standard 7-day retention.
Why the Seven-Day Window Is a Real Legal Risk
For many practice areas, seven days may be defensible. But consider the scenarios that actually matter to litigators and transactional lawyers:
Subpoenas and legal process. During the seven-day window, client data processed through the AI exists on Anthropic's servers. Opposing counsel could theoretically serve a third-party subpoena on Anthropic for your API logs — arguing they contain relevant evidence about how work product was prepared. Under ZDR, there is nothing to produce because nothing exists.
Government access. Data sitting on a third-party server is potentially reachable via court order, search warrant, grand jury subpoena, or national security letter directed at Anthropic — not at your firm. That creates complications around privilege assertions you don't control.
Infrastructure breach. Any data held during the retention window is exposed if the provider's systems are compromised. Enterprise-grade infrastructure with encryption at rest makes that unlikely, but low probability is not zero probability. Under ZDR, a breach yields nothing.
**The core principle:** Data that does not exist cannot be subpoenaed, breached, or accessed by anyone. For high-stakes litigation, M&A work, or any matter where the fact that AI was used could itself become an issue, ZDR eliminates a concrete exposure that standard API retention does not.
The Interface Gap: Why This Matters Practically
Here's where most attorneys hit a wall. The claude.ai web interface and desktop app — the easy, intuitive products — do not support ZDR at all, regardless of your plan tier. Even if you've negotiated a ZDR addendum with Anthropic, that coverage only applies to direct API calls, not the chat interface.
This means a law firm that wants ZDR protection has three practical paths:
- Local AI. Run an open-source model entirely on your own hardware via something like OpenClaw with Ollama. Nothing leaves your machine. The tradeoff is that local models aren't yet at frontier-model quality for complex legal reasoning, and you need capable hardware.
- Custom application. A developer builds a simple interface that calls the API with your ZDR-enabled key. You get a familiar chat experience while every request is covered by ZDR.
- OpenClaw or similar platform. A configurable AI assistant that routes requests through your commercial API key with ZDR coverage — giving you a conversational interface with skills tailored to legal work, without the chat interface's retention problem.
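To make the custom-application path concrete, here is a minimal Python sketch of a thin client that submits a request to Anthropic's Messages API using a firm-held key. The endpoint and headers follow Anthropic's public API documentation; the model name is illustrative. Note that ZDR coverage comes from the negotiated addendum attached to your organization's account — there is no field in the request itself that turns it on.

```python
import json
import urllib.request

ANTHROPIC_URL = "https://api.anthropic.com/v1/messages"

def build_request(api_key: str, prompt: str,
                  model: str = "claude-sonnet-4-20250514") -> urllib.request.Request:
    """Build a Messages API request. ZDR is a property of the account the
    key belongs to, not of anything in this payload."""
    body = json.dumps({
        "model": model,
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        ANTHROPIC_URL,
        data=body,
        headers={
            "x-api-key": api_key,            # the firm's ZDR-covered key
            "anthropic-version": "2023-06-01",
            "content-type": "application/json",
        },
        method="POST",
    )

def ask(api_key: str, prompt: str) -> str:
    """Send the request and return the model's text reply."""
    with urllib.request.urlopen(build_request(api_key, prompt)) as resp:
        data = json.loads(resp.read())
    return data["content"][0]["text"]
```

A developer would wrap `ask` in whatever chat interface the firm prefers; the point is that every request goes straight to the API under the firm's own contractual terms, never through a consumer chat product.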
What ZDR Doesn't Fix
ZDR is the strongest technical safeguard available, but it doesn't complete your Rule 1.6 analysis on its own. ABA Opinion 512 is clear that compliance requires a holistic approach:
- ZDR handles the confidentiality technology problem, but informed consent (Rule 1.4) still requires disclosure to clients that AI tools are being used — even when data isn't retained.
- ZDR doesn't create an audit trail — in fact, it eliminates one. Firms with supervisory obligations (Rules 5.1/5.3) need to build their own logging at the application layer if they need records of what was submitted.
- ZDR doesn't verify AI output accuracy. Your competence obligation under Rule 1.1 still requires independent review of anything the AI produces.
How to Actually Get ZDR
ZDR is not a toggle in a settings panel. You contact Anthropic's sales team, identify yourself as a law firm, state that you need ZDR for client confidentiality under ABA ethics obligations, and negotiate a ZDR addendum that supplements your commercial terms. You also separately execute a Data Processing Addendum (DPA). The process isn't instant, but regulated industries are the primary target audience and firms with meaningful API usage generally qualify.
The legal profession's AI adoption is accelerating. Opinion 512 sets the framework, and the technology exists to satisfy it — but only if you've deliberately configured things that way. Understanding the difference between consumer plans, standard API retention, and ZDR isn't optional for attorneys using AI on client matters. It's the bare minimum of technological competence that Rule 1.1 now requires.
**Not legal advice:** This article explains technical concepts and summarizes publicly available ethics guidance for informational purposes. LawTasksAI is a software product, not a law firm. Nothing here should be read as legal counsel or relied upon as a substitute for advice from a licensed attorney in your jurisdiction.
LawTasksAI is built for Rule 1.6 compliance
A thin-client architecture that never receives your documents or client data — designed to work within a properly configured, privacy-first AI setup.
See How It Works →