Understanding the Claude Product Landscape
Anthropic offers several ways to use Claude. Each has different privacy characteristics. Understanding which product you are using — and which terms govern it — is the essential first step.
Claude.ai (Website)
The browser-based chat interface at claude.ai. You type prompts, upload files, and receive responses. All processing happens on Anthropic’s servers.
Governed by: Consumer Terms (Free/Pro/Max) or Commercial Terms (Team/Enterprise)
Claude Desktop App
A native application for Mac and Windows. Despite running locally, all prompts and files are sent to Anthropic’s servers for processing. The app is a convenience wrapper — not a local AI.
Same terms as your claude.ai plan
Claude Code (CLI)
A command-line tool for agentic coding that can read, write, and execute code on your machine. It sends your code and prompts to Anthropic’s servers. Data policies depend on your account type.
Terms vary by plan
Claude for Work (Team/Enterprise)
Business plans accessed through the same claude.ai interface but governed by Anthropic’s Commercial Terms. These provide significantly stronger data protections than consumer plans.
Commercial Terms
Your Ethical Obligations (ABA Formal Opinion 512)
In July 2024, the ABA Standing Committee on Ethics and Professional Responsibility published Formal Opinion 512, its first formal guidance on lawyers’ use of generative AI. The opinion maps existing Model Rules to AI tools and creates specific obligations around data handling.
In practical terms, Opinion 512 requires you to answer three questions before using any Claude product with client data:
- Can the AI provider access or retain my client’s information?
- Could my client’s information be used to train AI models that serve other users?
- Have I implemented adequate safeguards to prevent unauthorized disclosure?
The answers depend entirely on which Claude plan tier you are using.
Plan Tiers: A Risk-Based Comparison
| Consideration | Free / Pro / Max | Team / Enterprise |
|---|---|---|
| Governing terms | Consumer Terms of Service | Commercial Terms of Service |
| Data used for AI training? | Yes, by default (opt-out available) | Never (regardless of settings) |
| Data retention | 30 days (opt-out) or 5 years (opt-in) | Configurable; API: 7 days default |
| Zero Data Retention option? | No | Available (API/Enterprise) |
| Employee access to conversations | Trust & Safety team on flagged content | Trust & Safety only; Primary Owner controls |
| SSO / SAML integration | Not available | Enterprise: Yes |
| Custom retention controls | No | Enterprise: Yes (min 30 days) |
| Data Processing Addendum (DPA) | Not available | Available |
| BAA (HIPAA) available? | No | Enterprise: Yes |
| Safe for confidential client files? | Not recommended | Yes, with proper configuration |
Decision Flowchart: Can I Use Claude for This Client File?
Handling Confidential Client Files in Claude
When you upload a document to Claude — whether through the website, the desktop app, or Claude Code — the file is transmitted to Anthropic’s servers for processing. Understanding this data flow is essential.
How File Processing Works
What Many People Assume
- “The file stays on my computer and Claude reads it locally.”
- “The Desktop app processes everything on my machine.”
- “Only the text I type gets sent to Anthropic.”
What Actually Happens
- The entire file is uploaded to Anthropic’s servers for the AI to process.
- The Desktop app is a wrapper — all processing happens in Anthropic’s cloud.
- Files, images, and all context in the conversation are transmitted.
Rules for Client Files
On a Claude for Work (Team/Enterprise) Account:
- Confirm your account is governed by Anthropic’s Commercial Terms (check with your firm administrator or account Primary Owner)
- Verify that the “Help improve Claude” setting is off at the organization level (Enterprise admins control this)
- Review Anthropic’s subprocessor list and Data Processing Addendum
- Before uploading, consider: does this file need to be uploaded in full, or can you redact identifying details first?
- Use Claude Projects to isolate client matters — do not mix clients in a single conversation
- Delete conversations containing client files after you have extracted the analysis you need
- Document your use of AI in the matter file, including what was uploaded and when
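The redaction suggestion above can be sketched as a simple preprocessing pass. This is a minimal Python illustration, assuming emails, US Social Security numbers, and phone numbers are the identifying details of concern; it is not a substitute for attorney review of the document before upload.

```python
import re

# Hypothetical redaction pass: scrub common identifiers from a file's text
# before it is pasted or uploaded. Always follow with a manual review.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each matched identifier with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label}]", text)
    return text

sample = "Contact Jane Doe at jane.doe@example.com or 555-867-5309."
print(redact(sample))
```

The pattern list is deliberately narrow; names, addresses, and matter-specific details still require human redaction.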
Claude Code with Client Source Material:
- Claude Code reads files from your local filesystem and sends their content to Anthropic’s servers
- If using a consumer plan (Free/Pro/Max), Claude Code data follows consumer retention and training policies
- If using a commercial API key, Claude Code data follows commercial terms (no training, shorter retention)
- Check which API key Claude Code is using: run /config in the CLI to verify
- For maximum protection, use Claude Code with a commercial API key that has Zero Data Retention enabled
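One quick local check is whether an API key is present in the environment at all. This Python sketch assumes Claude Code reads the ANTHROPIC_API_KEY environment variable; running /config inside the CLI remains the authoritative way to confirm which account is active.

```python
import os

# Heuristic only: if ANTHROPIC_API_KEY is set, Claude Code typically uses
# that (commercial) key; if not, it may fall back to a consumer login.
def api_key_status() -> str:
    key = os.environ.get("ANTHROPIC_API_KEY", "")
    if not key:
        return "No API key in environment - verify account type with /config"
    # Never print the key itself; show only a masked fingerprint.
    return f"API key present (ends ...{key[-4:]})"

print(api_key_status())
```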
If You Must Use a Consumer Plan: Minimum Safety Settings
If your firm is evaluating Claude before committing to a commercial plan, or if individual attorneys use personal accounts for non-client work, these settings are the absolute minimum configuration required.
Step 1: Disable Model Training
Navigate to:
claude.ai → Settings → Privacy & Data Controls
Set the following:
"Help improve Claude" → OFF
This prevents your conversations from being used
to train future AI models. Without this step, data
retention extends from 30 days to 5 YEARS.
Step 2: Disable Memory (If Available)
Navigate to:
claude.ai → Settings → Memory
Set:
Memory → OFF
Memory stores information about you across conversations.
If you discuss client matters in one conversation, that
context could surface in another.
Step 3: Delete Conversations After Use
Deleted conversations are removed from your history immediately and from Anthropic’s backend systems within 30 days. Deleted chats are excluded from training regardless of your settings. Make it a habit to delete any conversation that contained sensitive information.
Step 4: Review the Claude Desktop App
The Desktop app inherits the privacy settings of your claude.ai account. There are no separate privacy controls in the app itself. Ensure your web account settings are configured before using the desktop app.
Recommended Configuration: Claude for Work (Commercial Plans)
For any law firm handling confidential client information, Anthropic’s commercial plans are the appropriate choice. These are governed by separate Commercial Terms of Service that explicitly prohibit model training on your data.
Enterprise Plan Security Features
| Feature | What It Does | Why Lawyers Need It |
|---|---|---|
| SSO/SAML | Integrates with your firm’s identity provider (Okta, Azure AD, etc.) | Ensures only authorized personnel access Claude; enables central deprovisioning |
| SCIM Provisioning | Automatically adds/removes users based on your directory | Prevents former employees from retaining access |
| Custom Retention | Set organization-wide data retention periods (min 30 days) | Align with your firm’s document retention policy |
| Data Processing Addendum | Contractual commitment on data handling | Required to demonstrate “reasonable efforts” under Rule 1.6 |
| Admin Console | Central control over user settings, feedback, and data exports | Supervisory obligation under Rules 5.1/5.3 |
| Audit Logging | Records of who accessed what and when | Evidence of ongoing security monitoring; eDiscovery readiness |
API with Zero Data Retention
For the highest level of protection, Anthropic offers a Zero Data Retention (ZDR) addendum for eligible API customers. Under ZDR, Anthropic does not store your inputs or outputs except where required by law or to combat misuse. This is the gold standard for firms handling highly sensitive matters.
Claude Code: Special Considerations
Claude Code is a command-line tool that operates directly on your filesystem. It can read files, write code, execute commands, and interact with your development environment. This creates unique risks in a law firm context.
What Claude Code Can Access
- Any file in the working directory (and subdirectories) that you grant it access to
- Code repositories, configuration files, and environment variables
- Terminal output and system information
- All of this content is sent to Anthropic’s servers for processing
Data Policies by Account Type
| Account Type | Training? | Retention |
|---|---|---|
| Free / Pro / Max | Default ON (opt-out available) | 30 days (opt-out) / 5 years (opt-in) |
| Team / Enterprise | Never | Configurable by admin |
| Commercial API Key | Never | 7 days (standard) / 0 days (ZDR) |
Verify which account type is active by running /config in the Claude Code CLI.
Preventing Accidental Exposure
```
# Create a .claudeignore file in your project root
# (similar to .gitignore) to exclude sensitive files:

# Client documents
/client-files/
*.pdf
*.docx

# Credentials and secrets
.env
*.pem
*.key

# Database files
*.sqlite
*.db
```
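You can sanity-check which files a pattern list like this would cover before trusting it with client material. A rough Python sketch using glob-style matching; fnmatch is an approximation of gitignore-style semantics, and the exact matching rules Claude Code applies are not specified here.

```python
import fnmatch
from pathlib import Path

# Patterns mirroring the .claudeignore example above.
PATTERNS = ["client-files/*", "*.pdf", "*.docx", ".env",
            "*.pem", "*.key", "*.sqlite", "*.db"]

def is_excluded(path: str) -> bool:
    """True if either the full path or the bare filename matches a pattern."""
    name = Path(path).name
    return any(fnmatch.fnmatch(path, p) or fnmatch.fnmatch(name, p)
               for p in PATTERNS)

for f in ["notes.md", "client-files/deposition.pdf", ".env", "app.py"]:
    print(f, "excluded" if is_excluded(f) else "included")
```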
Developing Your Firm’s AI Usage Policy
ABA Formal Opinion 512 and Model Rules 5.1/5.3 require firms to establish clear policies governing AI use. Your policy should address, at minimum, the following areas:
1. Approved Tools and Plans
Specify which Claude products and plan tiers are approved for firm use. Consumer plans should be restricted to non-client research only. Maintain a list of approved AI tools and review it quarterly.
2. Data Classification
Define categories of information and which can be used with AI tools:
| Category | Examples | Claude Usage |
|---|---|---|
| Public | Published case law, statutes, public filings | Any plan |
| Internal | Firm templates, administrative documents, CLE notes | Commercial plan preferred |
| Confidential | Client communications, draft pleadings, contracts under review | Commercial plan only; client consent required |
| Highly Sensitive | M&A materials, trade secrets, medical records, sealed documents | Enterprise + ZDR only; specific written consent |
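The classification table can be encoded as a small policy helper so intake scripts can flag violations automatically. The category names and plan tiers mirror the table above; the function itself is a hypothetical sketch (a conservative reading that treats "commercial plan preferred" as commercial-only), not part of any Anthropic tooling.

```python
# Map each data classification to the set of plan tiers approved for it.
APPROVED_PLANS = {
    "public": {"free", "pro", "max", "team", "enterprise"},
    "internal": {"team", "enterprise"},       # commercial plan preferred
    "confidential": {"team", "enterprise"},   # plus client consent
    "highly_sensitive": {"enterprise"},       # plus ZDR and written consent
}

def plan_allowed(category: str, plan: str) -> bool:
    """Return True if the plan tier is approved for this data category."""
    return plan.lower() in APPROVED_PLANS.get(category.lower(), set())

print(plan_allowed("confidential", "pro"))         # False
print(plan_allowed("confidential", "enterprise"))  # True
```

Note that the helper only checks plan tier; the consent and ZDR conditions in the table still require separate verification.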
3. Client Disclosure and Consent
Draft a specific AI disclosure for engagement letters that goes beyond boilerplate language. Opinion 512 requires clients to understand how AI tools will be used in their matter, including what information may be processed by third-party AI services. Keep signed consent forms in the client file.
4. Training and Supervision
All attorneys and staff who use Claude must complete training covering: the difference between consumer and commercial plans, what information may and may not be entered, how to verify output accuracy, and how to report suspected data exposure. Document completion and refresh annually.
5. Incident Response
If confidential client data is inadvertently uploaded to a consumer Claude account:
- Delete the conversation immediately (removed from backend within 30 days)
- Document the incident, including what was uploaded and the account type
- Assess whether the “Help improve Claude” setting was on — if so, data may already be in training pipelines
- Contact Anthropic support to request data deletion if the setting was on
- Evaluate notification obligations under your jurisdiction’s rules
- Review whether client notification is required under Model Rule 1.4
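The documentation step in this procedure can be captured as a structured record. This is a hypothetical Python sketch; the field names are assumptions to adapt to your firm's matter-management system.

```python
from datetime import datetime, timezone

# Illustrative incident record for the response steps above.
def log_incident(what_was_uploaded: str, account_type: str,
                 training_setting_on: bool) -> dict:
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "uploaded": what_was_uploaded,
        "account_type": account_type,
        "training_setting_on": training_setting_on,
        # If training was on, deletion alone may not be enough:
        # the data may already be in a training pipeline.
        "contact_anthropic": training_setting_on,
        "review_rule_1_4_notification": True,
    }

record = log_incident("draft settlement agreement", "consumer-pro", True)
print(record["contact_anthropic"])  # True
```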
Security Checklists
Print and complete these checklists with your IT administrator or developer. Retain the completed versions as documentation of your “reasonable efforts” under Model Rule 1.6.
Firm-Wide Configuration
- Determined which Claude plan(s) are in use across the firm
- Confirmed that accounts used for client work are on Commercial Terms (Team/Enterprise)
- Verified that “Help improve Claude” training toggle is OFF for all accounts
- Reviewed Anthropic’s Commercial Terms, Privacy Policy, and Data Processing Addendum
- Written AI usage policy is in place and distributed to all personnel
- Client consent language for AI use has been drafted and approved
- AI usage training has been completed by all attorneys and staff
- Incident response procedure for accidental data exposure is documented
- Quarterly review of AI tools and policies is scheduled
Individual Account Settings (Consumer Plans — Non-Client Work Only)
- “Help improve Claude” is toggled OFF in Settings → Privacy
- Memory is OFF (or managed carefully)
- Conversations containing any sensitive content are deleted after use
- No client files have been uploaded to this account
- Two-factor authentication is enabled on the account
Claude Code Configuration
- Verified API key type in use (/config) — commercial key for client-adjacent work
- Created .claudeignore to exclude client files and credentials
- Environment variables and .env files are excluded from Claude Code’s context
- Claude Code is not run in directories containing unrelated client files
- Staff understand that Claude Code sends file contents to Anthropic’s servers
Enterprise Plan Administration
- SSO/SAML is configured with the firm’s identity provider
- SCIM provisioning is active for automated user management
- Custom data retention period is set to align with firm retention policy
- Organization-wide training opt-out is confirmed in admin console
- User feedback settings are configured (review before enabling)
- Data export capabilities have been tested for eDiscovery readiness
- DPA is signed and on file
Resources & References
ABA Ethics Guidance
Formal Opinion 512 — Generative Artificial Intelligence Tools
Formal Opinion 08-451 — Outsourcing Legal and Nonlegal Support Services
Formal Opinion 93-379 — Billing for Professional Fees, Disbursements and Other Expenses
Anthropic Documentation
Data Retention: How long does Anthropic store data?
Zero Data Retention: Which products does it apply to?
How Anthropic protects personal data
Claude Code: Data usage documentation
Commercial Terms of Service
Claude for Work: Data ownership and management