Law Firm AI Security Guide — February 2026

Securing Anthropic Claude
for Your Legal Practice

A comprehensive guide to data privacy, client confidentiality, and security configuration across Claude.ai, Claude Desktop, and Claude Code — with specific guidance on handling confidential client files.

Not all Claude plans protect client data equally. As of September 2025, consumer plans (Free, Pro, Max) may use your conversations to train AI models by default. Before uploading any confidential client files to any Claude product, confirm you are on the correct plan tier and your privacy settings are properly configured. This guide explains exactly how.

Understanding the Claude Product Landscape

Anthropic offers several ways to use Claude. Each has different privacy characteristics. Understanding which product you are using — and which terms govern it — is the essential first step.

Claude.ai (Website)

The browser-based chat interface at claude.ai. You type prompts, upload files, and receive responses. All processing happens on Anthropic’s servers.

Consumer Terms (Free/Pro/Max)
Commercial Terms (Team/Enterprise)

Claude Desktop App

A native application for Mac and Windows. Despite running locally, all prompts and files are sent to Anthropic’s servers for processing. The app is a convenience wrapper — not a local AI.

Same terms as your claude.ai plan

Claude Code (CLI)

A command-line tool for agentic coding that can read, write, and execute code on your machine. It sends your code and prompts to Anthropic’s servers. Data policies depend on your account type.

Terms vary by plan

Claude for Work (Team/Enterprise)

Business plans accessed through the same claude.ai interface but governed by Anthropic’s Commercial Terms. These provide significantly stronger data protections than consumer plans.

Commercial Terms
Critical Distinction: The Claude Desktop app does NOT process data locally. Everything you type or upload is sent to Anthropic’s cloud servers. The “desktop” label may suggest local processing, but it is functionally identical to using claude.ai in a browser. Treat it with the same caution.

Your Ethical Obligations (ABA Formal Opinion 512)

In July 2024, the ABA Standing Committee on Ethics and Professional Responsibility published Formal Opinion 512, its first formal guidance on lawyers’ use of generative AI. The opinion maps existing Model Rules to AI tools and creates specific obligations around data handling.

Model Rule 1.6 — Confidentiality
Before entering any client information into an AI tool, you must evaluate the risk of disclosure to others both inside and outside your firm. This includes understanding whether the tool “learns” from inputs and could surface client information in responses to other users.
Model Rule 1.1 — Competence
You must have a reasonable understanding of the capabilities and limitations of AI tools you use — including how they handle data. “Boilerplate consent in engagement letters will not be adequate.”
Model Rules 5.1 & 5.3 — Supervision
Managerial lawyers must establish clear firm-wide policies on AI use, provide training on ethical risks, and ensure that both lawyer and non-lawyer staff comply when using AI tools.

In practical terms, Opinion 512 requires you to answer three questions before using any Claude product with client data:

  1. Can the AI provider access or retain my client’s information?
  2. Could my client’s information be used to train AI models that serve other users?
  3. Have I implemented adequate safeguards to prevent unauthorized disclosure?

The answers depend entirely on which Claude plan tier you are using.

Plan Tiers: A Risk-Based Comparison

Warning — Misleading Plan Names: The name “Pro” suggests professional or business-grade privacy. It does not. Claude Pro is a consumer plan. Similarly, some “Team” accounts may still fall under consumer terms depending on how they were provisioned. Always verify which Terms of Service govern your account.
Consideration                          Consumer (Free / Pro / Max)              Commercial (Team / Enterprise)
Governing terms                        Consumer Terms of Service                Commercial Terms of Service
Data used for AI training?             Yes, by default (opt-out available)      Never (regardless of settings)
Data retention                         30 days (opt-out) or 5 years (opt-in)    Configurable; API: 7 days default
Zero Data Retention option?            No                                       Available (API/Enterprise)
Employee access to conversations       Trust & Safety team on flagged content   Trust & Safety only; Primary Owner controls
SSO / SAML integration                 Not available                            Enterprise: Yes
Custom retention controls              No                                       Enterprise: Yes (min 30 days)
Data Processing Addendum (DPA)         Not available                            Available
BAA (HIPAA) available?                 No                                       Enterprise: Yes
Safe for confidential client files?    Not recommended                          Yes, with proper configuration

Decision Flowchart: Can I Use Claude for This Client File?

Before Uploading Any Confidential Client Document
Q1
Does the file contain any client-identifiable information, privileged communications, or confidential case details?
No
You may use any Claude plan. General legal research, public document analysis, and de-identified data carry minimal risk. Proceed with normal caution.
Yes
Continue to Q2 ↓
Q2
Are you on a Claude for Work (Team/Enterprise) plan governed by Anthropic’s Commercial Terms?
Yes
Your data will not be used for training. Proceed to Q3.
No
STOP. Do not upload the file. Consumer plans (Free/Pro/Max) may use your data for model training. Even with opt-out, data is retained for 30 days and Anthropic’s Trust & Safety team may access flagged content. This is unlikely to satisfy your obligations under Model Rule 1.6.
Q3
Have you obtained informed consent from the client to use AI tools in their matter? (Opinion 512 says boilerplate engagement letter consent is not sufficient.)
Yes
You may proceed with the upload. Follow the file-handling procedures in the next section.
No
Obtain specific informed consent before proceeding. Document it in the client file.

Handling Confidential Client Files in Claude

When you upload a document to Claude — whether through the website, the desktop app, or Claude Code — the file is transmitted to Anthropic’s servers for processing. Understanding this data flow is essential.

How File Processing Works

What Many People Assume vs. What Actually Happens

Assumption: “The file stays on my computer and Claude reads it locally.”
Reality: The entire file is uploaded to Anthropic’s servers for the AI to process.

Assumption: “The Desktop app processes everything on my machine.”
Reality: The Desktop app is a wrapper — all processing happens in Anthropic’s cloud.

Assumption: “Only the text I type gets sent to Anthropic.”
Reality: Files, images, and all context in the conversation are transmitted.

Rules for Client Files

Non-Negotiable: Never upload client files to a Free, Pro, or Max account. These are consumer plans where data may be used for AI training and retained for up to 5 years. This applies regardless of whether you use the website, desktop app, or Claude Code logged into a consumer account.

On a Claude for Work (Team/Enterprise) Account:

Claude Code with Client Source Material:

If You Must Use a Consumer Plan: Minimum Safety Settings

If your firm is evaluating Claude before committing to a commercial plan, or if individual attorneys use personal accounts for non-client work, these settings are the absolute minimum configuration required.

Reminder: Even with these settings, consumer plans are not recommended for confidential client information. These steps reduce risk for general legal research and non-client tasks only.

Step 1: Disable Model Training

Navigate to:
  claude.ai → Settings → Privacy & Data Controls

Set the following:
  "Help improve Claude": OFF

This prevents your conversations from being used
to train future AI models. Without this step, data
retention extends from 30 days to 5 YEARS.

Step 2: Disable Memory (If Available)

Navigate to:
  claude.ai → Settings → Memory

Set:
  Memory: OFF

Memory stores information about you across conversations.
If you discuss client matters in one conversation, that
context could surface in another.

Step 3: Delete Conversations After Use

Deleted conversations are removed from your history immediately and from Anthropic’s backend systems within 30 days. Deleted chats are excluded from training regardless of your settings. Make it a habit to delete any conversation that contained sensitive information.

Step 4: Review the Claude Desktop App

The Desktop app inherits the privacy settings of your claude.ai account. There are no separate privacy controls in the app itself. Ensure your web account settings are configured before using the desktop app.

Recommended Configuration: Claude for Work (Commercial Plans)

For any law firm handling confidential client information, Anthropic’s commercial plans are the appropriate choice. These are governed by separate Commercial Terms of Service that explicitly prohibit model training on your data.

Enterprise Plan Security Features

Feature: SSO/SAML
  What it does: Integrates with your firm’s identity provider (Okta, Azure AD, etc.)
  Why lawyers need it: Ensures only authorized personnel access Claude; enables central deprovisioning

Feature: SCIM Provisioning
  What it does: Automatically adds and removes users based on your directory
  Why lawyers need it: Prevents former employees from retaining access

Feature: Custom Retention
  What it does: Sets organization-wide data retention periods (min 30 days)
  Why lawyers need it: Aligns with your firm’s document retention policy

Feature: Data Processing Addendum
  What it does: Contractual commitment on data handling
  Why lawyers need it: Required to demonstrate “reasonable efforts” under Rule 1.6

Feature: Admin Console
  What it does: Central control over user settings, feedback, and data exports
  Why lawyers need it: Supports supervisory obligations under Rules 5.1/5.3

Feature: Audit Logging
  What it does: Records who accessed what and when
  Why lawyers need it: Evidence of ongoing security monitoring; eDiscovery readiness

API with Zero Data Retention

For the highest level of protection, Anthropic offers a Zero Data Retention (ZDR) addendum for eligible API customers. Under ZDR, Anthropic does not store your inputs or outputs except where required by law or to combat misuse. This is the gold standard for firms handling highly sensitive matters.

Claude Code: Special Considerations

Claude Code is a command-line tool that operates directly on your filesystem. It can read files, write code, execute commands, and interact with your development environment. This creates unique risks in a law firm context.

What Claude Code Can Access

Data Policies by Account Type

Account Type          Training?                        Retention
Free / Pro / Max      Default ON (opt-out available)   30 days (opt-out) / 5 years (opt-in)
Team / Enterprise     Never                            Configurable by admin
Commercial API Key    Never                            7 days (standard) / 0 days (ZDR)
Best Practice for Law Firms: Run Claude Code with a commercial API key rather than a consumer account login. This ensures your code and file contents are governed by Commercial Terms with no training and minimal retention. Verify your configuration with /config in the Claude Code CLI.
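A launch wrapper can enforce this practice by refusing to start a session without a key configured. A minimal sketch, assuming Claude Code picks up a commercial key from the ANTHROPIC_API_KEY environment variable; note that the presence of the variable does not prove which account is active, so still confirm with /config:

```python
import os

def commercial_key_present(env=os.environ):
    """Pre-flight check: True if an Anthropic API key is set in the
    environment. Assumes Claude Code will use ANTHROPIC_API_KEY;
    an interactive consumer login could still take precedence."""
    return bool(env.get("ANTHROPIC_API_KEY"))
```

A firm could call this from the script that opens client repositories and abort with a warning when it returns False.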

Preventing Accidental Exposure

# Create a .claudeignore file in your project root
# (similar to .gitignore) to exclude sensitive files:

# Client documents
/client-files/
*.pdf
*.docx

# Credentials and secrets
.env
*.pem
*.key

# Database files
*.sqlite
*.db
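An exclusion file only helps if it matches what is actually in the repository. The sketch below audits a project for files matching the example patterns above before a session starts; the pattern list and function name are this guide's own illustration, not part of Claude Code.

```python
import fnmatch
from pathlib import Path

# Patterns mirroring the example exclusion list above (illustrative).
SENSITIVE_PATTERNS = ["*.pdf", "*.docx", ".env", "*.pem", "*.key", "*.sqlite", "*.db"]
SENSITIVE_DIRS = ["client-files"]

def find_sensitive_files(project_root):
    """Return relative paths under project_root that match the
    exclusion patterns: a pre-session audit, not a replacement
    for the exclusion file itself."""
    root = Path(project_root)
    hits = []
    for path in root.rglob("*"):
        if not path.is_file():
            continue
        rel = path.relative_to(root)
        if any(part in SENSITIVE_DIRS for part in rel.parts):
            hits.append(rel)
        elif any(fnmatch.fnmatch(rel.name, p) for p in SENSITIVE_PATTERNS):
            hits.append(rel)
    return sorted(hits)
```

Running this before opening a matter-related directory gives the attorney a concrete list of files to move or exclude first.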

Developing Your Firm’s AI Usage Policy

ABA Formal Opinion 512 and Model Rules 5.1/5.3 require firms to establish clear policies governing AI use. Your policy should address, at minimum, the following areas:

1. Approved Tools and Plans

Specify which Claude products and plan tiers are approved for firm use. Consumer plans should be restricted to non-client research only. Maintain a list of approved AI tools and review it quarterly.

2. Data Classification

Define categories of information and which can be used with AI tools:

Category: Public
  Examples: Published case law, statutes, public filings
  Claude usage: Any plan

Category: Internal
  Examples: Firm templates, administrative documents, CLE notes
  Claude usage: Commercial plan preferred

Category: Confidential
  Examples: Client communications, draft pleadings, contracts under review
  Claude usage: Commercial plan only; client consent required

Category: Highly Sensitive
  Examples: M&A materials, trade secrets, medical records, sealed documents
  Claude usage: Enterprise + ZDR only; specific written consent
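Firms that build intake or document-management tooling can encode this classification as a simple lookup. A hypothetical sketch: the category names mirror the table, while the plan labels and helper name are illustrative, not an Anthropic API.

```python
# Hypothetical policy table: category names follow this guide's
# data classification; plan labels are the firm's own shorthand.
ALLOWED_PLANS = {
    "public":           {"free", "pro", "max", "team", "enterprise"},
    "internal":         {"team", "enterprise"},   # commercial preferred
    "confidential":     {"team", "enterprise"},   # plus client consent
    "highly_sensitive": {"enterprise_zdr"},       # Enterprise + ZDR only
}

def plan_permitted(category, plan):
    """Return True if firm policy allows this data category on this plan."""
    return plan in ALLOWED_PLANS[category]
```

Such a check could gate an upload action in internal tooling, forcing the consent and plan-tier questions from the flowchart before any file leaves the firm.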

3. Client Disclosure and Consent

Draft a specific AI disclosure for engagement letters that goes beyond boilerplate language. Opinion 512 requires clients to understand how AI tools will be used in their matter, including what information may be processed by third-party AI services. Keep signed consent forms in the client file.

4. Training and Supervision

All attorneys and staff who use Claude must complete training covering: the difference between consumer and commercial plans, what information may and may not be entered, how to verify output accuracy, and how to report suspected data exposure. Document completion and refresh annually.

5. Incident Response

If confidential client data is inadvertently uploaded to a consumer Claude account:

  1. Delete the conversation immediately (removed from backend within 30 days)
  2. Document the incident, including what was uploaded and the account type
  3. Assess whether the “Help improve Claude” setting was on — if so, data may already be in training pipelines
  4. Contact Anthropic support to request data deletion if the setting was on
  5. Evaluate notification obligations under your jurisdiction’s rules
  6. Review whether client notification is required under Model Rule 1.4
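The documentation requirement in step 2 is easier to satisfy with a standard record. A sketch of one possible incident record; the fields and class name are this guide's suggestion, not a mandated format.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIDataIncident:
    """Illustrative record tracking the incident-response steps above."""
    description: str                  # what was uploaded
    account_type: str                 # e.g. "consumer-pro"
    training_setting_on: bool         # was "Help improve Claude" enabled?
    conversation_deleted: bool = False
    anthropic_contacted: bool = False
    client_notified: bool = False
    occurred_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def outstanding_steps(self):
        """List remediation steps still open under the firm's checklist."""
        steps = []
        if not self.conversation_deleted:
            steps.append("delete conversation")
        if self.training_setting_on and not self.anthropic_contacted:
            steps.append("contact Anthropic support to request deletion")
        if not self.client_notified:
            steps.append("evaluate Rule 1.4 client notification")
        return steps
```

Keeping these records in the matter file doubles as evidence of the "reasonable efforts" Rule 1.6 requires.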

Security Checklists

Print and complete these checklists with your IT administrator or developer. Retain the completed versions as documentation of your “reasonable efforts” under Model Rule 1.6.

Firm-Wide Configuration

Individual Account Settings (Consumer Plans — Non-Client Work Only)

Claude Code Configuration

Enterprise Plan Administration

Resources & References

ABA Ethics Guidance

ABA Formal Opinion 512 — Generative Artificial Intelligence Tools (July 2024)
Related ABA Opinions
Formal Opinion 477R — Securing Communication of Protected Client Information
Formal Opinion 08-451 — Outsourcing Legal and Nonlegal Support Services
Formal Opinion 93-379 — Billing for Professional Fees, Disbursements and Other Expenses
State-Level AI Guidance
California, Florida (Advisory Opinion 24-1), New York, New Jersey, Texas, Pennsylvania, Kentucky, Michigan, Missouri, and West Virginia have all issued jurisdiction-specific guidance. Colorado practitioners should check the Colorado Bar Association for state-specific requirements.

Anthropic Documentation

Third-Party Analysis

Final Note: AI data policies change frequently. Anthropic has updated its terms multiple times in 2025 alone. Designate someone in your firm to monitor Anthropic’s Privacy Center and update this guide as policies evolve. Security is not a one-time configuration — it is an ongoing obligation.