The ABA's Standing Committee on Ethics and Professional Responsibility issued Formal Opinion 512 on July 29, 2024. If you're using generative AI tools in your practice — or planning to — this opinion is required reading. It won't tell you exactly what to do, but it tells you what questions you must be able to answer.

Here's a practical breakdown of what Opinion 512 actually requires, without the academic framing.

What the Opinion Covers

Opinion 512 addresses the use of generative AI tools — tools like Claude, ChatGPT, and legal-specific AI products — in the context of client representation. It works through the major applicable ethics rules: competence (Rule 1.1), confidentiality (Rule 1.6), communication and consent (Rule 1.4), supervision (Rules 5.1 and 5.3), meritorious claims (Rule 3.1), candor toward the tribunal (Rule 3.3), and fees (Rule 1.5).

The opinion is deliberately general. It acknowledges that AI tools are "a rapidly moving target" and that specific guidance will continue to evolve. What it provides is a framework — a set of questions you need to be able to answer before you use any AI tool on client matters.

Competence: You Don't Need to Be a Tech Expert, But You Can't Be Ignorant

Rule 1.1 requires "the legal knowledge, skill, thoroughness and preparation reasonably necessary for the representation," and Comment [8] extends that to keeping abreast of "the benefits and risks associated with relevant technology."

Opinion 512's position: you don't need to become an AI engineer. You need a reasonable understanding of what the tool does, what its limitations are, and what can go wrong. Practically, that means being able to answer:

  • Does this tool hallucinate? How often, and on what kinds of tasks?
  • What happens to my client's data when I submit a prompt?
  • How do I verify the output before relying on it?
  • Am I using this tool for a task it's actually suited for?

On Hallucinations

Studies of leading legal AI tools have found hallucination rates between 17% and 33%. That's not a reason to avoid AI — it's a reason to verify output. The obligation is independent review proportionate to the task, not blind acceptance of whatever the model produces.

Importantly, the opinion notes that as AI tools become more capable and widespread, the duty of competence may eventually require their use — just as lawyers are now expected to know how to use email and electronic research tools. Getting fluent with AI now is part of staying competent.

Confidentiality: The Hard Part

This is where most attorneys have gaps, and it's the most consequential section for day-to-day practice.

Rule 1.6 requires "reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation." Before inputting client information into any AI tool, you must evaluate the risks. The opinion lists the factors:

  • Likelihood of disclosure or unauthorized access
  • Sensitivity of the information
  • Difficulty of implementing safeguards
  • Extent to which safeguards would impair your ability to represent the client

What this means in practice: you cannot use a consumer-tier AI product for confidential client work without doing the analysis — and for most such products, that analysis will not come out in your favor. Consumer AI plans typically retain your data, may use it for model training, and give you no contractual data processing commitments.

The Cross-Contamination Risk

Opinion 512 specifically flags self-learning models: a self-learning AI tool into which you input client information could surface that information in response to prompts from a different user on an unrelated matter. This is another reason why "no training" commitments from AI providers matter — and why you should verify them contractually, not just trust the marketing.

Client Consent: Boilerplate Doesn't Cut It

Rule 1.4 requires communication sufficient for clients to make informed decisions about their representation. Opinion 512 states that boilerplate consent is not sufficient — a generic clause in an engagement letter saying you may use technology does not meet the standard.

Informed consent requires disclosure of what tool is being used, how it handles client data, and what the client is agreeing to. That conversation needs to happen specifically, not buried in standardized language most clients don't read.

The practical implication: your engagement letters likely need updating. The conversation about AI tool use should happen when the representation begins, not after you've already run a client document through a model.

Supervision: Managing Staff AI Use Is Your Problem

Rules 5.1 and 5.3 place managerial obligations on supervising lawyers. You are responsible for ensuring that attorneys and non-attorney staff comply with the ethics rules — including in their use of AI tools.

Opinion 512 is clear that you cannot simply tell associates to use AI responsibly and consider the obligation discharged. You need a firm policy addressing:

  • Which tools are approved
  • What kinds of tasks they can be used for
  • What work cannot be submitted to AI tools
  • How output must be verified before relying on it
  • How billing works when AI assists in the work

For solo practitioners, there is no staff to supervise, but the same discipline applies to your own use: you need a conscious, documented approach, not just informal habits.

Fees: Passing AI Costs Through Requires Thought

Rule 1.5 prohibits unreasonable fees. Opinion 512 raises the fee question without resolving it definitively — the guidance is still developing. But the basic framework: if AI dramatically reduces the time required for a task, billing the same hours as if the work were done manually may be unreasonable. And charging clients for AI tool costs as disbursements requires the same scrutiny as other expenses.

This isn't a reason to avoid AI. It's a reason to think carefully about your billing practices and update your engagement letter language to address AI-assisted work.

The Practical Checklist

Opinion 512 boils down to these questions, which every solo and small firm attorney should be able to answer:

Competence — Rule 1.1

  • Do I understand how this AI tool works, including its failure modes?
  • Am I independently verifying output before relying on it?
  • Am I staying current as the tools evolve?

Confidentiality — Rule 1.6

  • Have I reviewed the provider's data retention and training policies?
  • Have I executed a data processing agreement with the provider?
  • Is the data handling configuration appropriate for the sensitivity of this matter?
  • Have I considered local AI options for the most sensitive work?

Client Consent — Rule 1.4

  • Have I disclosed my AI tool use to clients specifically, not just generically?
  • Has the client given informed consent for their documents to be processed by this tool?
  • Is my engagement letter updated to address AI?

Supervision — Rules 5.1 / 5.3

  • Do I have a firm AI usage policy?
  • Does it address which tools are approved, for what tasks, with what verification requirements?

Fees — Rule 1.5

  • Are my billing practices for AI-assisted work reasonable given the time saved?
  • Have I updated my engagement letter to address AI-related costs?

Opinion 512 isn't designed to stop attorneys from using AI — the opposite, in fact. It positions AI fluency as part of the evolving duty of competence. What it requires is that you approach these tools with the same professional deliberateness you bring to any other consequential decision in a representation.