Our Take
California is codifying what courts already demand: lawyers cannot delegate professional judgment to AI systems, even for routine tasks.
Why it matters
Other states typically issue non-binding ethics opinions on AI use; California's enforceable rules would create disciplinary liability for inadequate AI oversight. Law firms need governance policies, not aspirational statements.
Do this week
Legal teams: audit your firm's AI governance policies by June 1 so you can identify gaps before California's rules advance.
California Proposes AI-Specific Amendments to Six Ethics Rules
The State Bar of California's Standing Committee on Professional Responsibility and Conduct has proposed amendments to six Rules of Professional Conduct that would create specific AI obligations for lawyers. The changes, approved at the committee's March 13, 2026 meeting, embed AI requirements into existing rules on competence, client communication, confidentiality, candor toward tribunals, and supervision.
The most significant change adds a new comment to Rule 1.1 requiring lawyers to "independently review, verify, and exercise professional judgment regarding any output generated by the technology that is used in connection with representing a client." The requirement contains no exception for routine tasks or low-stakes matters.
Other amendments expand the definition of "reveal" in confidentiality rules to include exposing client information to AI systems where there is material risk of unauthorized access. The candor rule explicitly addresses AI hallucination, requiring verification of all cited authorities "including any cited authorities generated or assisted by artificial intelligence."
The rulemaking was initiated by the California Supreme Court itself in an August 2025 letter directing the bar to consider incorporating AI guidance into formal rules.
Enforceable Rules Create Disciplinary Liability
Most states addressing AI in legal practice have issued ethics opinions, which carry persuasive but not disciplinary force. California's approach would create enforceable obligations, with potential disciplinary sanctions for violations.
The independent verification standard is particularly strict. It bars casual reliance on AI-generated work product and requires a lawyer's personal review of every output used in client representation. This directly addresses the AI citation-fabrication cases that have drawn judicial sanctions across multiple jurisdictions.
The confidentiality amendments would force lawyers to reconsider cloud-based AI tools with unclear data retention policies. Under the proposed definition, inputting client information into an AI system constitutes "revealing" confidential information if there is a material risk that the system or other users could access that data inappropriately.
Law firm management would also face new governance requirements. Rules 5.1 and 5.3 would require managerial lawyers to establish AI procedures and to ensure that nonlawyer staff receive appropriate AI supervision.
Review AI Tool Selection and Supervision
Firms using AI tools need functioning governance policies that address data handling, output verification, and staff training. The proposed rules make clear that AI supervision extends to all personnel using these tools, not just lawyers.
The verification requirement applies to any AI output used in client representation, including research, drafting, citation checking, and document review. Lawyers cannot assume AI output is accurate, regardless of the tool's reputation or the task's apparent simplicity.
Client communication obligations depend on risk assessment. Routine AI use may not require disclosure, but lawyers must evaluate whether their AI use presents a "significant risk" to, or "materially affects," the representation, and that assessment must continue throughout the engagement as circumstances change.
The public comment period closed May 4, but the rulemaking process continues: the committee will review the input and could modify the proposals before they advance to the California Supreme Court for final approval.