The ABA Has Come Out with AI Ethics Guidelines
What do they say, and how do they stack up against the USPTO guidance? We dive in
Lawyers everywhere are looking for guidance on the ethical use of AI tools. At Edge, this is one of the top questions we get. For patent practitioners, there is thankfully now guidance from the USPTO on the use of AI in a patent practice. But patent lawyers are also subject to ethics rules from their state bars, so many are still waiting with bated breath.
The ABA has now issued a formal opinion on the matter, ABA Formal Opinion 512. It comes just in time, as bar associations all over the country are starting to issue (generally very favorable) guidance on the use of AI tools in legal practice. The ABA's guidance was generally favorable but somewhat vague; in many ways, the same was true of the USPTO's. This week we dive into the guidance, what it means for practitioners, and where it still needs tightening up.
Overview and Comparison to USPTO
Competence
Lawyers have a duty of competence to their clients. This doesn’t mean that lawyers have to win every case, but that they must do their best and not make obvious mistakes. In the real world, lawyers delegate throughout their work: they use associates and paralegals, send overflow work out to other attorneys, and even use software like spell-checkers, search tools, and statistical analysis. A duty of competence is therefore, implicitly, a duty of oversight, both in the use of tools and in understanding “the benefits and risks associated with the technologies used to deliver legal services.” For the ABA, this means the following:
Understanding how AI tools work as a user; this does not require becoming an AI expert. The focus is on knowing the strengths and weaknesses of the tools
Keeping up to date on the fast-moving field of AI (one easy way to do that: subscribe to Nonobvious!)
Reviewing output for accuracy, bias, quality, and more; there cannot be “uncritical reliance” and lawyers may not “abdicate their responsibilities” specifically when “professional judgment” is involved
When it comes to competence, the ABA is shockingly forward-thinking and understands that AI may become required to provide competent counsel in the future. The guidance observes, for example, that one could not competently serve clients today without sending email, creating electronic documents, or conducting online searches. What is not clear is what level of materiality matters for competence; that term does not appear anywhere in the guidance.
The guidance also leaves unclear what the baseline is to avoid “abdicat[ing] their responsibilities.” I am generally critical of guidelines that treat machines more harshly than humans. If the machine makes mistakes as often and of a similar type as an associate, and the mistakes are of the same level of materiality, is a lawyer satisfying their ethical obligations by exercising the same level of care that they ordinarily would with a junior employee? Or does a lawyer have a higher standard—implicitly, perfection—when they use AI assistance?
The USPTO makes the same observation, highlighting the “duties of competence and diligence.” Unlike the ABA, the USPTO does specify that the practitioner must act with “reasonable diligence,” though it still doesn’t say what that means. The USPTO also requires practitioners to “ensure that all statements in the paper are true” when it comes to facts, to “confirm[] the accuracy of all citations to case law” and other references, and to “ensure that all arguments and legal contentions are warranted by existing law.” To some extent, this flies in the face of how practitioners do their jobs today: patent attorneys rely on boilerplate written by other people in their firm without checking every example in a definition, for instance, and may rely on information in issued patents without verifying all of it. When does this cross the line? That is just as unclear.
On this issue, the ABA has a funny, short section on “meritorious claims and contentions and candor toward the tribunal.” In essence, there have been so many stories by now of litigators in particular using ChatGPT-powered tools that fabricated citations or spurious arguments. The section is a reminder that these obligations are heightened when making statements before tribunals and cover even accidental misstatements.
Confidentiality
Confidentiality is one place where every lawyer asks questions, and rightly so. The ABA and USPTO have similar things to say on the matter. Essentially, lawyers must take “reasonable efforts to prevent the inadvertent or unauthorized disclosure” of client information. For IP, where disclosure can destroy patentability, this is particularly important. USPTO and the ABA are drawing from the same set of model rules here, so the guidance is relatively similar. The ABA does not set an unreasonable diligence standard here; lawyers should read the terms of service and privacy policy, and may want to talk to an outside expert in cybersecurity or IT. But the focus is on understanding the policies, not on requiring a full forensic analysis. This is quite reasonable and a good best practice.
The ABA takes a position that is balanced overall, allowing lawyers to assess the likelihood of disclosure, the sensitivity of the information, the difficulty of implementing safeguards, and even the potential impact of those safeguards on the quality of client representation. Many of the requirements—such as access control both in and outside the firm—are not unique to AI tools. “Self-learning” AI tools get a special shout-out for concern given the potential for confidential information to become embedded in a general model that may be accessible to other customers of an AI service.
Although we will talk more about disclosure and consent in the next subsection, the ABA is quite clear that self-learning systems require client consent. It is important to note that this line does not apply to AI tools generally, just to ones that are self-learning. Furthermore, the way the ABA uses “self-learning” suggests that these particular disclosures are required only for training of generative systems, not for other categories like data aggregation for statistical purposes or discriminative models used for classification. Surprisingly, though, the ABA raises the bar by saying that “merely adding general, boiler-plate” provisions to engagement letters is not sufficient, but it does not say what is. For lawyers who are making this decision now: self-learning is not the default for machine learning. Generally, consumer services like ChatGPT are self-learning; enterprise systems like Edge generally are not. You should ask, but know that self-learning is an active choice by a software developer that requires work and maintenance. And you can always review cybersecurity certifications, like SOC 2 Type II, to help bolster your confidence.
Communication
The ABA and USPTO are least clear on the topic of when to disclose the use of AI to clients. Both rely on the rule that lawyers must “reasonably consult with the client about the means by which the client's objectives are to be accomplished,” but neither specifically requires a duty to disclose the use of such tools. While the ABA says that disclosure is not always required, it may be, depending on “the facts of each case.”
There are a few clear bright lines. If a client asks whether a lawyer is using AI, the lawyer must answer honestly. And if the use of AI is germane to the justification for a fee, that must be disclosed as well. What is not quite clear is whether the disclosure requirement depends on the tool. The ABA suggests that disclosure is required when the lawyer “proposes to input information relating to the representation” into an AI tool. But what does that mean, and why is it required if the lawyer has already done their diligence on the tool per their obligations relating to competence and confidentiality? Lawyers enter information “relating to the representation” into non-AI tools all the time. Confidential workflow information is entered into Clio, documents are saved on Box, and research is conducted on LexisNexis. You have no obligation to disclose those. And isn’t inputting some information necessary to use an AI tool at all—meaning that this would be a backdoor disclosure requirement for every use of AI? The most interesting example of a required disclosure is when AI is used to help render a judgment rather than merely produce output. But would, say, a statistical ranking system require disclosure if the lawyer is making the decision? What if the ranking is only used to sort items of interest for efficiency? And which technologies would count as “AI” in the first place?
What is a practitioner to do with this advice? The USPTO does not require you to disclose the use of AI tools to the Office, but it does require you to stand by your signature and to disclose when a claim was not touched by a human inventor; it also puts practitioners on notice that requests may be made about the use of AI, specifically around inventorship. And while the USPTO guidance is vague and thus relatively permissive, the ABA opinion leaves very little that might not require disclosure, without defining well which types of tool use actually do.
This is the area that will need the most clarification, and likely, revision.
Firm management
The most interesting subsections of the guidance—which show how deeply the ABA thought about this problem—are the areas that fall under the rubric of firm management (though the ABA does not group them this way). They include fees and supervisory responsibilities.
On the managerial side, the ABA advises that law firms need to establish training programs, clear AI policies, and supervisory rules that take into account the AI tools they are using. The ABA highlights that lawyers have particular supervisory duties with respect to nonlawyers. Much of this is standard practice when using vendors and subcontractors, like reference checks and understanding their policies. If you have followed our series on how to buy software, you will largely be following these practices already.1
The other is fees. First, lawyers must communicate any AI-related surcharges clearly and ahead of time. This is simple and just good client management. Note that although the ABA says this practice is not forbidden, it also says that “overhead” may not be charged back to clients; this is an ambiguity that will need to be resolved later (though the market will likely handle it; clients may not tolerate surcharges for long). You must also be honest: if AI makes you faster, your billing must reflect that. We will see whether there is increased ethical enforcement on this matter. The move to flat-fee billing for more matters will likely make this irrelevant in many fields.2 Still, it is interesting that the ABA is taking a stance here (at least, an ethical one) on the efficiency conflict of interest inherent in the billable-hours model. The ABA’s answer to lawyers is: deal with it. This is particularly notable given that the ABA has acknowledged that AI may be required to provide competent counsel in the future; arguably, that would create an ethical obligation for efficiency.
Weekly Novelties
Y Combinator held a policy event on AI, featuring FTC Chair Lina Khan and California State Senator Scott Wiener; the talks by the speakers are online (watch on YouTube, report on LinkedIn)
In the newest of a string of bills attempting to overturn the past two decades of SCOTUS patent jurisprudence, two senators introduced the RESTORE Act, which would undo eBay v. MercExchange, a decision criticized for making it too difficult to get an injunction (IP Watchdog)
Amazon sued Nokia for patent infringement over networking and virtualization patents, a rarity for a company that normally finds itself as the defendant (The Register)
Sanofi sued Sarepta over manufacturing patents in the gene therapy space; cheap manufacturing is the key to reducing the cost of gene therapies, which often run to seven figures (Fierce Bio)
Qualcomm signed a large licensing deal with Honor, an OEM (IAM)
Western Digital was hit with a $262 million jury verdict in a hard drive patent case (Reuters)
The Federal Circuit held that the PTAB did not overreach in a voice technology patent case, though it allowed the patent holder to preserve its claim construction arguments (Bloomberg Law)
One funny note from the ABA is to determine whether the vendor is an “attractive target” for cyberattacks. This is a little silly; every company of consequence, including every law firm, is an attractive target for cybercriminals.
Interestingly, though, the ABA does say that firms may have an ethical obligation to reevaluate the flat fees they charge if an AI tool makes them much more efficient. One imagines this will happen precisely never.