The rise of legal tech is taking the legal field by storm. Every law firm is looking at the possibility of turning associates into super-associates, but productivity isn't the entirety of the job. Part of the purpose of working at a law firm is training, both for associates and for partners. If associates see their skills wither, the legal profession may be worse off in the long run despite the short-term productivity gain.
Over the past few weeks, numerous practitioners managing their law firms have asked me what the proliferation of AI models means for the legal field and their training programs. How will associates improve? Will they become dependent on these tools and be worse practitioners over time? And will writing patents by hand come to seem like teaching Navy sailors to navigate by the stars instead of GPS (which the military is actually doing!)?
Thankfully, best practices are emerging from other industries that the legal profession can learn from. This week in Nonobvious, we’re going to talk about a few of them and what the future might look like.
Digital Transformation for a Legal Education
As we previously covered, some of the most intriguing studies have shown that artificial intelligence, and large language models specifically, helps the least experienced performers the most while making everyone significantly faster, including legal professionals. These speed gains can be truly enormous: one study from New Zealand conducted with senior attorneys found that LLMs performed certain tasks with high quality and a 99.97% reduction in task completion time. This means that new lawyers, who are often the bottleneck in producing patents, will be among the greatest beneficiaries from the standpoint of firm productivity. More experienced practitioners will benefit too, though mostly in drafting speed, at least where the technology stands today. For practitioners at smaller firms, this can be the difference between drowning and thriving; at big firms, it means freeing up partner time to spend with clients or managing associates, even if drafting quality only improves slightly. Either way, it is a win, but one with interesting training implications.
The finance industry offers a notable example: bankers are already using AI specifically to accelerate the learning of their associates, having them use AI for some tasks while working by hand to train on others. For new lawyers, using AI at least some of the time has two main benefits. First, associates are trained not just in the subject matter but also in the use of AI systems, which is the future of work. Second, the AI effectively acts as a one-on-one tutor, and tutoring is wildly effective for education. A 1984 paper by Benjamin Bloom found that average students who received tutoring performed two standard deviations better than students taught conventionally, a finding known today as Bloom's "2 sigma" effect. This effect has been replicated, for example with "cognitive computer" tutoring programs. The result is that associates get trained up faster. That said, you may still want to audit them by having them hand-write an application periodically, reinforcing their legal education through spaced repetition, a proven method for cementing long-term understanding.
The upshot of this accelerated learning is that by the end of their first year, an associate can do the work of a 3rd-year associate, which means they can start to interface with clients earlier, bringing them into contact with the real meat of the practice of law, servicing the client and hearing their needs, sooner than before. It also means that while a new lawyer's work might look quite familiar, a mid-level associate's work might look quite different than it did in the past, perhaps a little more like a junior partner's. Their legal practice will involve more client communication and more work with AI systems than completing routine tasks or rudimentary legal research. In turn, this will help young practitioners more quickly develop a feel for the client's needs, an understanding of how their work connects to legal and business objectives, and a sense of taste.
One way to think about it is that the job of training a practitioner is actually composed of two parts: training them to be an associate for 2-3 years and then training them to be a partner for 3-5 more years. With new AI tools, this instead becomes a single, long period of partner training that spans the entire associate journey, with a ramp-up accelerated by generative AI.
The nature of legal work is going to change, which means that training needs to adapt too. While it is important to be able to write an application by hand, the work of the future is editing as much as it is writing from scratch. You may find that rather than teaching associates the specific word choices you prefer as a partner, you are "letting them in" to your job. Instead of partners editing their work, associates will work with partners to collaboratively edit the work of the AI, which effectively acts as the associates' own associate. While practical legal education has focused on the practice of routine tasks to develop intuition, the future will involve practicing the higher-level work of focusing on client needs and legal outcomes from the start. While previously the province of more senior associates and partners, this will become part of training from Day One. Even as law schools integrate generative AI and legal tech into their curricula, the practical apprenticeship of an associate position remains important. New lawyers will come into firms already trained on tools like Edge, just as they now come trained to use search tools like LexisNexis or Westlaw. But it will still fall on legal professionals to train new lawyers in the work of lawyering, which means working with clients and driving legal outcomes, not just answering questions in IRAC form.
One Harvard Business Review paper also argues that part of training involves reskilling middle or senior employees. It is important to remember that generative AI is a very fast-changing field. There is no shame in relying on tech-savvy associates to help partners adopt the technology. In fact, partners may want to leverage associates to reach decisions more quickly and evaluate tools more closely with the people who will be using them regularly.
Law firms may find that case studies in specific training modules will be faster, at least to start, than diving right into using the tools on actual client matters. Typically, a law firm begins training by throwing a young practitioner directly into client matters under strict supervision. With generative AI, the models themselves can serve as tutors and support automation, so preparing case studies for a young practitioner to start with may be quite helpful. An added benefit is that the same case studies can rapidly bring up to speed more experienced practitioners who are less familiar with AI models.
The future, in other words, may well be:
For younger employees, mix manual practice with AI-powered work
Check their work and show them how they could have edited AI output better
Train them to manage, edit, and be creative more than completing routine work
See how quickly you can accelerate a 2nd-year associate into 3rd-year associate work
Ensure training turns associates into AI-native tool users
How are you tackling this problem? We are interested in your view. We may cover some of the most interesting perspectives in a shout-out in next week’s Nonobvious.
Prior Art
Last year, we talked about ARM and how it built the world's highest-volume chip business on the back of a patent licensing strategy. By avoiding capital expenditure and investing purely in R&D, ARM has been able to dominate RISC chips in a world where efficiency, especially energy efficiency, is king.
This week was a major one for ARM's strategy. ARM failed to catch fire in the GPU market with its Mali products, but now everyone is looking for an alternative to Nvidia's expensive (and scarce) chips for AI. Nvidia's GPUs are by far the most popular chips for both training and running AI workloads, but there is a sense that while GPUs are optimal for training, they may not be for using the model once it is already trained. Google announced a new ARM-based chip for AI inference workloads. This could be the breakthrough the industry has been waiting for; a patent license-based chip could break through production bottlenecks by leveraging the manufacturing capacity of the many.
Weekly Novelties
Pfizer and BioNTech won a stay in Moderna’s patent infringement case against them while PTAB reviews challenges to two of the patents (Fierce Pharma)
The Federal Circuit took up an interesting case on false marking: Crocs calls its products “patented” but the only patents are design patents. Is that false advertising? And is that a patent issue or a consumer protection issue? (PatentlyO)
Edge was featured on PatentlyO for its use at University of Missouri in a patent drafting class (PatentlyO)
A jury awarded G+ Communications $142 million in a 5G patent case (Reuters)
USPTO put out a notice of proposed rulemaking governing the PTAB director review process (IP Watchdog)
In the latest AI volley, Congressman Adam Schiff proposed legislation to require AI companies to publish all copyrighted materials used in the creation of an LLM before the model's release (Billboard)