Drafting patents for AI: Tips for practitioners in 2023
Plus! A new battery from Sumitomo, the rise of ITC enforcement actions, and Elon Musk's X Corp's newest lawsuit
This year, artificial intelligence is clearly having a moment. With the rise of large language models, or “LLMs,” and Generative AI, new technologies are becoming available overnight. Companies are beginning to offer services and agents that make it easier to serve clients in areas like writing patents. This is quickly becoming big business, and where there is big business, there is of course intellectual property to protect that business.
But it isn’t just San Francisco AI companies turning to human patent attorneys in 2023. Patents related to Generative AI, as well as its older cousin machine learning, have increased 367% in just the last 3 years, and by a factor of nearly 20x since 2011. The largest patent holders—a mix of Chinese and American firms1—have been incredibly busy drafting patent applications, with about 10,000 active patent families as of this year. Surprisingly, medicine is only the third-largest area for AI patents—transport and telecommunications rank higher. On top of that, machine learning patents have shown great success—they had an allowance rate of 77% during a period when software patent allowance rates plummeted by 20%2—meaning that anyone with a patent practice is likely looking at framing a software technology as an AI technology as a drafting strategy.3
So in this week’s issue of Nonobvious, we’re going to dive into the drafting process for AI patents, with a particular focus on comparing and contrasting with normal software patent prosecution.
Learning to patent machines
A machine learning patent is not a one-size-fits-all description. After all, there are over 80,000 of them issued today. The typical description of a machine learning patent would be an algorithm with method claims, block diagrams, and so on. Some machine learning algorithms are indeed patented this way, and they look somewhat similar to a regular software patent. In fact, Google has an active patent for transformers, the technology behind Generative AI that underpins LLMs.4 Many, however, are clearly of a different breed: they are structured as applications. These patents are drafted as various AI and machine learning techniques applied to a specific use. This approach is particularly appealing in light of McRo v. Bandai Namco Games (2016), which specifically held that failing to claim “specific rules” does not pose eligibility issues—and not presenting specific rules is precisely at the heart of how machine learning works. This holding was important for the patent system because, in a way, it limited Alice. Focusing on use-case patent claims is also a key part of the patent drafting guidance provided by Schwegman Lundberg & Woessner in a guide covering patent application preparation for AI specifically in healthcare.
The American Bar Association has put out a guide for drafting a quality patent application in the field of AI. It covers a lot of good ground, but one of the most useful ideas is potentially including the combination of inputs in the patent claims. Some practitioners also recommend focusing less on algorithms and more on functional, multi-layered claiming that emphasizes technical advantages. Being clear on how you define a technical advantage is already a good idea for software patents, and clarifying objectively what the technical advantage is and how it is measured is a good exercise to go through with the inventor during the drafting process. Furthermore, the art unit a patent falls into matters a lot due to eligibility issues. The American Intellectual Property Law Association has found that Art Units 3600 and 3700 were the worst art units in the patent system for machine learning patents, primarily due to § 101 eligibility issues.
On top of that, patent practitioners should also consider enablement in their drafting process. Post Williamson v. Citrix Online (2015), functional claiming no longer carries a “strong” presumption against means-plus-function treatment under § 112. For machine learning algorithms in particular, this is a boon, because machine learning applications by their very nature do not take an input and a process to produce an output: they take inputs and outputs to produce a process. This lends itself well to functional claiming. With Generative AI, this is likely even more important because the learned processes are also intended to produce an output, and although there is no guidance yet, it seems likely that examiners will want to see more enablement on this topic for Generative AI technologies. The USPTO has also upheld certain terms, like “detent mechanism,” as connoting sufficient structure, and not others, like “system for.” Use these terms wisely. Keep in mind, though, that functional claims must disclose the structure that produces those functions at the level of one of ordinary skill in the art, particularly for specific applications or functions disclosed, for example, in an embodiment. Illustrations are not a substitute for disclosing an algorithm where one is required.
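To make that inputs-and-outputs point concrete, here is a minimal illustrative sketch in Python (using scikit-learn; the temperature-conversion example and the model choice are our own illustration, not drawn from any patent or guidance discussed above) contrasting a conventional program, where the developer writes the process, with a machine learning workflow, where the process is produced from example inputs and outputs:

```python
from sklearn.linear_model import LinearRegression

# Conventional software: the developer writes the process by hand.
def fahrenheit_to_celsius(f):
    return (f - 32) * 5 / 9  # the rule itself is the disclosure

# Machine learning: the process is produced from example inputs and outputs.
example_inputs = [[32], [68], [212]]   # Fahrenheit readings
example_outputs = [0.0, 20.0, 100.0]   # matching Celsius values

model = LinearRegression()
model.fit(example_inputs, example_outputs)  # inputs + outputs -> learned process

# The trained model is most naturally described by what it does (its function),
# not by a hand-written algorithm, which is why functional claim language
# maps onto it so readily.
print(model.predict([[50]]))   # approximately [10.]
```

The learned coefficients stand in for the “process” here, which is exactly the inversion described above and why the specification, not just the figures, needs to explain how that process comes about.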
Some things are relatively similar to software patents. You will still be writing claims for methods, machines, compositions of matter, and articles of manufacture. You will still use figures that are mostly block diagrams and conceptual drawings. Claims will still need to comply with USPTO memos on, for example, eligibility and enablement. And nonobviousness will likely forever and always be a problem for software patents, so pay close attention to the prior art. In some ways, the more things change, the more they stay the same.
Weekly Novelties
As always, here is a weekly roundup of goings-on in the patent world.
Notable news items
An Israeli patent attorney included a moving tribute to victims of Hamas, and his granddaughter, in a patent application (World IP Review)
An interesting analysis of the rise of patent litigation in the ITC system, where exclusion orders come much faster than relief in an Article III court (American Action Forum)
Arizona State University, a long-underrated science university, was ranked among the top-five universities without a medical school for its prolific inventiveness on several dimensions (ASU Press Office)
Latter-day litigation
X Corp v. Adeia Inc, U.S. District Court for the Northern District of California, No. 3:23-cv-06151: Elon Musk’s X Corp, formerly Twitter, has gotten into hot water repeatedly for not paying its bills. This new case involves patent licensing fees, but in a twist, X Corp is the one suing to seek a declaration of noninfringement
Lenovo Inc v. ASUSTeK Computer Inc, U.S. District Court for the Northern District of California, No. 3:23-CV-5892: Lenovo, the leader in the US laptop market, sued Asus alleging infringement of four patents. It is seeking an injunction to prevent the sale of Zenbook laptops
Nimitz Techs. LLC v. CNET Media, Inc., D. Del., 1:21-1247, mem. op. 11/27/23: In a highly contentious and visible case, a patent monetization company is accused by the judge of violating ethics rules around disclosure of its ownership structure
Gripping Gazette entries
US 11777090 B2: A patent from Sumitomo Metal Mining for nonaqueous secondary batteries
US 11823620 B1: Apple’s newest technology to get rid of the notch was just granted a patent
US 11828727 B1: ENDRA Life Sciences received a patent for a thermoacoustic probe design for ultrasound. This is its 39th patent
Eventful expirations
US 6651256 B1: A wearable pillow (only over the head, not full body) for stressful days
US 6651271 B2: A bathtub that uses a foot pedal to control the temperature of the water instead of requiring you to turn a knob (the themes for this week’s eventful expirations are “self care” and “comfort”)
US 6651267 B1: A toilet designed for people who are bedridden, by integrating it into the bed
This 20% number dramatically understates the decrease in software patent allowances. There has been a significant selection effect: the weakest applications are no longer even filed.
If you’re interested, the Patent Office has put out research using AI to measure AI patent trends over time. Which is pretty meta.
It is not clear how enforceable this patent would be. Claim 1 covers the original transformer model, which used encoders and decoders, but contemporary models like GPT and BERT use only one or the other. Google has also had a primarily defensive patent strategy, but other companies with a similar posture, like Amazon, have abandoned those strategies in the past when competitive business needs required it.