What Does AI Mean for PHOSITA?
Persons having ordinary skill in the art are the cornerstone of a 103 nonobviousness determination, but what's ordinary when AI makes everyone extraordinary?
One thing that is not obvious about nonobviousness is what it means and for whom it must be nonobvious. Patent law has responded with a standard that is, admittedly, still somewhat vague: the idea must not have been obvious to a person having ordinary skill in the art, or “PHOSITA,” a term coined by Cyril Soans in 1966. From the earliest development of the PHOSITA standard, “ordinary” has been understood to mean competent, but not gifted. In Hotchkiss v. Greenwood (1851), the Supreme Court colorfully described this person as a “skillful mechanic, not…the inventor,” which the Patent Act of 1952 transformed into the “person having ordinary skill in the art” we know today. Yet despite a string of Supreme Court and Federal Circuit rulings over more than 170 years, the standard remains somewhat unpredictable.
What is clear is that a PHOSITA sits at the norm of a field, not its cutting edge, using the kinds of tools available to an ordinary practitioner. So with the rise of AI, what happens when everyone becomes somewhat extraordinary? With LLMs, the gap between the skillful mechanic and the inventor is shrinking, and the overall bar is rising. There are also potential implications for experts and for courts’ evidentiary standards. So this week at Nonobvious, we’re getting a little technical and asking: does AI change how we decide what’s nonobvious?
Please subscribe! It’s the obvious thing to do.
AI Is Giving People Extraordinary Skill
Today’s definition of a PHOSITA comes from the Supreme Court’s decision in KSR v. Teleflex (2007), which settled a few key points: the test is objective, defined by the scope of the claims and the prior art rather than by the motivation of the inventor; it is measured against a person who is reasonably skilled but not extraordinary; and it endorsed several analytical tools, most controversially “obvious to try.” The productivity gains promised by AI put all of these concepts, and more, into potential disarray.
Consider, for example, the six factors for determining the skill level of a PHOSITA set out in Environmental Designs, Ltd. v. Union Oil Co.: the educational level of the inventor, the types of problems encountered in the art, prior art solutions to those problems, the rapidity with which innovations are made, the sophistication of the technology, and the educational level of active workers in the field. It’s easy to see how AI could change all of these. It is already clear, for instance, that AI shrinks the productivity gap between less skilled and more skilled workers; does that make the educational factors less important? And AI now lets inventors crack previously intractable problems in short timespans, as DeepMind’s AlphaFold did for protein folding—what happens to the rapidity factor when AI-assisted individuals work impossibly fast?
Another major implication of AI is how we determine what was available to an inventor at the time. One of the assumptions of nonobviousness is that the invention cannot be “readily deduced from publicly available material,” per Bonito Boats, Inc. v. Thunder Craft Boats, Inc., 489 U.S. 141, 150 (1989). What follows is one of the most absurd assumptions in patent law: a PHOSITA is generally held to be not too bright (as the KSR court put it, “a person of ordinary creativity, not an automaton”) yet somehow omniscient. Although the Federal Circuit explained in Ruiz v. A.B. Chance Co., 357 F.3d 1270 (Fed. Cir. 2004), that the prior art relevant to a PHOSITA is only art “reasonably pertinent to the particular problem with which the invention was involved,” in practice courts sweep in a very wide scope. AIs today fit this bill. They know nearly everything—some have described LLMs as a compressed version of the Internet—but they are not (yet) considered highly creative or insightful. As one further twist, the larger the company, the better its AI. Who counts as “ordinary” if we decide that AI is part of the toolkit of a PHOSITA?
AI systems are getting better at working alongside inventors, which poses questions for KSR’s “obvious to try” standard. Unpredictability is a key aspect of that test, best expounded in In re O’Farrell, 853 F.2d 894 (Fed. Cir. 1988): “the expectation of success need only be reasonable, not absolute.” The general understanding is that “obvious to try” means drawing on a relatively finite number of potential combinations, any of which has a high chance of success, supporting a finding of obviousness. But what happens when AI gets good at proposing options and stack-ranking them by likelihood of success? DeepMind, for example, recently used a new AI to find 2.2 million potential materials. Courts have not truly addressed the question of computer-aided design, but as inventors become more cyborg-like, presumably PHOSITAs do as well. This is a particularly potent problem for fields like pharmacology and biology, where unpredictability of results is arguably the most important consideration. In recent cases, like Teva Pharmaceuticals, LLC v. Corcept Therapeutics, Inc., No. 21-1360 (Fed. Cir. 2021), it was a deciding factor; that could get more complicated. At the same time, AIs can propose a vast testing space; does that take them out of the realm of “obvious to try,” given that an ordinary person would be physically incapable of evaluating all of those options?
Lastly, one might consider whether an AI could aid courts by acting as a synthetic PHOSITA. Nonobviousness is ultimately a question of law, so expert witnesses are not supposed to opine on the ultimate issue, leaving courts largely on their own. But with AI, courts could feed the relevant factors into different LLMs and see whether they arrive at the same idea as the inventor, and if so with what level of specificity (though they would have to probe the model’s confidence; recall that “obvious to try” leans on “obvious” more than “try” in recent Federal Circuit jurisprudence). As new GPTs come out, one could imagine courts limiting the knowledge cutoff of the LLM they use to test whether the claims were predictable within the scope of the art at a particular time, using prompts to create a PHOSITA of a particular profile, and even gauging creativity by using better or worse versions of GPT. (Imagine: “although this was discoverable by GPT 6.5, it was not for GPT 3.”)
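To make the knowledge-cutoff idea concrete, here is a minimal Python sketch of how a court (or a litigant) might screen candidate models before using one as a synthetic PHOSITA: only a model whose training data ends before the patent’s priority date can opine without hindsight. The model names and cutoff dates below are invented placeholders, not real product data.

```python
from datetime import date

# Hypothetical model catalog: each entry maps an LLM to its training-data
# knowledge cutoff. These names and dates are illustrative only.
MODEL_CUTOFFS = {
    "model-a": date(2021, 9, 1),
    "model-b": date(2023, 4, 1),
    "model-c": date(2024, 6, 1),
}

def eligible_synthetic_phositas(priority_date: date) -> list[str]:
    """Return models whose knowledge cutoff predates the patent's
    priority date, so the model could not have 'seen' the invention."""
    return [name for name, cutoff in sorted(MODEL_CUTOFFS.items())
            if cutoff < priority_date]

# For a claim with a January 15, 2023 priority date, only model-a's
# training data safely predates the invention.
print(eligible_synthetic_phositas(date(2023, 1, 15)))  # → ['model-a']
```

The interesting design question is the comparison operator: a cutoff merely *before* the priority date still risks contamination from pre-filing disclosures by the inventor, so a court might instead require the cutoff to predate the earliest relevant disclosure.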
AI stands to potentially change the way we think about every aspect of patent law, and this is an interesting, though unusual, application of that.
The Federal Circuit refused to grant Apple further relief in its dispute with Masimo now that the ITC has approved its workaround, and it will cease selling Apple Watches with the SpO2 feature (Bloomberg)
Judge Connelly, of the District Court of Delaware, recently defended his ability to compel live testimony in front of the Federal Circuit in a contentious patent financing case (Bloomberg Law)
An argument about when unpublished patents count for purposes of IPRs (IP Watchdog)
Ford abandoned a patent application that would have allowed lenders to remotely repossess a car; sometimes PR concerns trump protectability (The Record)
USPTO issued its new guidelines in light of Amgen v. Sanofi, mostly signaling it is not going to make any material changes (Federal Register)
Sometimes, you get a court case that makes you chuckle. This is one of them: a “single biomolecule” is held to refer to, yes, one biomolecule (Pacific Biosciences v. Personal Genomics (Fed. Cir. 2024))