PatentNext Summary: The Desjardins decision, co-authored by new USPTO Director John Squires, signals a potential shift toward greater patent eligibility for AI and software innovations. By vacating a § 101 rejection and warning that “categorically excluding AI innovations from patent protection in the United States jeopardizes America’s leadership in this critical emerging technology,” the Appeals Review Panel (ARP) emphasized that eligibility should not be used as a catch-all to reject claims better addressed under §§ 102, 103, and 112. For practitioners, the decision highlights the importance of describing concrete technical improvements in the specification, tying those improvements directly to the claim language, and framing claims as technological solutions rather than abstract ideas. This marks a potentially significant recalibration of the USPTO’s approach to AI-related claims under Director Squires’ leadership.

****

On September 26, 2025, the USPTO’s Appeals Review Panel (ARP), led by newly appointed Director John Squires, vacated a sua sponte § 101 rejection in Ex parte Desjardins. This decision, combined with Squires’ issuance of the first patents of his tenure earlier this month, suggests the pendulum may be swinging back toward patent eligibility for AI and software innovations.

The Decision in Brief

The ARP concluded that claims directed to training a machine learning model on multiple tasks, while preserving performance on prior tasks, integrated an abstract idea into a practical application, satisfying Step 2A, Prong Two of the Alice framework. Specifically, the panel credited the claims for improving the functioning of the machine learning model itself, citing reduced storage requirements, lowered system complexity, and the prevention of “catastrophic forgetting.”

Crucially, the ARP warned against overbroad § 101 rejections that risk stifling innovation in key areas of emerging technology:

“Categorically excluding AI innovations from patent protection in the United States jeopardizes America’s leadership in this critical emerging technology.” – Desjardins Decision, p. 9.

The panel criticized prior reasoning that equated all machine learning with unpatentable algorithms on generic computers, emphasizing instead that software-based improvements can constitute technological improvements under precedents like Enfish and McRO.

Back to Basics: §§ 102, 103, and 112

In vacating the § 101 rejection, the ARP underscored that patent law already has the right tools to properly define the limits of protection:

“At the same time, the claims at issue stand rejected under § 103. This case demonstrates that §§ 102, 103 and 112 are the traditional and appropriate tools to limit patent protection to its proper scope. These statutory provisions should be the focus of examination.” – Desjardins Decision, p. 10.

This passage signals that eligibility analysis should not do the heavy lifting of prior art or clarity rejections, potentially narrowing the role of § 101 going forward.

Director Squires and a New Era at the USPTO

Director Squires’ leadership is already being felt. Earlier this month, he issued the first patents of his tenure, one in medical diagnostics and one in distributed ledger technology, two areas that have faced § 101 headwinds in recent years. Combined with Desjardins, these actions reflect a broader policy recalibration toward enabling, rather than constraining, innovation in AI, software, and other cutting-edge fields.

Practical Implications for Patent Practitioners

  • Describe Technical Improvements in the Specification: The ARP relied heavily on specification passages describing performance gains, storage reductions, and training efficiencies. Drafting applications to highlight these improvements provides the foundation for successful Step 2A, Prong Two arguments.
  • Tie Improvements to Claim Language: The panel credited claim elements reciting performance preservation across tasks and reduced complexity, showing that well-drafted claims plus supporting disclosure can survive § 101 scrutiny.
  • Expect More Focus on §§ 102/103/112: With § 101 potentially receding as a catch-all gatekeeper, obviousness and enablement may again become the principal battlegrounds for AI and software claims.
  • Policy Winds Favor Eligibility: Director Squires’ early decisions and public statements suggest a USPTO more open to patenting emerging technologies, provided claims show concrete technical contributions.

Conclusion

Desjardins represents more than a single victory for one applicant. It may signal a USPTO-wide shift under Director Squires: away from categorical § 101 rejections and toward a balanced, innovation-friendly approach rooted in traditional patentability requirements. For AI and software innovators, this could mark the start of a new era of opportunity in securing robust patent protection.

PatentNext Summary: The USPTO issued “Reminders” for examiners in Tech Centers 2100/2600/3600 addressing §101 eligibility for software and Artificial Intelligence (AI) / Machine Learning (ML)-related inventions; while not changing the MPEP, the guidance is meant to sharpen examination practice. It clarifies Step 2A, Prong One by limiting “mental process” to what can be practically performed in the human mind—stating that AI claim limitations not performable mentally are not “mental processes”—and by distinguishing claims that merely involve a judicial exception (e.g., Example 39) from those that recite one (e.g., Example 47). For Step 2A, Prong Two, examiners must evaluate the claim as a whole to identify a practical application, giving weight to meaningful additional limitations and to improvements in computer capabilities or a technical field, even if the improvement is only implicit in the specification. The Reminders caution against oversimplified “apply it” rejections, require a preponderance of evidence for “close call” §101 rejections, and reinforce compact prosecution that fully addresses §§102/103/112 for every claim in the first action.

****

The United States Patent and Trademark Office (USPTO) recently published guidance (dubbed “Reminders”) for patent examiners examining inventions in the Software-related arts, including in the Artificial Intelligence (AI) and Machine Learning technical fields. See USPTO.gov, “Reminders on evaluating subject matter eligibility of claims under 35 U.S.C. 101” (Aug. 4, 2025) (the “Software-related Invention Reminders”). The Software-related Invention Reminders are specifically directed to patent examiners focusing in Technology Centers 2100, 2600, and 3600, which commonly receive Software-related inventions. 

While the Software-related Invention Reminders purportedly do not change current examination practice pursuant to the USPTO’s Manual of Patent Examination Procedure (MPEP), they do serve as important guidance for examiners (and as a tool for practitioners) in these fields.

The following summarizes key highlights of the USPTO Software-related Invention Reminders. 

Considerations under Step 2A, Prong One (“Patent Eligibility Groupings”)

The Software-related Invention Reminders include a first section regarding the so-called Step 2A, Prong One groupings, which refer to the three groupings of “abstract ideas” into which a claim must fall to support a Section 101 rejection. These groupings include: (1) mathematical concepts, (2) certain methods of organizing human activity, and (3) mental processes.

Mental Process Grouping and Software-related Inventions

Examiners commonly invoke the mental process grouping when rejecting software-related inventions. The Software-related Invention Reminders remind examiners that the mental process grouping is “not without limits,” and that examiners are not to expand this grouping to cover claim limitations that “cannot practically be performed in the human mind.” Id. at 2.

In an important section, the Software-related Invention Reminders remind examiners that “a claim does not recite a mental process when it contains limitation(s) that cannot practically be performed in the human mind, for instance when the human mind is not equipped to perform the claim limitation(s).”

With respect to AI-related inventions, the Software-related Invention Reminders forbid examiners from grouping an invention as a “mental process” when claim elements cannot be performed in the human mind: 

“[Claim] limitations that encompass AI in a way that cannot be practically performed in the human mind do not fall within [the mental process] grouping.”

AI-Related Inventions: Distinguishing Claims that Recite a Judicial Exception from Claims that Merely Involve a Judicial Exception

The Software-related Invention Reminders also remind examiners to exercise care to “distinguish claims that recite an exception (which require further eligibility analysis) from claims that merely involve an exception (which are eligible and do not require further eligibility analysis).” Id. at 3 (emphasis added).

The USPTO’s Example 39 (an AI-related example) illustrates a claim that merely involves an abstract idea but does not recite one. Example 39 includes the limitation “training the neural network” but does not include or rely upon any mathematical concepts, calculations, formulas, or equations using mathematical symbols. Id. Thus, Example 39 is patent eligible because it does not recite a mathematical concept, even though it may involve one. Moreover, training a neural network is not something that a human mind can practically perform. For additional analysis of Example 39, see PatentNext: How to Patent an Artificial Intelligence (AI) Invention: Guidance from the U.S. Patent Office (USPTO).

By contrast, USPTO Example 47 (claim 2) specifically recites mathematical calculations performed by specific, known mathematical algorithms, e.g., “training, by the computer, the [Artificial Neural Network] ANN based on the input data and a selected training algorithm to generate a trained ANN, wherein the selected training algorithm includes a backpropagation algorithm and a gradient descent algorithm,” and thus “recites” a judicial exception (namely, a mathematical concept). This leaves the claim vulnerable to further Section 101 analysis (and a Section 101 rejection), which presumably could have been avoided by excluding the mathematics-related language. For additional analysis of Example 47, see PatentNext: The USPTO Issues Guidance on Patenting Artificial Intelligence (AI)-related Inventions per 35 U.S.C. § 101 (Subject Matter Eligibility).

Considerations under Step 2A, Prong Two (identifying a “Practical Application”)

After determining that a claim recites a judicial exception (e.g., an “abstract idea”) in Step 2A, Prong One, examiners then evaluate (under Prong Two) whether the claim as a whole integrates the recited judicial exception into a “practical application,” which can save a claim from being rejected under Section 101. 

Analysis of Claim as a Whole

The Software-related Invention Reminders remind examiners that Step 2A, Prong Two requires consideration of claims “as a whole.” Id. at 3 (original emphasis). This requires identifying the so-called “practical application” in the context of the claim as a whole, rather than evaluating a single feature separately and in isolation (as examiners oftentimes do).

Thus, additional limitations, which could form the practical application, “should not be evaluated in a vacuum, completely separate from the recited judicial exception.” Id. (original emphasis).

[An examiner’s] analysis should take into consideration all the claim limitations and how these limitations interact and impact each other when evaluating whether the exception is integrated into a practical application. 

While an additional limitation (or combination) that merely applies the judicial exception on a generic computer may not render a claim eligible on its own, an additional limitation (or combination) that meaningfully limits the judicial exception can render it eligible. Id.

Improvements (the “Gold Standard”)

The Software-related Invention Reminders also include a section addressing whether the invention recites an improvement to an underlying computing device or technical field. An “improvement”-based argument is often considered the “gold standard,” or otherwise the best way, to establish patent eligibility under Section 101. See, e.g., PatentNext: How to Patent Software Inventions: Show an “Improvement.”

The Software-related Invention Reminders instruct examiners to consult the specification to determine whether the disclosed invention improves an underlying computing device or otherwise a technical field, and to evaluate the claim to ensure it reflects the disclosed improvement. Id. at 4.

Importantly, the Software-related Invention Reminders remind examiners that the specification need not explicitly set forth the improvement, but it must describe the invention such that the improvement would be apparent to one of ordinary skill in the art. Id.

In addition, the claim itself does not need to explicitly recite the improvement described in the specification. Id.

Limitations on the “Apply It” argument in view of Improvements

Examiners can sometimes ignore or otherwise deemphasize improvements recited in claims with the oft-made “apply it” argument – where examiners commonly reject claims that include or otherwise indicate improvements by arguing that such improvements amount to no more than a recitation of “apply it” (or equivalent) on a computer. The Software-related Invention Reminders caution examiners against this:

Examiners are cautioned not to oversimplify claim limitations and expand the application of the ‘apply it’ consideration.

The Software-related Invention Reminders provide several paired considerations for whether the “apply it” argument should or should not be made (summarized below):

  • Supporting an “apply it” argument: The claim recites only an idea of a solution or outcome, i.e., the claim fails to recite details of how a solution to a problem is accomplished. Not supporting: The claim covers a particular solution to a problem or a particular way to achieve a desired outcome.
  • Supporting: The claim invokes computers or other machinery merely as a tool to perform an existing process. Not supporting: The claim purports to improve computer capabilities or to improve an existing technology.
  • Supporting: General application of the judicial exception. Not supporting: Particular application of the judicial exception.

Thus, from the above, the Software-related Invention Reminders guide examiners in considering whether the claimed technology is being used merely as a tool to perform the recited judicial exception (e.g., automating a manual business process) or whether the claim as a whole provides an improvement to technology or a technical field.

Claims that are determined to improve computer capabilities or improve technology or a technical field support a finding that the claim integrates the judicial exception into a practical application or amounts to significantly more than the judicial exception itself.

Additional “Reminders” on Whether to Make a Section 101 Rejection

“Close Calls”

For an application involving a “close call,” the Software-related Invention Reminders “remind” examiners to “only make a rejection when it is more likely than not (i.e., more than 50%) that the claim is ineligible under 35 U.S.C. 101.” Id. at 5 (citing MPEP 706(I) and stating that “the standard to be applied in all cases is the ‘preponderance of the evidence’ test.”). 

An examiner should not make a rejection simply because of uncertainty about the claim’s eligibility under Section 101. Id.

Compact Prosecution requires analysis of “every claim” 

Finally, the Software-related Invention Reminders also call on examiners to engage in “compact prosecution.” This calls for examiners to provide a complete examination for “every claim” under each of the other patentability requirements (e.g., 35 U.S.C. 102, 103, 112). Id. at 5. “Examiners should state all non-cumulative reasons and bases for rejecting claims in the first Office action.” Id. 

Arguably this requires examiners to rely on the other patentability requirements for examination, rather than using Section 101 as a “crutch” or otherwise shorthand for rejecting claims, especially for a first office action. 

****

Subscribe to get updates to this post or to receive future posts from PatentNext. Start a discussion or reach out to the author, Ryan Phelan, at rphelan@marshallip.com or 312-474-6607. Connect with or follow Ryan on LinkedIn.

PatentNext Summary: In a precedential decision, the U.S. Court of Appeals for the Federal Circuit reversed a district court’s §101 dismissal of patent claims relating to an automated system for dumbbell weight selection and adjustment, finding that the claims were not abstract under Alice step one and therefore are patent-eligible. The Federal Circuit held that, contrary to the district court’s conclusion, the claims included meaningful limitations which provide enough specificity and structure to satisfy § 101 even though the limitations were allegedly found in the prior art. The Federal Circuit re-emphasized the importance of considering patent claims in their entirety as a whole, which the district court improperly failed to do.

****

The U.S. Court of Appeals for the Federal Circuit reversed a decision of the U.S. District Court for the District of Utah, which had invalidated a set of patent claims directed to an automated system for dumbbell weight selection and adjustment, and remanded the case for further proceedings. PowerBlock Holdings, Inc. v. iFit, Inc., No. 24-1177 (Fed. Cir. 2025).

In the Utah district court, PowerBlock accused iFit of infringing U.S. Patent No. 7,578,771 (the “’771 Patent”) titled “Weight Selection and Adjustment System for Selectorized Dumbbells including Motorized Selector Positioning,” which “relates generally to exercise equipment” and more particularly “to selectorized dumbbells and to an overall, integrated system for selecting and adjusting the weight of a selectorized dumbbell or a pair of selectorized dumbbells.” ’771 Patent at 1:15–19.

The district court held that the claims fail the two-step test and are patent ineligible because, at Alice step one, claims 1–18 and 20 of the ’771 Patent were directed to the abstract idea of automated weight stacking and “implemented using generic components requiring performance of the same basic process,” and because, at Alice step two, claims 1–18 and 20 did “not add significantly more than the abstract idea of the end-result of an automated selectorized dumbbell.” Id. at *9 (internal citations omitted).

PowerBlock appealed the decision to the Federal Circuit.

The Federal Circuit reviewed claim 1 as a representative claim:

   1.  A weight selection and adjustment system for a selectorized dumbbell, which comprises:

   (a) a selectorized dumbbell, which comprises:

(i) a stack of nested left weight plates and a stack of nested right weight plates;

(ii) a handle having a left end and a right end; and

(iii) a movable selector having a plurality of different adjustment positions in which the selector may be disposed, wherein the selector is configured to couple selected numbers of left weight plates to the left end of the handle and selected numbers of right weight plates to the right end of the handle with the selected numbers of coupled weight plates differing depending upon the adjustment position in which the selector is disposed, thereby allowing a user to select for use a desired exercise weight to be provided by the selectorized dumbbell; and

   (b) an electric motor that is operatively connected to the selector at least whenever a weight adjustment operation takes place, wherein the electric motor when energized from a source of electric power physically moves the selector into the adjustment position corresponding to the desired exercise weight that was selected for use by the user.

In its Alice step one analysis, the Federal Circuit determined that the district court incorrectly concluded that claim 1 is “directed towards the general end of automated weight stacking” because claim 1 “seek[s] to claim systems comprising weight selection and adjustment systems consisting of the two or three ‘generic’ components, rather than any particular system or method of selectorized weight stacking” (Id. at *6), thereby “giv[ing] rise to a preemption problem.” Id. at *7.

Specifically, the Federal Circuit found that the district court had erroneously ignored limitations required by claim 1 when it did not consider the limitations reciting “an electric motor, coupled to a selector movable into different adjustment positions, and energizing the motor to physically move the selector via the coupling between the motor and the selector.” Id. at *8–9. According to the Federal Circuit, the district court was wrong to ignore such limitations “merely because [the ignored limitations] can be found in the prior art.” Id. at *11.

As such, the district court did not properly consider, under Alice step one, the claims “in their entirety to ascertain whether their character as a whole is directed to excluded subject matter.” Id. at *10 (internal citations omitted).

Further, the Federal Circuit warned “parties and tribunals not to conflate the separate novelty and obviousness inquiries under 35 U.S.C. § 102 and 103, respectively, with the step one inquiry under § 101.” Id. at n.3.

****

This precedential Federal Circuit decision is promising for those pursuing patents directed to mechanical automation systems. Such practitioners should attempt to draft claims that provide physical structure and interaction while avoiding functional language that could be construed as abstract. Even if some of the physical components are known, their claimed combination and interaction may still yield patent-eligible subject matter. We note, though, that pure software-based automation may face tougher scrutiny.

Additionally, when reviewing office actions or during litigation, practitioners can utilize this decision to push back on §101 rejections that ignore the claim as a whole or conflate subject matter eligibility with novelty/obviousness.

Subscribe to get updates to this post or to receive future posts from PatentNext. Start a discussion or reach out to the author, Lilian Y. Ficht, at lficht@marshallip.com or 312-423-3445. Connect with or follow Lilian on LinkedIn.

PatentNext Summary: Recent rulings from the Northern District of California in Bartz v. Anthropic and Kadrey v. Meta provide the first substantive guidance on how the fair use doctrine applies to AI training, particularly for large language models (LLMs). Both courts found that using lawfully obtained copyrighted books for LLM training can qualify as “highly transformative” and support a fair use defense, while the use of pirated works may result in liability—especially if market harm is demonstrated. These cases highlight the growing legal emphasis on the source of training data and its market impact, offering a framework for AI developers to mitigate risk. The decisions underscore the need for lawful data acquisition, internal guardrails to prevent regurgitation of copyrighted content, and contractual protections for authors and data owners amid an evolving copyright landscape.

****

Recent rulings by two judges in the U.S. District Court for the Northern District of California offer the first merits-based guidance on how “fair use” applies to artificial intelligence (AI) training, and in particular, large language model (LLM) training. These decisions are Bartz v. Anthropic, 2025 WL 1741691 (N.D. Cal. June 23, 2025) (referred to herein as “Anthropic”) and Kadrey v. Meta Platforms, 2025 WL 1752484 (N.D. Cal. June 25, 2025) (referred to herein as “Meta”).

The courts found that using lawfully obtained copyrighted texts for training LLMs can be considered “highly transformative” and can fall under the copyright defense of “fair use,” but that using pirated materials could lead to liability, particularly if the use affects the market for the original works. These rulings shift the legal focus toward the source of training data and whether the AI model’s output causes market harm, setting the stage for future litigation around this issue.

The article below provides case overviews of the Anthropic and Meta cases, explores the four factors of the fair use copyright defense in view of LLM training for each case, and concludes with related implications and takeaways for AI model developers, copyright owners, and AI model end users.

Case Overviews

Bartz v. Anthropic PBC 

In Bartz v. Anthropic PBC, the court addressed the complex intersection between copyright law and artificial intelligence training. The plaintiffs — authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson, along with their affiliated companies — brought suit against Anthropic PBC, an AI firm behind the Claude language model, alleging that Anthropic had unlawfully copied their copyrighted books. Anthropic assembled a massive digital library by both purchasing and pirating millions of books, which it then used to train large language models (LLMs), including Claude. 

At issue was whether Anthropic’s various uses of the copyrighted works — including training LLMs, digitizing print copies, using digital pirated copies, and maintaining a central research “library” (a digital database of the copyrighted books) — qualified as “fair use” under 17 U.S.C. § 107. The court evaluated each use against the four statutory fair use factors and found that while some uses were transformative and thus lawful, others — particularly the use of pirated copies to build a permanent library — were not protected under the fair use doctrine.

Kadrey v. Meta Platforms Inc.

In Kadrey v. Meta Platforms Inc., thirteen prominent authors, including Sarah Silverman and Junot Díaz, filed suit against Meta for allegedly using their copyrighted works—downloaded from unauthorized “shadow libraries”—to train Meta’s large language models (LLMs), particularly the Llama series.

The plaintiffs argued that Meta’s conduct could not qualify as fair use, focusing on harms to the market for their works and the unauthorized nature of Meta’s data acquisition. In contrast, Meta contended that its actions constituted fair use as a matter of law, emphasizing the transformative purpose of LLM training. The court granted summary judgment in favor of Meta, noting the plaintiffs’ failure to adequately substantiate the core theory that Meta’s use would cause significant market harm. However, the ruling applies narrowly to these plaintiffs and does not resolve broader questions about the legality of using copyrighted works in AI training.

Copyright “Fair Use” (Four Factor Analysis by the Courts)

Both the Anthropic court and the Meta court considered the “fair use” of the copyrighted works. Fair use is a defense to allegations of copyright infringement, typically raised in U.S. copyright disputes and analyzed under a four-factor test:

[T]he fair use of a copyrighted work … for purposes such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research, is not an infringement of copyright. In determining whether the use made of a work in any particular case is a fair use the factors to be considered shall include[:]

1. The purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;

2. The nature of the copyrighted work;

3. The amount and substantiality of the portion used in relation to the copyrighted work as a whole; and

4. The effect of the use upon the potential market for or value of the copyrighted work.

Anthropic at *6.

The following sections consider each of these four factors for both the Anthropic and Meta cases. In addition, the following sections focus on at least two stages of the AI model development and training process where AI model developers typically face copyright infringement allegations: first, when the AI model developer stores the copyrighted works in computer memory for the purpose of training; and second, when the trained AI model produces an output that is the same as, or substantially similar to, the original copyrighted work or a derivative thereof. For the second stage, a court could focus on whether the output of a given AI model was significantly transformative as opposed to a copy or a derivative work of the original copyrighted material. An AI model can be probed via prompt engineering to determine whether it will output substantially similar works or derivative works from the original copyrighted material. See Getty Images v. Stability AI, Case 1:23-cv-00135 (D. Del. Mar. 29, 2023) (Amended Complaint) (Dkt. 13).

Regarding the first of these stages, and as discussed further below, both the Anthropic and Meta courts were clear that training an AI model with copyrighted works was sufficiently transformative to support a fair use defense. In fact, at least according to these two cases, this is one of the most important factors, if not the most important factor, for finding fair use.

Regarding the second of these stages, in both the Anthropic and Meta cases, the plaintiff-authors failed to allege that the respective LLM models produced outputs that were the same as, or substantially similar to, their works, and the courts were emphatic in highlighting this failure. That is, had the authors provided additional evidence and arguments regarding a same or substantially similar output from the accused models, the respective courts indicated that they would have readily (and eagerly) addressed the issue. Because the authors failed to raise it, neither court ruled on the issue, instead highlighting the authors’ failure to do so. We can expect future plaintiffs to address this second stage more thoroughly.

1. The Purpose and Character of the Use

This factor examines whether the use was transformative and whether it served a commercial or nonprofit purpose.

Bartz v. Anthropic PBC 

Regarding training LLMs, the court concluded that Anthropic’s use of the plaintiffs’ works to train LLMs was “spectacularly transformative.” Training involved complex processes like tokenization and statistical modeling to teach the LLM to generate new, human-like text. Importantly, the plaintiffs did not allege that the trained Claude system reproduced their works or produced substantially similar outputs (the hallmark of a copyright infringement claim). The court likened this to a person reading and learning from a book to become a better writer — a transformative use that did not usurp the market for the original works.

Regarding purchased print-to-digital book conversion, Anthropic also purchased millions of print books, scanned them, and stored digital copies in its central library. Because each scanned copy replaced its purchased print counterpart, and the digital format merely facilitated internal storage and searchability, the court deemed such use (i.e., a format change of the original purchased works) to weigh in favor of fair use under the first factor.

In stark contrast, regarding pirated digital book copies, the court found that Anthropic’s use of pirated copies to build a permanent, general-purpose library was not transformative. These copies were acquired to avoid “legal/practice/business slog” and were kept indefinitely, even when not used for training. The court emphasized that fair use does not grant AI developers blanket permission to steal and store works simply because some might later be used in transformative ways.

Kadrey v. Meta Platforms Inc.

The first factor—whether the use is transformative and/or commercial—strongly favored Meta. The court found that Meta’s use of copyrighted books to train its LLMs served a transformative purpose distinct from the original works. While the plaintiffs’ books were intended for consumption as literary or educational texts, Meta used them to extract linguistic patterns and structures to power a tool capable of responding to diverse user prompts.

Even though Meta’s ultimate goal was commercial, potentially generating up to $1.4 trillion in revenue over a decade, the transformative nature of its use was decisive. The court noted that copyright law generally gives more leeway to commercial uses when the new work adds something significantly new. The court also rejected arguments equating LLM training with simple repackaging or copying, noting that Meta’s models do not meaningfully output the plaintiffs’ original texts. In particular, Meta’s LLM was found incapable of reproducing any significant portion of the plaintiffs’ copyrighted books, even under conditions designed to provoke memorization. For example, the court noted that Meta’s expert employed an “adversarial prompting” technique specifically intended to elicit material from training data, yet no model produced more than 50 tokens (words and punctuation) from the plaintiffs’ works. The plaintiffs’ own expert achieved similar results in only 60% of tests using the most responsive Llama variant, and further confirmed that Llama was unable to reproduce any substantial portion of the books. Such findings supported the conclusion that Llama could not be used to read or meaningfully access the plaintiffs’ copyrighted works.
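The experts’ testing described above reduces to a measurement: how long a verbatim run of tokens will the model reproduce from its training text? The opinion does not detail the experts’ actual methodology, so the following is only a simplified, hypothetical sketch of such a measurement (using whitespace splitting as a stand-in for real tokenization; all names are illustrative):

```python
def longest_common_token_run(output_tokens, source_tokens):
    """Length of the longest contiguous run of tokens that the model
    output reproduces verbatim from the source text."""
    best = 0
    for i in range(len(output_tokens)):
        for j in range(len(source_tokens)):
            k = 0
            # Extend the match as far as the two sequences agree.
            while (i + k < len(output_tokens) and j + k < len(source_tokens)
                   and output_tokens[i + k] == source_tokens[j + k]):
                k += 1
            best = max(best, k)
    return best

# Toy example: the "model output" repeats a six-token run from the source.
source = "it was the best of times it was the worst of times".split()
output = "the model said it was the best of times and stopped".split()

print(longest_common_token_run(output, source))  # prints 6
```

A threshold such as the 50-token figure noted by the court could then be applied to the measured run length to decide whether an output counts as meaningful regurgitation.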

Further, Meta’s controversial use of shadow libraries, while potentially relevant to bad faith, did not outweigh the fundamentally different and transformative nature of the use.

2. The Nature of the Copyrighted Work

This factor considers the creativity and factual nature of the original works.

Bartz v. Anthropic PBC 

All of the plaintiffs’ books — both fiction and nonfiction — were published and expressive. The court acknowledged that expressive, creative works are closer to the “core” of copyright protection. Because Anthropic specifically valued these works for their expressive qualities in both training and building its library, the court found this factor weighed against fair use across all types of uses — even for those ultimately deemed lawful under other factors.

Kadrey v. Meta Platforms Inc.

This factor favored the plaintiffs. Their works—novels, memoirs, and plays—are highly creative and fall within the heartland of copyright protection. However, courts have historically afforded this factor limited weight, especially when the works have already been published. The court noted that while Meta may not have used the books for their creative expression directly, the statistical patterns it sought to extract were themselves a product of expressive choices like word order, syntax, and style—all protectable elements.

Nonetheless, the court did not view this factor as significantly altering the outcome of the fair use analysis, particularly in light of the highly transformative use under Factor One.

3. The Amount and Substantiality of the Portion Used

Here, the courts assessed whether the amount copied was reasonable in relation to the use.

Bartz v. Anthropic PBC 

Regarding training LLMs, although Anthropic copied the entirety of plaintiffs’ works for training, the court found this was reasonable given the monumental volume of text required for training effective LLMs. The absence of any public-facing reproduction of plaintiffs’ works further supported the finding of fair use.

Regarding purchased Print-to-Digital book conversion, because the digital versions replaced the destroyed print copies and were not shared externally, the court held that copying the entire work was reasonable and aligned with the intended internal use.

In contrast, regarding pirated digital book copies, the court found that copying entire works from pirate sites — particularly to build a centralized research library of indefinite use — was not reasonable. The purpose extended beyond any specific transformative use, and the court noted that almost any level of unauthorized copying would be excessive under these circumstances.

Kadrey v. Meta Platforms Inc.

Although Meta copied the plaintiffs’ books in their entirety, the court held that this factor favored Meta due to the necessity of full-text ingestion for the transformative purpose of LLM training. The extent of the copying was deemed reasonable given the technical requirements of training such models. The court emphasized that the key consideration was not the sheer amount of copying, but whether the amount used was excessive in light of the use’s purpose.

Given that LLMs perform better with more high-quality data and that partial books would not serve the training purpose effectively, copying entire works was justified and did not weigh against fair use.

4. The Effect of the Use Upon the Market

This factor evaluates whether the use harms the market for or value of the original work. This factor is typically the most critical in a fair use analysis and posed the greatest challenge for the plaintiffs.

Bartz v. Anthropic PBC 

Regarding training LLMs, because there was no allegation that Claude’s outputs were infringing or substituted for the plaintiffs’ books, the court found no adverse market effect. Even potential market competition from LLM-generated works was deemed irrelevant under copyright law, which does not protect authors from generic competition.

Regarding purchased Print-to-Digital book conversion, although Anthropic might have foregone purchasing digital copies, the court found no evidence of redistribution or market usurpation. The internal use of a legally purchased print copy — albeit in a different format — did not harm the existing market in a way actionable under copyright law.

Regarding pirated digital book copies, this use had a direct and deleterious effect on the market. By copying works it could have lawfully purchased, Anthropic displaced market demand on a copy-for-copy basis. The court emphasized that permitting such behavior would effectively destroy the publishing industry, as it would incentivize theft in the name of downstream transformative use.

Kadrey v. Meta Platforms Inc.

The court identified three potential types of market harm: (1) regurgitation of the original works, (2) loss of licensing revenue for AI training, and (3) market dilution through proliferation of similar AI-generated content.

The first two arguments failed due to insufficient evidence. Llama was not capable of meaningfully regurgitating the plaintiffs’ works, and courts do not recognize a right to licensing revenue for transformative uses. While the third argument—market dilution—was conceptually strong and could be highly relevant in future cases, the plaintiffs failed to plead or support it with evidence. Thus, they could not create a triable issue of fact on this point.

The court stressed that while market dilution from AI-generated content may be a valid concern under copyright law, it must be substantiated with evidence. As such, Factor Four also favored Meta.

Courts’ Conclusions and Takeaways

Bartz v. Anthropic PBC 

The Anthropic court’s overall analysis reflected a nuanced application of the fair use doctrine. It recognized fair use for the training of LLMs using copyrighted books, which was considered transformative. So was the scanning of purchased print copies for internal digital storage and use.

However, the Anthropic court denied the fair use defense for the use of pirated copies to build a central research library, a use that was not considered transformative and failed all four fair use factors.

Accordingly, the Anthropic court granted summary judgment in favor of Anthropic on the training and format-conversion uses, but denied it as to the pirated library copies. The case is set to proceed to trial to determine liability and damages for the unauthorized acquisition and retention of those pirated materials.

This decision reinforces that while AI development may qualify for fair use under certain conditions, courts will scrutinize the methods and intentions behind data acquisition — especially where piracy is involved. AI innovators must balance transformative use with lawful sourcing to stay within the bounds of copyright law.

Kadrey v. Meta Platforms Inc.

The ruling in Kadrey v. Meta Platforms Inc. offers a nuanced but limited precedent. While Meta prevailed on summary judgment, the court’s decision hinged on the plaintiffs’ failure to develop and present a compelling case on the most critical issue—market harm. The decision does not validate Meta’s use of copyrighted works in AI training as lawful per se; rather, it underscores the importance of presenting the right evidence under the fair use framework.

This case may serve as a roadmap for future litigants—highlighting the potential viability of market dilution arguments and signaling that courts remain receptive to fair use challenges in the context of transformative AI technologies, so long as they are properly developed and supported.

Also, deciding the second of the two cases, the Meta court voiced its differences and concerns with the Anthropic court, stating that the Anthropic court “focused heavily on the transformative nature of generative AI while brushing aside concerns about the harm it can inflict on the market for the works it gets trained on.” Id. at *11. The Meta court took issue with the Anthropic court’s reasoning that “[s]uch harm would be no different … than the harm caused by using the works for ‘training schoolchildren to write well,’ which could ‘result in an explosion of competing works.’” Instead, the Meta court was sympathetic to the plaintiff-authors’ concern regarding market harm: “when it comes to market effects, using books to teach children to write is not remotely like using books to create a product that a single individual could employ to generate countless competing works with a miniscule fraction of the time and creativity it would otherwise take. This inapt analogy is not a basis for blowing off the most important factor in the fair use analysis.” Id.

Conclusion

The court decisions involving Meta and Anthropic mark the beginning of what is expected to be a wave of legal rulings addressing copyright issues in generative AI. While these initial cases centered on large language models (LLMs) trained on books, future outcomes may vary depending on the nature of the training data and output. Notably, cases involving image-based content, like Getty v. Stability AI, or code-based output, such as Doe 1 v. GitHub, Inc., No. 4:22-cv-06823-JST (N.D. Cal.), may yield different legal analyses, highlighting the evolving complexity of copyright law as applied to various AI-generated modalities.

For example, such cases also explore an important question: what type of relationship should copyright holders have with AI model developers? This question matters not only to authors of books, articles, and other written materials, but also to companies whose intellectual property rests on computer software code, as most companies’ IP does. For example, if an AI tool is used to create valuable source code for a company’s product or service, who owns that source code (if anyone, under the authorship requirements of U.S. copyright law)? And is that source code subject to potential copyright claims for reproducing the same or substantially similar code on which the AI tool was trained?

Implications for Artificial Intelligence (AI) Model Developers. For AI model developers and related stakeholders—especially tech platforms, cloud providers, publishers, and data brokers—these decisions can signal a need for immediate action. Organizations should consider auditing their training datasets and vendor agreements to ensure all source materials are lawfully obtained, carefully document any market impact, and update internal policies accordingly. Legal and technical leaders should consider collaborating closely to align data practices with emerging legal expectations. For example, one approach, drawn from the Anthropic decision, is to digitize legally purchased physical books and then destroy the originals. To further reduce legal exposure, LLM developers can implement output guardrails that prevent or minimize the reproduction of copyrighted content.

Implications for Copyright Owners. Copyright owners may want to keep sensitive data a trade secret. If desired, a copyright owner seeking to license its private data for training purposes may want to consider doing so under a license agreement that includes privacy restrictions pursuant to a non-disclosure agreement (NDA) to prevent the data from leaking to the public. One of the main issues for the copyright owners in the Anthropic and Meta cases was that the copyrighted works were public, such that the authors could not control their use for training. This will always be the case for books and other copyrighted works intended for public consumption. But for trade secret data, such as proprietary datasets, more control can be exercised to monetize valuable datasets for AI training.

Implications for AI Model Users. Companies utilizing large language models (LLMs) can take key measures when contracting with LLM developers. First, they should consider auditing the training data by requesting a comprehensive list of datasets used to train or fine-tune the model, ensuring no pirated content from shadow libraries is included. Second, they could also consider verifying that the LLM incorporates effective guardrails to prevent the output of copyrighted material, with internal testing by creative staff to confirm their effectiveness. Finally, companies should consider negotiating strong indemnification provisions to protect against potential copyright infringement claims, recognizing that while current litigation has focused on developers, users may still face some legal exposure.

****

We can expect appeals from these cases and the appellate courts to take up these issues and provide guidelines. However, this could take several years, and these issues will likely find their way to the Supreme Court for ultimate resolution. This assumes, of course, that Congress does not act first to provide a statutory framework. 

****

Subscribe to get updates to this post or to receive future posts from PatentNext. Start a discussion or reach out to the author, Ryan Phelan, at rphelan@marshallip.com or 312-474-6607. Connect with or follow Ryan on LinkedIn.

PatentNext Summary: In Brightex Bio-Photonics, LLC v. L’Oreal USA, Inc., the U.S. District Court for the Northern District of California invalidated patent claims relating to AI-driven cosmetic recommendations, finding them directed to an abstract idea under 35 U.S.C. § 101. The court held that while the specification referenced artificial intelligence, the claims themselves failed to include any specific AI implementation or technological improvement. Brightex argued that elements such as a “photo guide” improved facial data acquisition, but the court found these features to be conventional and lacking inventive contribution. The decision highlights the importance of drafting software and AI-related claims that incorporate technical features demonstrating improvements to underlying technology, serving as a reminder for practitioners to align with established patent eligibility standards.

****

The U.S. District Court for the Northern District of California (N.D. Cal.) recently invalidated a set of patent claims allegedly claiming artificial intelligence (AI) technology. Brightex Bio-Photonics, LLC v. L’Oreal USA, Inc., 2025 U.S.P.Q.2d 412 (N.D. Cal. 2025).

Brightex had accused L’Oreal of infringing U.S. Patent No. 9,842,358 (the “’358 Patent”), titled “Method for Providing Personalized Recommendations,” in the field of cosmetology and specifically related to “the cosmetic improvement of a person’s face.” Id. at 2 (citing ’358 Patent at 1:8-10).

The court reviewed Claim 16 as a representative claim:

      16. A computerized method for providing prioritized skin treatment recommendations to a user, comprising:

receiving from an electronic device image data of a user’s face, wherein the electronic device comprises a camera and a display, wherein the image data is obtained via said camera, and wherein said electronic device presents on the display a photo guide indicating how the user’s face should be positioned with respect to the camera when the image data is obtained;

transforming via a computer said image data via image processing into measurements in order to identify at least two skin characteristics of the user from the received image data;

calculating a severity rating for each of the at least two user skin characteristics by:

accessing stored population information comprising measurements for at least two skin characteristics of a population of the same type as the at least two skin characteristics of the user, wherein each of the measurements for the at least two population skin characteristics comprises a mean value and a standard deviation value;

comparing each of the measurements of the at least two user skin characteristics to the measurements of same type population skin characteristic;

determining by how much each of the measurements of the at least two user skin characteristics deviates from the mean value and the standard deviation value of the same type population skin characteristic;

assigning higher severity rating to the user skin characteristic which deviates furthest than at least one standard deviation of the same type population skin characteristic; and

for a subset of the user skin characteristics with the highest severity rating, selecting one or more skin treatment recommendations from stored skin treatment recommendations based on the subset of the user skin characteristic with the highest severity rating; and

providing to the electronic device the selected one or more skin treatment recommendations.
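Stripped of its computing-device recitations, claim 16 describes a statistical ranking routine: measure each skin characteristic’s deviation from the population mean in standard-deviation units, treat the largest deviations (beyond one standard deviation) as most severe, and map the worst characteristics to stored recommendations. The patent discloses no particular algorithm, so the following is only a minimal illustrative sketch with hypothetical names and data:

```python
def severity_ratings(user, population):
    """Rate each user skin characteristic by its deviation from the
    population mean, measured in standard-deviation units."""
    ratings = {}
    for name, value in user.items():
        mean, std = population[name]             # stored population statistics
        ratings[name] = abs(value - mean) / std  # deviation in std-dev units
    return ratings

def recommend(user, population, treatments, top_n=1):
    """Select stored treatment recommendations for the characteristics with
    the highest severity ratings that deviate beyond one standard deviation."""
    ratings = severity_ratings(user, population)
    worst = sorted(ratings, key=ratings.get, reverse=True)[:top_n]
    return [treatments[name] for name in worst if ratings[name] > 1.0]

# Hypothetical stored data: (mean, standard deviation) per characteristic.
population = {"wrinkles": (3.0, 1.0), "spots": (2.0, 0.5)}
treatments = {"wrinkles": "retinol serum", "spots": "vitamin C cream"}
user = {"wrinkles": 3.5, "spots": 4.0}

print(recommend(user, population, treatments))  # prints ['vitamin C cream']
```

As the sketch suggests, the recited steps reduce to a conventional statistical comparison, which is consistent with the court’s characterization of the claims as reciting an abstract idea carried out on generic computer components.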

In its complaint, Brightex included a section describing the invention, including its “advanced and innovative technology relating to the recognition and computerized analysis of facial features.” Id. at 8.

The complaint also described how the invention used Artificial Intelligence (AI) with “commercially available smart phone” technology “in order to accurately assess skin condition to recommend the correct cosmetics and skincare treatments.” Id.

L’Oreal filed a motion to dismiss the complaint (pursuant to Fed. R. Civ. P. 12(b)(6)), arguing that the ’358 patent was invalid as directed to an abstract idea without an inventive concept under 35 U.S.C. § 101. See Alice Corp. Pty. Ltd. v. CLS Bank Int’l, 573 U.S. 208 (2014). In particular, L’Oreal argued that the claims were directed to “the abstract idea of recommending treatments based on the severity of a person’s skin characteristics and rely solely on generic computer components to carry out that idea.” Brightex Bio-Photonics, 2025 U.S.P.Q.2d 412 at 10.

Brightex countered by arguing that a “photo guide” (as recited in the claims) is “used in a process specifically designed to achieve improved facial data acquisition and subsequently an improved identification of skin defects and the severity of those defects.” Id. at 13.

Thus, according to Brightex, the claim was sufficiently technical and should at least be allowed to proceed beyond the pleadings phase of the case.

The Northern District court disagreed. While the patent’s specification described AI features related to the invention, the failure to incorporate those features into the claims doomed them. In addition, the patentee had failed to describe how the claimed photo guide provided an improvement to the underlying device – instead, the photo guide was claimed as used in a prior art manner, e.g.:

There is nothing in the claims or specification that suggests the “photo guide” is directed at solving any technological problem or doing anything more than ensuring the user’s face is positioned so as to obtain a usable digital image.

Id. at 27.

Accordingly, the Northern District court invalidated the claims as abstract and subsequently dismissed the allegations regarding the ’358 patent from the case.

The Northern District court’s treatment of the claims comes as no surprise. As I regularly discuss on PatentNext, as well as practice with respect to the patents I prepare for my clients, a patent drafter should incorporate technical features (e.g., such as AI features) into the claims themselves that demonstrate an improvement to the underlying device. The Federal Circuit has repeatedly identified this as one of three hallmarks for developing a strong software-based patent in the U.S. See PatentNext: How to Patent Software Inventions: Show an “Improvement.” Without this approach, a patent application can not only be subjected to a Section 101 rejection during prosecution, but a later-issued patent can also be invalidated for the same reasons, as was the case in Brightex Bio-Photonics.

Patent practitioners would be well served to prepare patent applications in accordance with this guidance, and this case serves as a cautionary tale for failure to do so.

****


PatentNext Summary: In two recent decisions, the Federal Circuit reaffirmed that merely applying artificial intelligence or digital techniques to a specific “field of use” does not satisfy patent eligibility under 35 U.S.C. § 101. In Recentive Analytics v. Fox Corp., claims directed to AI-assisted television scheduling were deemed abstract for lacking inventive implementation. Similarly, in Longitude Licensing Ltd. v. Google LLC, claims involving digital image correction were invalidated because they recited only functional, results-oriented language without explaining how the technical improvement was achieved. These rulings emphasize that to be patent-eligible, claims must include specific, technical details that demonstrate an actual improvement over prior art—not just a novel application of generic technology.

****

In a recent decision, the Federal Circuit found patent claims ineligible that claimed machine learning but otherwise applied generically to a “Field-of-Use,” i.e., to automatically scheduling regional television broadcasts. See Recentive Analytics, Inc. v. Fox Corp., No. 2023-2437 (Fed. Cir. Apr. 18, 2025). In that case, the Federal Circuit rejected the idea that applying AI to a novel domain—such as television scheduling—could rescue the claims. According to the Federal Circuit, a so-called “field-of-use” limitation is insufficient to render an abstract idea patent eligible. Merely moving generic AI into a different industry does not convert it into an inventive concept under 35 U.S.C. § 101 (patent eligibility). For additional discussion of Recentive, see PatentNext: Federal Circuit finds Generic AI Claims to be Abstract.

In a more recent decision, the Federal Circuit once again found generic “field-of-use” claims invalid under Section 101. See Longitude Licensing Ltd. v. Google LLC, 2025 U.S.P.Q.2d 690 (Fed. Cir. Apr. 30, 2025). In the Longitude Licensing case, the Federal Circuit found invalid claims directed to performing digital image correction techniques via a computer. The patent specifications described identifying the subject, or “main object,” of an image and adjusting the main object image data by using “correction conditions,” which include any kind of “statistical values and color values” that correspond to the “properties” of the main object.

Claim 32 of one of the patents is representative and is reproduced below:

    32. An image processing method comprising:

determining the main object image data corresponding to the main object characterizing the image;

acquiring the properties of the determined main object image data;

acquiring correction conditions corresponding to the properties that have been acquired; and

adjusting the picture quality of the main object image data using the acquired correction conditions;

wherein each of the operations of the image processing method is executed by an integrated circuit.

The district court had found that claim 32 was abstract under Section 101 because claim 32 was generic, functional, and “ends-oriented.”

The Federal Circuit affirmed. In particular, the Federal Circuit cited its analysis in Recentive, finding claim 32 abstract because it generically recited the use of new data (e.g., the correspondence between the main object data and correction conditions as recited in claim 32) in the field of image processing but failed to disclose how to implement the concept. Like the claims in the Recentive decision, claim 32 in Longitude Licensing was a generic “field of use” claim where neither the claims nor the specifications describe how any improvement was accomplished. Claim 32 was abstract because it was “framed entirely in functional, results-oriented terms.” 

The Federal Circuit refused to save claim 32 by importing technical disclosure from the specification into the claim, which might otherwise have supplied the degree of technical specificity found in other Federal Circuit decisions upholding properly specific claims. See McRO, Inc. v. Bandai Namco Games America Inc., 837 F.3d 1299, 1313 (Fed. Cir. 2016) (as cited by the Federal Circuit).

Conclusion

The Longitude Licensing decision provides a further lesson for patent practitioners for drafting a patent application in a manner that adheres to the Federal Circuit’s three-part framework for demonstrating a technical “improvement,” which, if implemented correctly, should include (1) a description of the improvement in the patent specification; (2) a description of how the improvement differs from, and overcomes the prior art; and (3) inclusion of at least some aspect of the improvement in the claims. Claim 32 failed at least the third part of this test, and it was fatal for the plaintiff’s case. For more details on claiming an improvement, see PatentNext: How to Patent Software Inventions: Show an “Improvement.” 

****



PatentNext Summary: The Federal Circuit’s decision in Recentive Analytics, Inc. v. Fox Corp. found that applying generic machine learning techniques to a new environment, without a specific technological improvement, is patent-ineligible under 35 U.S.C. § 101. The court emphasized that claims must articulate concrete technological advancements rather than merely applying established methods to different domains. The ruling offers key guidance for patent practitioners, highlighting the need for detailed descriptions of technical innovation and cautioning against relying on field-of-use limitations or functional claiming. As AI technologies continue to advance, careful patent drafting that focuses on novel implementations will be critical for surviving eligibility challenges.

****

The Federal Circuit’s recent decision in Recentive Analytics, Inc. v. Fox Corp., No. 2023-2437 (Fed. Cir. Apr. 18, 2025), marks another significant moment in the evolving intersection of artificial intelligence (AI) and patent law. The ruling affirmed the district court’s dismissal of claims under 35 U.S.C. § 101, holding that applying generic machine learning to a new data environment—without claiming a specific improvement to the technology itself—constitutes an abstract idea and is therefore patent-ineligible.

This case is notable not just for its holding, but also for the clarity it offers on how courts are likely to assess the eligibility of AI-driven innovations going forward. For legal practitioners and applicants alike, the decision offers both a cautionary tale and a guidepost on how to craft applications that can survive § 101 scrutiny.

On a lighter note, the Federal Circuit did recognize the newness and importance of machine learning, and provided (in its conclusion) a statement qualifying its decision to generic machine learning patent claims:

Machine learning is a burgeoning and increasingly important field and may lead to patent-eligible improvements in technology. Today, we hold only that patents that do no more than claim the application of generic machine learning to new data environments, without disclosing improvements to the machine learning models to be applied, are patent ineligible under § 101.

Background: The Patents and the Invention
Recentive Analytics (“Recentive”), whose machine learning technology has been used by the National Football League (NFL) to set its schedule, alleged that Fox used infringing software to schedule its regional television broadcasts, including NFL games.

Recentive owned four patents across two families:

  1. Machine Learning Training Patents (U.S. Patent Nos. 11,386,367 and 11,537,960) – focused on dynamically generating optimized schedules for live television broadcasts using machine learning models trained on historical data.
  2. Network Map Patents (U.S. Patent Nos. 10,911,811 and 10,958,957) – addressed the generation of “network maps” that determine how television programs are displayed on specific channels in designated geographic markets.

According to Recentive, the traditional manual methods used by broadcasters were crude and incapable of responding to real-time changes in viewer preferences. Its technology purportedly provided a solution through dynamic, machine-learning-based scheduling and map generation.

After being sued for infringement, Fox challenged the validity of the patents under § 101. The district court agreed and dismissed the claims, finding them directed to abstract ideas implemented with generic machine learning techniques.

The Federal Circuit’s Analysis
The Federal Circuit affirmed the lower court’s ruling, reinforcing its approach to § 101 jurisprudence with respect to AI-related claims. Judge Dyk, writing for the panel and noting that the case presented a question of first impression, approached the central issue as follows:

“Whether claims that do no more than apply established methods of machine learning to a new data environment are patent eligible.”

The panel answered this question, finding that such claims were not patent eligible. The panel emphasized that merely using AI or machine learning in a conventional way is not sufficient to convert an otherwise abstract idea into patent-eligible subject matter.

The Federal Circuit found fault with Recentive’s patents for the following reasons. 

1. Generic Use of Machine Learning
The claims did not seek to protect a new machine learning algorithm. Rather, they involved applying conventional machine learning models—described broadly as “any suitable machine learning technique”—to an existing problem in broadcast scheduling. The specifications and claims did not articulate any modification or advancement in the underlying technology. As a result, the use of machine learning was deemed “generic,” and therefore abstract.

2. Lack of Technological Improvement
Recentive argued that their inventions offered a technical solution to a technical problem by dynamically generating schedules and maps. However, the court found that features like iterative training and dynamic data updates are inherent to machine learning itself and do not reflect any technological advancement. Without details about how these outcomes were achieved through innovation, the claims fell short.

3. Insufficient Implementation Details
Critically, the Federal Circuit emphasized that the patents failed to provide implementation details that would distinguish the claims from a mere directive to apply machine learning. The absence of delineated steps or specific algorithms meant that the claims amounted to aspirational goals rather than technical instructions.

4. Field-of-Use Limitations
The court rejected the idea that applying AI to a novel domain—such as television scheduling—could rescue the claims. A field-of-use limitation is insufficient to render an abstract idea patent eligible. Merely moving generic AI into a different industry does not convert it into an inventive concept under § 101.

5. Speed and Efficiency Are Not Enough
Finally, the court dismissed arguments based on performance improvements. Speed and efficiency gains, without a corresponding technological breakthrough, do not transform an abstract idea into patent-eligible subject matter.

Comparison to Past Precedents
Recentive sought to analogize its claims to precedents where software patents were upheld:

  • In Enfish, LLC v. Microsoft Corp., claims were found eligible because they recited a specific improvement to computer database functionality.
  • In McRO, Inc. v. Bandai Namco Games America Inc., the use of rule-based automation for lip-syncing yielded a technological improvement.
  • In Koninklijke KPN N.V. v. Gemalto M2M GmbH, the claims addressed error detection in data transmission—a concrete technical advance.

The Federal Circuit rejected these comparisons, stating that Recentive’s patents lacked the detailed implementations and clear technological benefits present in those cases.

Instead, the court likened the patents to those in Electric Power Group, LLC v. Alstom S.A. and SAP Am., Inc. v. InvestPic, LLC, where the claims involved collecting and analyzing data without describing how the methods improved technology.

Alice Step Two: The Inventive Concept
Under Alice Corp. v. CLS Bank International, step two of the eligibility test asks whether the claims contain an “inventive concept” sufficient to transform the abstract idea into a patent-eligible application.

Recentive pointed to the use of real-time data, dynamic outputs, and machine learning as its inventive concept. The court was not persuaded. These features were considered part and parcel of what machine learning already does. Since there was nothing unconventional about their use, the claims failed Alice step two.

Implications for AI and Software Patents
This decision illustrates a broader trend in AI patent jurisprudence: courts remain skeptical of claims that rely on generic use of machine learning without articulating technological innovation. Importantly, the court left the door open for AI patents that improve the underlying algorithms or computer functionality—but it signaled that “do it using AI” will not suffice. This is not surprising given that the Supreme Court’s Alice decision held that generic claims reciting, in effect, “do it on a computer” are also not patent-eligible.

Attorneys drafting AI-related patent applications must therefore be vigilant in distinguishing true technological advancements from applications of known techniques.

Best Practices: Drafting Patent Applications to Survive § 101 Challenges
The Recentive decision underscores the importance of meticulous drafting when seeking patent protection for AI-driven innovations. Below are some best practices to improve the chances of success:

1. Claim a Specific Technological Improvement
Avoid merely reciting the use of machine learning or AI. Instead, clearly identify a novel technical feature or architecture. Demonstrate how the invention changes the way a computer operates or how the algorithm improves performance.

2. Describe the Innovation in Detail
Include specific implementation steps, data flows, and algorithmic mechanisms. Vague language such as “any suitable machine learning model” invites eligibility challenges. Provide concrete examples and explain how the result is achieved.

3. Differentiate from Conventional Methods
Show how the invention departs from prior art or conventional techniques. Highlight not only what the invention does but how it accomplishes it in a novel and non-obvious way.

4. Avoid Field-of-Use Limitations
Ensure the inventive concept is not limited to the application of generic technology in a new context. Field-specific applications are insufficient unless coupled with a unique technical implementation.

5. Include Technical Benefits in the Specification
Tie the benefits of the invention—such as reduced computational load, increased accuracy, or novel data processing—to concrete technical improvements. Avoid framing benefits solely in terms of business advantages or efficiency gains.

6. Claim Structurally—Not Functionally
Whenever possible, claim system components, data structures, and processes in structural or algorithmic terms rather than abstract functional language. Courts are more likely to uphold claims that describe specific arrangements and processes.

7. Use Dependent Claims Strategically
Include dependent claims that recite specific machine learning models, feature extraction methods, or training protocols. This helps in narrowing the scope of the claims while preserving eligibility under § 101.

Conclusion
The Recentive decision serves as a timely reminder that AI-driven innovations must be carefully framed to withstand eligibility scrutiny. Generic applications of machine learning are unlikely to survive § 101 challenges unless tied to specific, concrete technological improvements. As AI continues to evolve, so too must the strategies employed to protect it through intellectual property.

Patent practitioners must adapt by focusing not only on the novelty and utility of an invention, but on articulating the technical “how” in a way that the courts will find both meaningful and eligible.

****

Subscribe to get updates to this post or to receive future posts from PatentNext. Start a discussion or reach out to the author, Ryan Phelan, at rphelan@marshallip.com or 312-474-6607. Connect with or follow Ryan on LinkedIn.

Recent headlines suggest that prominent technology CEOs are tossing tepid water onto the quantum computing narrative, leading to a sell-off of quantum computing stocks in January 2025. However, quantum computing CEOs disagree, asserting that commercial quantum computers are already here and delivering value to clients. While the timeline for widespread quantum utility remains debated, one thing is undeniable: innovation in quantum computing is accelerating, and the evidence is clearly visible in the patent landscape. Understanding these patent trends offers a valuable, albeit early, glimpse into the technologies being developed, the companies leading the charge, and the potential challenges along the way. Just as provisional patent applications filed as far back as the launch of the iPhone in 2007 foreshadowed Apple’s release of the Vision Pro headset in February of 2024, the current quantum computing patent activity hints at the future direction of this transformative field. See U.S. Patent App. No. 15/972,985, incorporating by reference U.S. App. No. 60/927,624 and U.S. App. No. 61/010,126.

What is a Quantum Computer? Moving Beyond Bits to Qubits

To understand the difference between a classical computer and its quantum counterpart, it may be useful to think of a classical computer like a light switch. The light switch can be either ON, representing a 1, or OFF, representing a 0. These 0s and 1s, called bits, are the foundation of all the computations that modern classical computers perform.

Now, imagine a dimmer switch instead of a simple on/off switch. A quantum computer leverages qubits that, unlike bits, are not strictly 0 or 1. Rather, qubits can exist in a state of superposition. Think of it this way: a qubit can be both 0 and 1 at the same time, or anywhere in between, until it is physically measured. This “both at once” state is the superposition, and it dramatically expands the possibilities for computation. Qubits can also be linked together through entanglement, a quantum phenomenon where their “fates” are intertwined. As an example, imagine these entangled qubits are actually two coins that are linked. Even if you separate them by a vast distance, flipping one coin and reading whether it is heads or tails instantly determines the outcome of the other coin. Thus their “fates” are intertwined and they act as a single system, no matter how far apart they are. This interconnectedness further amplifies their computational power.
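The superposition and entanglement described above can be made concrete with a small numerical sketch. The following uses plain NumPy (no quantum SDK) to build a superposed qubit and an entangled two-qubit Bell state; the amplitudes and probabilities follow standard quantum mechanics, and the variable names are purely illustrative:

```python
import numpy as np

# Single-qubit basis states |0> and |1> as two-element complex vectors.
zero = np.array([1, 0], dtype=complex)
one = np.array([0, 1], dtype=complex)

# A Hadamard gate puts |0> into an equal superposition of |0> and |1>.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
superposed = H @ zero

# Measurement probabilities are the squared amplitudes: a 50/50 split,
# the "both at once" state described above.
probs = np.abs(superposed) ** 2  # [0.5, 0.5]

# Entanglement: the Bell state (|00> + |11>) / sqrt(2) over two qubits.
bell = (np.kron(zero, zero) + np.kron(one, one)) / np.sqrt(2)

# Only outcomes 00 and 11 have nonzero probability -- measuring one
# qubit fixes the other, like the linked coins in the analogy.
bell_probs = np.abs(bell) ** 2  # [0.5, 0, 0, 0.5]
```

Note how the two-qubit state never assigns probability to 01 or 10: the qubits behave as a single system regardless of how far apart they are.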

Because of superposition and entanglement, quantum computers can explore vast computational spaces far more efficiently than classical computers for certain types of problems. While classical computers tend to tackle problems step-by-step, quantum computers can explore many possibilities simultaneously. This gives them the potential to solve problems that are infeasible for even the most powerful supercomputers.

Overall Quantum Computing Patent Filing Trends in the US

Graph | Quantum computing patent filings in the US between 2002 - 2024

Figure 1: Quantum Computing Patent Filings in the US Over Time

Figure 1 illustrates the overall trend of quantum computing patent filings in the United States. The number of filings started to increase dramatically in the mid-2010s. While the most recent data points for 2023 and 2024 may appear to show a decrease, it is crucial to interpret them in light of the roughly 18-month publication delay at the USPTO: filings for the most recent years are still underrepresented in this data, as many applications filed in 2023, and most filed in 2024, have yet to be published. Regardless, the overall trend clearly indicates a robust and rapidly expanding field of innovation within quantum computing.

Top Quantum Computing Modalities: Patent Filing Trends

While the overall quantum computing patent landscape shows strong growth, examining trends within specific physical realization methods, or “modalities,” provides a more granular understanding of the innovation landscape. Here, we delve into the patent filing trends for some of the top modalities, based on our analysis of patent activity: Superconducting, Annealing, Topological, Photonic, Trapped Ion, and Quantum Dot.

Rejection Type Analysis: Navigating Patent Prosecution Challenges

Beyond filing trends, understanding the types of rejections faced by quantum computing patent applications is crucial for strategic patent prosecution. Analyzing rejection data provides insights into the patentability hurdles specific to this technology area. Here, we examine the distribution of rejection types for quantum computing patents in the US.

Graphs | Patent filing trends for six quantum computing modalities (Figures 2-7)

Examining patent filing trends across six quantum computing modalities (Figures 2-7) reveals a nuanced picture of innovation within the field, marked by a noticeable difference in scale. Superconducting (Figure 2) and Quantum Annealing (Figure 3) stand out with substantially higher patent filing volumes, with y-axes scaled up to 80, indicating a significantly greater level of patenting activity than the other modalities, whose y-axes are scaled up to only 15. Both Superconducting and Quantum Annealing demonstrate robust and sustained upward trajectories, suggesting consistent and major investment in these areas. In contrast, Topological (Figure 4), Photonic (Figure 5), Trapped Ion (Figure 6), and Quantum Dot (Figure 7) quantum computing exhibit considerably lower patent filing volumes. These modalities show more gradual filing patterns: Photonic and Trapped Ion display steady, albeit moderate, growth, while Topological and Quantum Dot are characterized by lower overall patent activity. This disparity in scale may reflect the relative maturity, investment levels, and perceived near-term commercial viability of Superconducting and Annealing technologies compared to the other modalities.

Overall Rejection Type Distribution

Bar chart | Distribution of rejection types for quantum computing patents

Figure 8: Distribution of Rejection Types for Quantum Computing Patents

Figure 8 presents the overall distribution of rejection types for quantum computing patent applications. As anticipated in many technology fields, 35 U.S.C. § 103 rejections based on obviousness are the most frequent, accounting for approximately 30% of all rejections. However, a significant portion of rejections also fall under 35 U.S.C. § 101 concerning subject matter eligibility, representing approximately 15% of rejections. The USPTO’s interpretation of § 101, particularly in the context of abstract ideas and laws of nature, can pose challenges for quantum inventions that may involve algorithms, mathematical methods, or fundamental quantum principles. 35 U.S.C. § 102 rejections for anticipation account for approximately 20% of rejections, indicating that a substantial number of quantum patent applications are being rejected based on prior art that anticipates the claimed invention. 35 U.S.C. § 112(b) rejections for indefiniteness also account for approximately 20%, suggesting challenges in clearly and precisely defining the scope of quantum inventions in patent claims.

PTAB Case Study: Ex parte Cao – A Victory on Written Description and Subject Matter Eligibility

A recent Patent Trial and Appeal Board (PTAB) decision in Ex parte Cao (Appeal No. 2024-002159) illustrates the challenges and nuances of patent prosecution in quantum computing, particularly concerning 35 U.S.C. § 101 and § 112(a).

In Ex parte Cao, the applicant appealed a Final Rejection that included both § 112(a) Written Description and § 101 Subject Matter Eligibility rejections. The invention related to a hybrid quantum-classical computer system designed for solving linear systems of equations. The proposed system combined classical and quantum computers to leverage their respective strengths in tackling complex mathematical problems. The claims focused on a method and system for preparing a specific “quantum state” that approximated the solution, utilizing a “cleverly designed objective function.”

The Examiner argued that the specification lacked adequate written description for the broad claim term “generating an objective function that depends on . . .” under § 112(a), asserting that the specification provided only limited examples and did not sufficiently describe the genus of objective functions claimed. Furthermore, the Examiner contended under § 101 that the claims were directed to an abstract idea – a mathematical method for solving linear equations – and lacked the requisite “significantly more” to establish patent eligibility, even with the inclusion of quantum and classical computers in the claims.

In a significant win for the applicant, the PTAB reversed the Examiner’s rejections on both grounds.

1. Written Description – Specification Examples Can Be Key for Genus Claims:

The PTAB overturned the § 112(a) rejection, finding the specification did adequately describe the claimed invention. The PTAB’s key reasoning included:

  • Specification Provided Examples: The specification detailed specific “objective functions” and provided “specific implementation examples.”
  • Functional Characteristics Sufficient: While the claims used functional language (“generating an objective function that depends on . . .”), the PTAB found that the specification, by providing examples and defining the characteristics of a suitable objective function, sufficiently conveyed to a person having ordinary skill in the art (PHOSITA) that the inventor possessed the claimed genus.
  • Distinguishing Vasudevan Software: The PTAB distinguished the case from precedent like Vasudevan Software, Inc. v. MicroStrategy, Inc., where claims lacked specification support. In Ex parte Cao, the claims were tied to specific elements described in the specification.

When drafting claims with broad, functional limitations, particularly in complex technologies like quantum computing, robust specification support is essential. Practitioners should not merely repeat claim language in the specification; instead, they should provide concrete examples and clearly describe the characteristics and functionality of the claimed features. This can be sufficient to establish written description, even for genus claims.

2. Subject Matter Eligibility (§ 101) – Focus on Technological Improvement and Practical Application:

The PTAB also reversed the § 101 rejection, finding the claims were not directed to an abstract idea. The PTAB’s reasoning emphasized demonstrating a technological improvement and practical application in computer-implemented inventions:

  • Quantum Computer as More Than a Generic Tool: The PTAB rejected the Examiner’s view of the quantum computer as simply a generic tool for mathematical calculations. They recognized that the inclusion of a “quantum computer, controlling a plurality of qubits . . . to prepare a quantum state” was not just “recitation of gathering data.”
  • Integration into Practical Application: The PTAB found this element “represents the focus of the invention and integrates the recited abstract idea into a practical application.”
  • Technology Improvement – Enabling Noisy Quantum Computers: The PTAB agreed with the Applicant that the invention provided a “technology improvement” by “enabling noisy quantum computers, which have limited circuit depth, to practically solve linear systems.” They cited the specification’s description of prior art limitations and the invention’s solution.

To overcome § 101 rejections, especially in software and computer-related inventions, practitioners should clearly articulate and emphasize the technological improvement and practical application provided by the invention. By showing how the invention improves the technology itself, solves a technical problem, or provides a tangible benefit in a practical field, practitioners can position claims to survive eligibility scrutiny.

Implications for Patent Attorneys:

  • Detailed Specification is Paramount: Ex parte Cao underscores the critical importance of a well-drafted specification, rich with examples and detailed descriptions, especially when claiming complex technologies.
  • Focus on Technological Advancement: When facing § 101 rejections, frame your arguments around the technological improvement and practical application of the invention. Highlight how it solves a real-world problem and advances the state of the art.
  • PTAB Reversals are Possible: Even in complex cases with challenging rejections, a well-reasoned appeal brief, focusing on the legal principles and supported by the specification, can lead to a successful PTAB reversal.

While quantum computing may seem esoteric, the principles illustrated in Ex parte Cao are applicable and relevant to patent attorneys in various fields. By focusing on detailed specification support and clearly articulating the technological advancements of your client’s inventions, you can significantly increase your chances of overcoming Examiner rejections and securing valuable patent protection.

Data Source and Methodology

Please note that the charts and related information in this article were generated using information provided courtesy of Juristat. The patent data was obtained using custom keyword searches in the Juristat patent analytics platform.

Overall Quantum Computing Trends: The overall quantum computing patent filing trends were generated using the search query: ”quantum computer”|”quantum computing”|”qubit”, where | represents an OR operator.

Modality-Specific Trends: The patent filing trends for each of the six quantum computing modalities (Superconducting, Annealing, Topological, Photonic, Trapped Ion, and Quantum Dot) were generated using modality-specific keyword search queries. These queries included combinations of terms related to each modality, such as qubit types, technology names, and associated terminology.

Search Fields: Searches were conducted within the Title, Abstract, and Claims fields of patent applications.
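The query logic described above can be sketched as a simple boolean filter in Python. This is a hypothetical illustration of the OR-query semantics, not the Juristat platform’s actual syntax or matching rules, and the application records are invented for the example:

```python
# Terms joined by the OR operator in the search query above.
QUERY_TERMS = ("quantum computer", "quantum computing", "qubit")

# Fields searched, per the methodology: Title, Abstract, and Claims.
SEARCH_FIELDS = ("title", "abstract", "claims")

def matches_query(application: dict) -> bool:
    """Return True if any query term appears in any searched field
    (case-insensitive substring match)."""
    text = " ".join(application.get(f, "") for f in SEARCH_FIELDS).lower()
    return any(term in text for term in QUERY_TERMS)

# Two hypothetical application records: only the first mentions a query term.
applications = [
    {"title": "Superconducting qubit readout circuit", "abstract": "", "claims": ""},
    {"title": "Machine learning scheduler", "abstract": "", "claims": ""},
]
hits = [a for a in applications if matches_query(a)]
```

A real analytics platform would add stemming, phrase handling, and field weighting, but the OR logic reduces to this kind of any-term match.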

Rejection Type and Tech. Center Breakdown by Modality Appendix

The following table provides a breakdown of rejection types by modality, offering a more detailed view of the patent prosecution challenges for each technology.

Bar Chart | Rejection type by modality appendix.

Appendix Figure 9: Rejection Type Breakdown by Modality

While the overall rejection distribution provides a general overview, examining rejection types by modality reveals further nuances. Some interesting findings:

  • Superconducting Quantum Computing patents tend to face a higher proportion of 102 (Anticipation) and 103 (Obviousness) rejections compared to 101 rejections. This might suggest that for this more mature modality, the focus of patent examination is more on novelty and nonobviousness over prior art, rather than fundamental subject matter eligibility.
  • Quantum Annealing patents, in contrast, exhibit a notably higher percentage of 101 (Subject Matter Eligibility) rejections. This is likely due to the nature of annealing inventions, which often involve algorithms, optimization methods, and system architectures that may be scrutinized for abstractness under § 101.
  • Topological Quantum Computing patents show a significantly elevated percentage of 112(a) rejections (Written Description and Enablement). This highlights the challenges in adequately describing and enabling these complex, cutting-edge inventions in patent applications, likely due to the theoretical and nascent stage of the technology.
  • Trapped Ion Quantum Computing patents display a higher percentage of 112(b) rejections (Definiteness), suggesting difficulties in clearly defining the scope of claims related to intricate ion trap systems and control methods.

In a recent PTABWatch article titled “PTAB Provides Some Clarity on Artificial intelligence (AI) Obviousness in IPR decision,” the PTAB’s approach to evaluating obviousness in AI-related patents is examined.  The article discusses the case Tesla, Inc. v. Autonomous Devices, LLC, where the PTAB invalidated all challenged claims of U.S. Patent Number 11,055,583, which pertained to an AI system for autonomous device operation.  The decision offers valuable insights into how prior art is assessed in the context of AI innovations.  Read the full article on PTABWatch.

Agentic AI is transforming artificial intelligence by enabling systems to act independently, making decisions and solving problems autonomously across various industries. Its potential rapid development poses unique challenges for intellectual property protection, requiring innovative strategies to ensure these advancements are effectively safeguarded within the evolving IP landscape.

Introduction

Last year, we explored how Multimodal AI, integrating multiple sensory modalities, continues to revolutionize human-machine interaction and spark discussions on its implications. This year, the focus shifts to Agentic AI—systems capable of autonomous decision-making, goal-setting, and action without human intervention.

Building on Multimodal AI’s ability to interpret diverse inputs, Agentic AI represents a leap toward proactive, independent systems. From adaptive robots to proactive software agents, its potential to transform industries is immense and raises critical questions about intellectual property.

This post explores the foundational technologies of Agentic AI and examines its patent implications, focusing on how the innovations driving these systems can be effectively protected.

What is Agentic AI?

Agentic AI describes advanced artificial intelligence systems capable of operating with a high degree of autonomy. These systems are designed to independently make decisions, set objectives, and take actions to achieve predefined or dynamically determined goals. Unlike traditional AI, which typically functions as a reactive tool responding to specific inputs, Agentic AI leverages technologies such as reinforcement learning, advanced neural networks, and dynamic planning algorithms to proactively solve problems and adapt to complex, evolving environments.

To illustrate the distinction, consider Multimodal AI as a skilled interpreter capable of seamlessly integrating and understanding diverse types of inputs, such as text, images, and audio. In contrast, Agentic AI is akin to an autonomous executive, capable not only of interpreting information but also of strategizing, prioritizing, and taking initiative to achieve desired outcomes without requiring constant guidance. This evolution from passive responsiveness to active, goal-driven behavior underscores the transformative potential of Agentic AI in revolutionizing industries and solving real-world challenges.

Real-world applications of semi-autonomous AI are already making a significant impact across various industries, providing a glimpse into the potential of Agentic AI. For example, in logistics, AI-powered systems currently optimize supply chain operations by dynamically rerouting shipments in real time to mitigate delays caused by traffic or weather disruptions. In healthcare, AI tools analyze patient medical histories and laboratory results to recommend adjustments to care plans, supporting more personalized and effective treatment. In finance, algorithmic trading systems monitor market trends, identify opportunities, and execute trades with minimal human oversight, all while adapting to shifting market conditions within predefined parameters.

Hypothetically, Agentic AI could take these advancements further. In logistics, it might autonomously manage end-to-end supply chain operations, proactively negotiating contracts with suppliers and optimizing inventory in response to anticipated market trends. In healthcare, Agentic AI could monitor patient data in real time, independently coordinating with medical teams and adjusting treatments based on evolving conditions, such as the early detection of complications. In finance, it could act as a fully autonomous investment manager, dynamically reallocating portfolios, mitigating risks, and pursuing long-term growth strategies without the need for human intervention.

Advanced AI models, such as OpenAI’s GPT-4, illustrate how current technologies can process diverse datasets and support complex tasks, laying the groundwork for the development of truly agentic systems. These emerging capabilities showcase both the present utility and the future potential of AI systems capable of achieving unprecedented autonomy and flexibility.

Patent Implications of Agentic AI

The emergence of Agentic AI presents unique challenges in defining and protecting intellectual property, particularly in the United States, where subject matter eligibility remains an evolving issue.

Under 35 U.S.C. § 101, AI-related inventions often face scrutiny as potentially abstract ideas. To secure patent protection, applicants should demonstrate that their innovations result in a tangible technical improvement. Patent claims should be carefully drafted to emphasize how the invention enhances the functionality or efficiency of a system, addresses a specific technical problem, or produces a concrete application. For example, claims specific to Agentic AI could explicitly recite technical details such as:

  • novel algorithms enabling dynamic goal-setting and decision-making by the AI system;
  • unique methods of integrating hardware and software to facilitate real-time autonomous adaptations in response to environmental changes; or
  • application-specific innovations that enhance the AI’s ability to independently optimize complex workflows, resulting in measurable improvements in system efficiency or user outcomes.

Patent specifications should also clearly articulate how these advancements address technical challenges, such as mitigating unintended behavior in autonomous systems or improving the explainability of Agentic AI’s decision-making processes, and how they provide advancements over prior systems, including non-agentic systems.

Conclusion

Agentic AI represents a transformative advancement in artificial intelligence, empowering systems to autonomously address complex challenges in various industries. Its integration is poised to drive unparalleled efficiency and foster groundbreaking innovation.

The patenting of Agentic AI technologies, such as training methods, model architectures, and application-specific solutions, is crucial for safeguarding these advancements. By crafting claims that emphasize technical improvements, innovators can ensure robust protection, enabling the continued development of this transformative technology.

****

Subscribe to get updates to this post or to receive future posts from PatentNext. Start a discussion or reach out to the author, Matt Carey, at mcarey@marshallip.com (Tel: 312-474-9581). Connect with or follow Matt on LinkedIn.