PatentNext Summary: Announcing the IPO's AI Patenting Handbook V3.0, the newly updated third edition of its practical guide for attorneys working with AI-related inventions and technologies. It offers a clear framework for understanding modern AI (including foundation models and generative AI), drafting and prosecuting stronger AI patent applications, and navigating enforcement, global practice, and emerging AI inventorship and governance issues, and it is designed as a day-to-day reference for in-house and outside counsel.

****

I am excited to announce the publication of the third edition (version 3.0) of the Intellectual Property Owners Association (IPO)'s Artificial Intelligence (AI) Patenting Handbook (the "AI Patenting Handbook").

The AI Patenting Handbook is a substantially expanded third edition of the IPO’s practitioner-focused guide to drafting, prosecuting, and enforcing patents on AI-related technologies. Prepared by members of IPO’s Software Related Inventions and AI & Other Emerging Technologies Committees, the handbook marries technical explanations of modern AI (including generative and agentic systems) with granular U.S. patent practice guidance, updated empirical §101 studies, and comparative insights from the EPO and key Asian and European jurisdictions. It also tackles hot topic issues such as AI inventorship and proposed legislation, aiming to give in-house and outside counsel a coherent playbook for protecting AI innovations in a rapidly evolving legal and policy landscape.

I had the honor of participating as an author alongside fellow IPO members of the Software Related Inventions Committee and the Artificial Intelligence (AI) & Other Emerging Technologies Committee.

A brief summary of the handbook follows.

AI Patenting Handbook 3.0

The AI Patenting Handbook may be found here and covers various topics regarding Artificial Intelligence (AI) and patenting AI inventions. These include: 

I. AI Definitions and Technology Overview

This section creates a common vocabulary for AI patent practice by categorizing AI inventions, defining core technical concepts, and describing major AI architectures and modalities—from traditional machine learning through foundation models, generative AI, and agentic AI. It explains how different types of AI inventions (e.g., improvements to core AI, applications of AI, and inventions made using AI tools) can trigger different treatment at the USPTO and in the courts, and it walks through the “AI tech stack” and a typical model-development lifecycle so that legal and technical teams can speak the same language.

Key points:

  • Three AI invention categories: Distinguishes (1) improvements to core AI technology (architectures, training methods, hardware accelerators), (2) applications of AI in domain-specific systems, and (3) inventions conceived with the aid of AI tools, including human–AI collaboration and AI-only conception.

  • Shared terminology for ML/AI practice: Defines foundational terms such as architecture, ANN, bias, data labels, explainability, features, fine-tuning, hallucination, inference, loss functions, and training, all framed for later use in drafting and prosecution strategy.

  • Modern AI types and building blocks: Introduces concepts including agentic AI, attention mechanisms, CNNs, diffusion models, embeddings, foundation models, generative AI, LLMs, NLP, RNNs/LSTMs, reinforcement learning (including RLHF), RAG, supervised and unsupervised learning, and transformers, showing how they fit into the broader AI tech stack.

  • Model development and tech stack awareness: Outlines a typical ML lifecycle from data collection and labeling through training, evaluation, deployment, and inference, positioning this lifecycle as the backbone for later drafting checklists and interview questions.
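
For readers less familiar with that lifecycle, the following minimal sketch (illustrative only and not drawn from the handbook; it assumes the widely used scikit-learn library and an arbitrary bundled dataset) shows where the data, training, evaluation, and inference stages sit in code:

    # Illustrative toy example only -- not from the handbook.
    # Assumes the scikit-learn library; dataset and model choices are arbitrary.
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # 1. Data collection and labeling (a bundled labeled dataset stands in for both steps)
    X, y = load_iris(return_X_y=True)

    # 2. Hold out data so evaluation uses examples never seen during training
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

    # 3. Training: fit the model's parameters to the labeled training data
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)

    # 4. Evaluation: measure performance on the held-out data before deployment
    print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))

    # 5. Deployment and inference: the trained model predicts labels for new, unlabeled inputs
    new_measurement = [[5.1, 3.5, 1.4, 0.2]]
    print("predicted class:", model.predict(new_measurement))

Even in this toy form, the sketch shows why the handbook's interview checklists probe each stage separately: the data preparation, the training configuration, and the deployed inference behavior can each be the locus of the claimed invention.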

II. Drafting AI Patent Applications

The second section translates the technical foundation into a practical drafting framework, focusing on how to conduct AI-specific invention disclosure interviews, how to claim AI inventions in a way that addresses §101 and §112, and how to build a specification that supports those claims. It emphasizes anchoring AI claims in concrete technical improvements—such as reduced latency, improved resource use, or enhanced robustness—rather than generic references to “machine learning,” and it underscores that traditional computer-implemented-invention principles still govern AI claims, even as examiners and courts scrutinize them through an AI-specific lens.

Key points:

  • Structured AI invention disclosure interviews: Provides detailed question sets for “setting the context” (where the invention sits in the AI tech stack), identifying the technical problem, surfacing technical advantages, and understanding data collection, preprocessing, post-processing, system architecture, training, inference, and how inferences are leveraged downstream.
  • Early alignment with §101 and prior art: Encourages framing the invention around specific technical challenges and improvements to support novelty/obviousness arguments and to show a “practical application” or technical improvement that can carry §101 eligibility.
  • Training vs. inference claims: Discusses tradeoffs of claiming training processes (often helpful for eligibility but harder to detect/enforce) versus inference or system-level claims, and urges practitioners to understand the full training pipeline, including loss functions, optimization, hyperparameters, and staged training (e.g., tuning, pruning, refinement); a simplified code sketch of these pipeline elements appears after this list.
  • Claim drafting under §101 and §112: Stresses that merely reciting AI buzzwords is insufficient; claims should highlight specific improvements to AI models or computing performance and be backed by a specification that adequately describes the model, its configuration, and how it achieves the asserted benefits, in light of enablement and written-description constraints.
  • Specification depth and structure: Recommends organizing the specification across multiple “levels of description” (e.g., system, model, training/inference, data pipelines) and explicitly tying problem/solution narratives to technical metrics such as reduced compute, better accuracy, or improved device operation, to support both claim scope and eligibility. 
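
As referenced above, the following minimal sketch (illustrative only, not taken from the handbook, and assuming the PyTorch library with arbitrary toy data) shows where the loss function, optimizer, and hyperparameters appear in a training pipeline, and where training ends and inference begins:

    # Illustrative toy example only -- not from the handbook.
    # Assumes the PyTorch library; the model and data are arbitrary placeholders.
    import torch
    import torch.nn as nn

    # Hyperparameters: tuning choices like these are often where a claimed improvement lives
    learning_rate = 0.01
    epochs = 200

    # Synthetic labeled data: learn y = 2x + 1 from noisy samples
    X = torch.linspace(-1, 1, 64).unsqueeze(1)
    y = 2 * X + 1 + 0.1 * torch.randn_like(X)

    model = nn.Linear(1, 1)    # the model architecture (trivially simple here)
    loss_fn = nn.MSELoss()     # the loss function the training process minimizes
    optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)  # the optimization method

    # Training stage: iteratively adjust the model's parameters to reduce the loss
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        optimizer.step()

    # Inference stage: apply the trained model to a new input (no gradients, no further learning)
    with torch.no_grad():
        print("prediction for x = 0.5:", model(torch.tensor([[0.5]])).item())

A sketch like this also makes the enforcement tradeoff concrete: the training loop typically runs privately on the developer's own systems, while only the inference stage produces behavior that a patentee can readily observe.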

III. Prosecution and Enforcement

The third section turns to how AI patents fare at the USPTO and in the courts, combining doctrinal analysis with empirical data from 2023 and 2025 §101 studies. It outlines evidence-based strategies for responding to common AI-related rejections (including mental process, organizing human activity, and mathematical concept categorizations) and distills trends from PTAB, Federal Circuit, and district court decisions, all with an eye toward how to draft and amend claims that will survive eligibility, §112, and enforcement challenges. It closes by highlighting "detectability" as a practical constraint when enforcing AI patents, especially where training activity or internal model details are opaque to patentees.

Key points:

  • Three core §101 response strategies: Recommends (1) arguing that the claim is not directed to a judicial exception, (2) demonstrating integration into a practical application by showing a technical improvement (often to AI technology itself), and (3) emphasizing additional elements that amount to “significantly more” than conventional solutions, adapted to AI-specific fact patterns.
  • Empirical §101 studies (2023 and 2025): Presents data on rising §101 rejection rates for AI-heavy art units and analyzes how examiners are applying the USPTO’s four-step test in practice, extracting lessons that can be fed back into both drafting and prosecution strategy.
  • Handling “mental process,” “organizing human activity,” and “mathematical concept” rejections: Provides targeted argument frameworks and drafting suggestions for reframing claims as improvements to specific AI architectures, data pipelines, or computing systems, rather than as abstract data analysis or business rules.
  • Training claims and §112 issues: Discusses recent USPTO practice on training-related claims and common written-description and definiteness challenges in AI applications, stressing the need to disclose sufficient architectural, training, and data-handling detail to support broad ML claims.
  • PTAB, appellate, and district court case law: Surveys recent decisions involving AI or ML claims from the PTAB, Federal Circuit, and district courts, illustrating how tribunals assess technological improvement, claim drafting choices, and the sufficiency of AI-related disclosures.
  • Detectability and enforcement realities: Flags the difficulty of detecting infringement of training-stage steps or internal model operations (especially in cloud-based services), and encourages considering detectability when selecting claim types and drafting to observable behaviors or outputs where possible.

IV. Other Considerations

The final section broadens the lens beyond core U.S. drafting and prosecution practice to address ethics, global patent-office practice, AI inventorship, and emerging legislation. It connects AI patent strategy to evolving AI governance frameworks (including safety, discrimination, and transparency principles), walks through eligibility and examination approaches at the EPO, JPO, CNIPA, KIPO, and UKIPO, and analyzes how different jurisdictions are treating AI inventorship and ownership. The section concludes with an overview of pending and proposed legislative reforms that could reshape AI-related IP protection in the United States and abroad.

Key points:

  • Ethics and responsible AI: Links AI patenting to broader policy frameworks such as safety and effectiveness, algorithmic discrimination protections, data privacy, notice and explanation, and human oversight, as well as international instruments like the G7 code of conduct and the European AI Act, framing these as context for both invention harvesting and risk evaluation.
  • Comparative office practice (EPO, Japan, China, Korea, UK): Provides jurisdiction-specific guidance on eligibility, enablement, and claim-type preferences for AI inventions in major patent offices, including examples of allowable claim forms and discussion of case law and examination standards that may diverge from U.S. practice.
  • AI inventorship and ownership debates: Explores approaches to AI-assisted and AI-generated inventions, including the DABUS cases and related policy discussions on whether, and how, AI systems can be reflected in the inventorship framework while maintaining the legal requirement that inventors be natural persons.
  • Proposed and emerging legislation: Surveys legislative proposals aimed at clarifying IP protection for AI-related outputs, regulating AI systems, or adjusting patent and copyright frameworks, and highlights practical implications for in-house and outside counsel planning long-term AI patent portfolios.

If you would like a copy of the publication, please let me know. I look forward to working on this year’s upcoming paper to advance these topics further. 

****

Subscribe to get updates to this post or to receive future posts from PatentNext. Start a discussion or reach out to the author, Ryan Phelan, at rphelan@marshallip.com or 312-474-6607. Connect with or follow Ryan on LinkedIn.