PatentNext Takeaway: Can text generated by artificial intelligence (AI) (e.g., an “AI-generated text”) constitute “prior art” pursuant to U.S. patent law? The answer to that question will impact whether AI-generated text can be used to preclude human inventions from issuing as patents in the United States. Third-party entities currently publish AI-generated text for the express purpose of preventing patent inventions from issuing. But this seems to run afoul of U.S. law requiring human “conception,” not to mention the U.S. Constitution, which seeks “[t]o promote the Progress of Science and useful Arts ….” Accordingly, it is possible that Congress or the courts will look not only to the related statutory text, but also to existing court decisions, to preclude AI-generated text from constituting “prior art.”

Third parties, such as “All Prior Art,” currently create and publish “[a]lgorithmically generated prior art.” All Prior Art. According to All Prior Art, its mission is “to algorithmically create and publicly publish all possible new prior art, thereby making the published concepts not patent-able.” Id. (About Us). All Prior Art acknowledges that most of its output is nonsensical: “The system works by pulling text from the entire database of US issued and published (un-approved) patents and creating prior art from the patent language. While most inventions generated will be nonsensical, the cost to computationally create and publish millions of ideas is nearly zero – which allows for a higher probability of possible valid prior art.” Id.

All Prior Art outputs a paragraph of AI-generated text that looks like an abstract of a would-be patent. At the time of this writing, approximately 107,000 purported AI-generated paragraphs have been publicly disclosed via the All Prior Art website. 

This article addresses why such AI-generated text might not constitute “prior art.” In the U.S., “prior art” is defined by Section 102(a) of Title 35 of the United States Code as follows:

Novelty; Prior Art.—A person shall be entitled to a patent unless—

(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention; or

(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

35 U.S.C. § 102 (“Novelty; Prior Art”) (emphasis added).

The below sections address relevant portions of Section 102(a) with respect to AI-generated text.

“The claimed invention was described in … a patent or application [that] names another inventor …”  (Section 102(a)(2))

Recent Federal Circuit case law establishes that only a human can be named as an inventor. In particular, the U.S. Court of Appeals for the Federal Circuit has ruled that U.S. patent law requires at least one human inventor. See Thaler v. Vidal, Case No. 2021-2347 (Fed. Cir. Aug. 5, 2022). See also PatentNext: Can an Artificial Intelligence (AI) be an Inventor? And the U.S. Supreme Court has since denied Stephen Thaler’s petition that attempted to alter that ruling. See PatentNext: The Future of AI Inventorship Following Denial of Stephen Thaler’s Petition.

Accordingly, at least for now, an invention must have a named human inventor. 

As shown in the statutory language above, Section 102(a)(2) defines “prior art” in the context of human activity. That is, to be “prior art,” a patent or published patent application must describe a claimed invention that names another inventor.

Because U.S. law requires a human inventor (per Thaler), and because Section 102(a)(2) defines “prior art” as a patent or published patent application that names another inventor, AI-generated text, especially text that wholly lacks any human input, likely would not constitute prior art under the plain language of Section 102(a)(2) itself.

“The claimed invention was … described in a printed publication … or otherwise available to the public …” (Section 102(a)(1))

As shown in the statutory text above, Section 102(a)(1) requires “prior art” to describe the claimed invention in a “printed publication” or that the claimed invention is “otherwise available to the public.”

 Unlike Section 102(a)(2), Section 102(a)(1) does not expressly state whether prior art needs to “name[] another inventor” or require any other human-related activity. 

Thus, a broad and strictly textualist interpretation of Section 102(a)(1) could hold that AI-generated text (e.g., a paragraph output by All Prior Art) constitutes prior art. Under this interpretation, a U.S. court might analyze whether a printed publication was sufficiently “in the public domain” and give little or no weight to who (or what) created the “printed publication.” The fact that the text was publicly accessible, or “otherwise available to the public,” would control.

However, other courts may interpret the statute differently, especially given that the text of Section 102(a)(1) has been subject to judicial interpretation in the past. See, e.g., In re Wyer, 655 F.2d 221, 210 USPQ 790 (CCPA 1981) (holding that the term “printed publication” carries a unitary meaning centered on public accessibility, rather than two separate requirements of “printed” and “publication”).

With respect to AI-generated text, a court may find that such output lacks human “conception” such that a given invention cannot be said to be described in a “printed publication” or “otherwise available to the public” pursuant to Section 102(a)(1).  

Under U.S. law, the definition of an “invention” includes the concept of “conception.” Conception is often referred to as a mental act, or the mental part, of invention. Univ. of Utah v. Max-Planck-Gesellschaft Zur Forderung Der Wissenschaften E.V., 734 F.3d 1315, 1323 (Fed. Cir. 2013). Conception “is ‘the formation in the mind of the inventor, of a definite and permanent idea of the complete and operative invention, as it is hereafter to be applied in practice.’” Burroughs Wellcome Co. v. Barr Labs., Inc., 40 F.3d 1223, 1228 (Fed. Cir. 1994) (citing Hybritech Inc. v. Monoclonal Antibodies, Inc., 802 F.2d 1367, 1376 (Fed. Cir. 1986) (quoting 1 Robinson on Patents 532 (1890))). “Because conception is an act performed in the mind, it has to date been understood as only performed by natural persons. The courts have been unwilling to extend conception to non-natural persons.” Univ. of Utah, 734 F.3d at 1323; Beech Aircraft Corp. v. EDO Corp., 990 F.2d 1237, 1248 (Fed. Cir. 1993).

Current generative AI tools do not “conceive” inventions in the way natural persons do, at least because they do not form inventions or ideas in a natural “mind.” Instead, current generative AI tools are statistical. For example, current large language models (LLMs), such as ChatGPT, output a string of tokens (small units of text, such as word fragments) in response to an input prompt. At each step, the LLM chooses the next token that is statistically most likely to follow the tokens generated so far. In this way, words follow statistically chosen tokens, sentences follow statistically chosen words, paragraphs follow statistically chosen sentences, and so on, until a complete output is generated. Said another way, the output of the LLM may appear coherent to a human reader, but in actuality it is simply a string of text predicted to read correctly. The LLM has no human-like understanding of its output, much less any “conception” of any supposed invention contained therein.
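The token-by-token process described above can be sketched in miniature. The following toy Python example is an illustration only, not a real LLM: it uses hand-made bigram frequency counts (an assumed, hypothetical dataset) and greedy decoding, picking the statistically most frequent successor at each step, to show how text can be generated purely from counted statistics, with no “mind” or conception involved:

```python
# Toy illustration (NOT a real LLM): a bigram "language model" that,
# given the current token, emits the statistically most frequent next
# token -- mirroring, in miniature, how an LLM generates one token
# after another. The frequency counts below are invented for this sketch.
BIGRAM_COUNTS = {
    "the": {"claimed": 5, "invention": 3},
    "claimed": {"invention": 8},
    "invention": {"was": 6, "is": 2},
    "was": {"described": 4},
}

def next_token(current: str) -> str:
    """Greedy decoding: return the most frequent successor token."""
    candidates = BIGRAM_COUNTS.get(current, {})
    if not candidates:
        return "<end>"  # no statistics available; stop generating
    return max(candidates, key=candidates.get)

def generate(start: str, max_tokens: int = 5) -> list:
    """Chain token choices one after another, purely from counts."""
    tokens = [start]
    for _ in range(max_tokens):
        nxt = next_token(tokens[-1])
        if nxt == "<end>":
            break
        tokens.append(nxt)
    return tokens

print(" ".join(generate("the")))  # prints: the claimed invention was described
```

The program produces a grammatical-looking sentence, yet nothing in it understands, let alone conceives, what the sentence says; each word is simply the most probable continuation of the last. Real LLMs operate on the same principle at vastly larger scale, with learned probabilities instead of hand-made counts.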

Such output, therefore, cannot be said to be an invention, or an idea otherwise conceived, in a natural “mind” in the manner of natural persons. Such output thus fails to satisfy existing inventorship law, which requires that conception include forming a definite and permanent idea of a complete and operative invention in the human mind.

Accordingly, under this interpretation, AI-generated text, whether set forth in a “printed publication” or “otherwise available to the public,” fundamentally lacks “conception” as defined by the courts and thus cannot describe a “claimed invention” pursuant to Section 102(a)(1).

This conclusion is consistent with the Federal Circuit’s Thaler decision, which held that only a human can be the inventor of a claimed invention, or at least be named as an inventor on a patent application.

Similarly, this conclusion is consistent with the USPTO Inventorship Guidance for AI-Assisted Inventions, which allows AI-assisted inventions to be patented only when a human has made a “significant contribution” to the invention pursuant to the Federal Circuit’s so-called Pannu factors. See PatentNext: The U.S. Patent Office provides Inventorship Guidance for AI-assisted Inventions.

Perhaps most importantly, this conclusion is consistent with the U.S. Constitution, which seeks “To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.” U.S. Constitution, Article I, Section 8, Clause 8. Said another way, the efforts of third parties such as All Prior Art, which seek to create AI-generated text for the purpose of “algorithmically creat[ing] and publicly publish[ing] all possible new prior art, thereby making the published concepts not patent-able,” seem fundamentally opposed to the intellectual property clause of the U.S. Constitution.

Ultimately, the question of whether AI-generated text constitutes “prior art” will need to be decided by the courts or Congress. At least under the above interpretation rooted in human “conception,” the courts and Congress have a foothold to preclude AI-generated text from constituting “prior art” under Section 102(a) of Title 35 of the U.S. Code.


Subscribe to get updates to this post or to receive future posts from PatentNext. Start a discussion or reach out to the author, Ryan Phelan, at (Tel: 312-474-6607). Connect with or follow Ryan on LinkedIn.