PatentNext Summary: Artificial Intelligence (AI) systems are expected to increasingly provide automated decisions impacting, for example, home ownership, job recruitment, and other important life events. In this way, such AI systems have the power to impact a wide variety of people and should be trained in a manner that eliminates bias and promotes fairness. The White House has recently published a Blueprint for an AI Bill of Rights that seeks to acknowledge and address these potentially inherent ethical risks of AI systems. 


Ethical Considerations of Artificial Intelligence (AI)

In the whitepaper titled “Protecting Inventions Relating to Artificial Intelligence: Best Practices” (as published by the Intellectual Property Owners Association (IPO)), I explored the ethical considerations of AI. 

There I noted that AI could be applied in ways that can have a disproportionate impact on people based on certain demographics such as sex, gender, race, and socioeconomic status. A disproportionate impact can arise, for example, when an AI model, such as a machine learning model, is trained on data that improperly takes into account such differences. That is, improper training, in some cases, can introduce statistical bias in an AI model where some groups of people are treated differently but where they otherwise should not be. 

An understanding of how an AI model is trained illustrates how statistical bias can arise. In the field of computer science, the familiar phrase “garbage in, garbage out” applies especially to AI models. That is, if the data used to build an AI model is faulty, one can expect the model to produce faulty output. This is because AI is fundamentally a data-driven technology. Training involves developing datasets that are fed as input to an AI model. Once trained, the model may take new data as input to predict, classify, or otherwise output results for use in a variety of applications. 

Therefore, an inherent danger lies in the use of incomplete, biased, or otherwise faulty datasets to train an AI model. A model trained on such faulty data can produce faulty predictions, classifications, or other outputs and decisions that have a real-world impact on people’s livelihoods.

With this understanding, ethical aspects of AI could (and likely should) be considered in the development of AI models or systems, including those claimed as patentable inventions.

The White House’s AI Bill of Rights

On October 4, 2022, the White House’s Office of Science and Technology Policy published the Blueprint for an AI Bill of Rights. The AI Bill of Rights explores ethical considerations when deploying AI. For example, according to the White House, the AI Bill of Rights is intended to “support the development of policies and practices that protect civil rights and promote democratic values in the building, deployment, and governance of automated systems.” See AI Bill of Rights, About this Document.

The White House recognizes the importance of AI and “automated systems” and their effect on modern-day life: “Automated systems have brought about extraordinary benefits, from technology that helps farmers grow food more efficiently and computers that predict storm paths, to algorithms that can identify diseases in patients.”

However, with the increased use of AI systems comes the increased risk of possible bias and discrimination. Consequently, the AI Bill of Rights is designed to combat such risks, including “algorithmic discrimination,” which the White House defines as follows: 

Algorithmic discrimination occurs when automated systems contribute to unjustified different treatment or impacts disfavoring people based on their race, color, ethnicity, sex (including pregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual orientation), religion, age, national origin, disability, veteran status, genetic information, or any other classification protected by law. 

The AI Bill of Rights promotes the following principles to overcome algorithmic discrimination and other automated-system biases. These include certain proposed “rights” of U.S. citizens that designers, developers, and deployers should consider when building and using an automated system, such as an AI-based system:

  1. Safe and Effective Systems: Citizens should be protected from unsafe or ineffective systems. This includes subjecting automated systems to pre-deployment testing, risk identification, and mitigation. Automated systems should also undergo ongoing monitoring that demonstrates such systems are safe and effective based on their intended use. Such systems should include safeguards against unsafe outcomes, including those beyond the intended use, and should adhere to domain-specific standards.
  2. Algorithmic Discrimination Protections: Citizens should be free from algorithmic discrimination; automated systems should instead be designed and used in an equitable way. Designers, developers, and deployers of automated systems should take proactive and continuous measures to protect individuals and communities from algorithmic discrimination. Independent evaluation and plain-language reporting in the form of an algorithmic impact assessment, including disparity testing results and mitigation information, should be performed and made public whenever possible to confirm these protections.
  3. Data Privacy: Citizens should be protected from abusive data practices via built-in protections and users should be able to control their private data. Designers, developers, and deployers of automated systems should seek user permission and respect user decisions regarding the collection, use, access, transfer, and deletion of user data. In addition, data surveillance should be limited in scope. The use of surveillance should be subjected to heightened oversight when used.  
  4. Notice and Explanation: Citizens should know when an automated system is being used and should be provided with an understanding of how and why the automated system contributes to any personal impact. Designers, developers, and deployers of automated systems should provide generally accessible plain language documentation that includes clear descriptions of the overall system functions and the role automation plays, notice that such systems are in use, the individual or organization responsible for the system, and explanations of outcomes that are clear, timely, and accessible.
  5. Human Alternatives, Consideration, and Fallback: Citizens should be able to opt out of an automated system, where appropriate, and instead seek a decision rendered by a human. Citizens should have access to timely human consideration and remedy through a fallback and escalation process if an automated system fails, produces an error, or the citizen wishes to appeal or contest an output of the automated system that has impacted them. 
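The “disparity testing” contemplated by the second principle can be sketched in a few lines. The data, the metric, and the 0.8 threshold below are illustrative assumptions (the threshold loosely echoes the employment-law “four-fifths rule”), not a legal standard or a prescribed methodology:

```python
# Fabricated outcomes of an automated system: (group, decision).
decisions = [
    ("A", "approve"), ("A", "approve"), ("A", "deny"), ("A", "approve"),
    ("B", "approve"), ("B", "deny"),    ("B", "deny"), ("B", "deny"),
]

def approval_rates(decisions):
    """Compute the fraction of 'approve' outcomes per group."""
    totals, approvals = {}, {}
    for group, outcome in decisions:
        totals[group] = totals.get(group, 0) + 1
        if outcome == "approve":
            approvals[group] = approvals.get(group, 0) + 1
    return {g: approvals.get(g, 0) / totals[g] for g in totals}

def disparity_flagged(rates, threshold=0.8):
    """Flag a disparity if any group's approval rate falls below
    `threshold` times the highest group's rate (illustrative test)."""
    best = max(rates.values())
    return any(rate < threshold * best for rate in rates.values())

rates = approval_rates(decisions)
print(rates)                     # {'A': 0.75, 'B': 0.25}
print(disparity_flagged(rates))  # True
```

A real algorithmic impact assessment would examine many more metrics and contexts, but even a simple check like this would surface the disparity in the example data above.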

Applying the Blueprint for an AI Bill of Rights

The White House advocates applying the AI Bill of Rights as a “framework” that uses a “two-part test” to determine which AI systems or otherwise “automated systems” are in scope. This framework applies to “(1) automated systems that (2) have the potential to meaningfully impact the American public’s rights, opportunities, or access to critical resources or services.”

This framework outlines protections that should be built into all automated systems that have the potential to meaningfully impact individuals’ or communities’ exercise of the following rights: 

  • Civil rights, civil liberties, and privacy, including freedom of speech, voting, and protections from discrimination, excessive punishment, unlawful surveillance, and violations of privacy and other freedoms in both public and private sector contexts;
  • Equal opportunities, including equitable access to education, housing, credit, employment, and other programs; or,
  • Access to critical resources or services, such as healthcare, financial services, safety, social services, non-deceptive information about goods and services, and government benefits.

Through the application of the AI Bill of Rights framework, the White House explains that it hopes to eliminate bias and discrimination from automated systems. 

The AI Bill of Rights is a “Blueprint” Intended to be Compatible with Existing Law 

The White House’s AI Bill of Rights does not constitute law. Rather, it is merely a “blueprint.” In its legal disclaimer, the White House expressly acknowledges this: “The Blueprint for an AI Bill of Rights is non-binding and does not constitute U.S. government policy. It does not supersede, modify, or direct an interpretation of any existing statute, regulation, policy, or international instrument.”

However, the AI Bill of Rights is intended to be compatible with existing law. For example, the White House notes that the AI Bill of Rights describes principles already “required” by current U.S. law, including, for example, laws governing government surveillance; data search and seizure; human review in criminal investigative and judicial matters; discrimination; and safety requirements for medical devices, as well as sector-specific, population-specific, and technology-specific privacy and security protections. See Relationship to Existing Law and Policy.

Accordingly, while the AI Bill of Rights does not have the force of law, in theory, a cause of action regarding an AI-based system could be brought under existing law. Alternatively, new laws could be created to specifically address the concerns or topics identified by the AI Bill of Rights. Therefore, practitioners should be mindful when developing AI or automated systems that could have an impact on existing or future citizen rights.


Subscribe to get updates to this post or to receive future posts from PatentNext. Start a discussion or reach out to the author, Ryan Phelan, at or 312-474-6607. Connect with or follow Ryan on LinkedIn.