T43. AI system non-discrimination assurance

Task description

Many jurisdictions impose non-discrimination laws and equal treatment requirements. Non-compliance with these requirements is incompatible with sustainable AI operations and may create significant legal and reputational risks.
 
The AI System Owner should ensure that the organization conducts and documents a non-discrimination assurance process confirming that the AI system's outputs comply with non-discrimination laws and equal treatment requirements. The legal advisory function should be involved in both designing and conducting the assurance.

Ensuring that an AI system creates no discrimination risk is challenging due to the nature of non-discrimination and equal treatment law. For example, under the Finnish Equality Act, an AI system would directly discriminate against a person if the system treated that person less favorably than others based on their age, nationality, language, religion, belief, opinion, political activity, trade union activity, family relationships, state of health, disability, sexual orientation, or other personal characteristics. Less favorable treatment constitutes discrimination even when it results from an apparently neutral rule. Despite this prima facie ban, differential treatment can be justified if it is mandated by law, or if the treatment has an acceptable objective in terms of fundamental and human rights and the measures taken to attain that objective are proportionate.
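One concrete way to probe for direct discrimination of this kind is a counterfactual test: hold all other inputs fixed, vary a single protected characteristic, and check whether the system's decision changes. The Python sketch below illustrates the idea; the model interface (`model.predict`), the applicant record layout, and the attribute names are hypothetical assumptions made for illustration, not part of the AIGA framework or the Equality Act.

```python
from copy import deepcopy

# Illustrative subset of the protected characteristics listed above (assumed names).
PROTECTED_ATTRIBUTES = ["age", "nationality", "language", "religion"]

def counterfactual_flip_test(model, applicant, attribute, alternatives):
    """Return the alternative values of `attribute` for which the model's
    decision differs from its decision on the original applicant."""
    baseline = model.predict(applicant)          # hypothetical predict() interface
    differing = []
    for value in alternatives:
        variant = deepcopy(applicant)            # change only the protected attribute
        variant[attribute] = value
        if model.predict(variant) != baseline:   # decision changed with the attribute alone
            differing.append(value)
    return differing
```

Any value returned by such a test marks a case where the protected characteristic alone changed the outcome, which is the pattern the prima facie ban targets, subject to the justification grounds described above.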

Conducting a diligent non-discrimination assurance is particularly important for AI systems whose algorithms were developed using machine learning approaches, since machine learning may result in inadvertent discrimination. Because such algorithms are often not explainable, detecting discriminatory bias may require post-hoc analysis tools and testing of AI system outputs against real-world data, as sketched below.
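As an illustration of what such post-hoc output testing could look like, the sketch below computes per-group favorable-outcome rates and a disparate impact ratio from a log of system decisions. The record layout (`group`, `outcome`) and the example data are assumptions made for illustration only, not a prescribed AIGA format.

```python
from collections import defaultdict

def positive_rates(records, group_key="group"):
    """Share of favorable outcomes (outcome == 1) per protected group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for record in records:
        totals[record[group_key]] += 1
        positives[record[group_key]] += record["outcome"]
    return {group: positives[group] / totals[group] for group in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group rate; values well below 1.0
    flag group-level differences that warrant legal review."""
    return min(rates.values()) / max(rates.values())

# Hypothetical decision log: loan decisions recorded with the applicant's age band.
decision_log = [
    {"group": "18-30", "outcome": 1}, {"group": "18-30", "outcome": 0},
    {"group": "31-50", "outcome": 1}, {"group": "31-50", "outcome": 1},
]
rates = positive_rates(decision_log)
print(rates)                          # {'18-30': 0.5, '31-50': 1.0}
print(disparate_impact_ratio(rates))  # 0.5
```

A low ratio does not by itself establish unlawful discrimination, but it flags outputs that the legal advisory function should examine against the justification grounds described above.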