Artificial intelligence’s ability to process and analyze vast amounts of data has revolutionized decision-making processes, making operations in healthcare, finance, criminal justice and other sectors of society more efficient and, in many cases, more effective.
With this transformative power, however, comes a significant responsibility: the need to ensure that these technologies are developed and used in a fair and just way. In short, AI must be fair.
The pursuit of fairness in AI is not just an ethical imperative but a requirement for fostering trust, inclusiveness and the responsible advancement of technology. However, ensuring that AI is fair is a major challenge. Moreover, my research as a computer scientist who studies AI shows that efforts to ensure fairness in AI can have unintended consequences.
Why fairness matters in AI
Fairness in AI has emerged as a critical area of focus for researchers, developers and policymakers. It transcends technical achievement, touching on the ethical, social and legal aspects of technology.
Ethically, fairness is a cornerstone of building trust and acceptance of AI systems. People need to trust that AI decisions that affect their lives—for example, hiring algorithms—are made fairly. Socially, AI systems that incorporate fairness can help combat and mitigate historical biases—for example, those against women and minorities—which promotes inclusivity. Legally, embedding fairness in AI systems helps align those systems with anti-discrimination laws and regulations around the world.
Unfairness can come from two main sources: the input data and the algorithms. Research has shown that input data can perpetuate bias in various sectors of society. For example, in hiring, algorithms that process data reflecting societal biases or lacking diversity can perpetuate a “like me” bias: favoring candidates who are similar to the decision makers or to those already in an organization. When biased data is then used to train a machine learning algorithm that assists a decision maker, the algorithm can propagate and even amplify those biases.
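To make this concrete, here is a minimal sketch, using entirely synthetic data and a hypothetical hiring setup, of how a model trained on biased historical decisions reproduces the disparity in its own predictions:

```python
# Hypothetical sketch with synthetic data: a model trained on biased
# hiring decisions learns to reproduce the disparity.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)        # 0 = majority, 1 = minority
skill = rng.normal(0.0, 1.0, n)      # true qualification, identical across groups

# Historical labels: equally skilled minority candidates were hired less often.
hired = (skill + rng.normal(0.0, 0.5, n) - 0.8 * group > 0).astype(int)

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: predicted hire rate = {pred[group == g].mean():.2f}")
```

Because the historical labels penalized one group, the trained model assigns that group a lower predicted hire rate, even though skill was drawn identically for both groups.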
Why fairness in AI is hard
Fairness is inherently subjective, influenced by cultural, social and personal perspectives. In the context of AI, researchers, developers and policymakers often interpret fairness as the idea that algorithms should not perpetuate or exacerbate existing biases or inequalities.
However, measuring fairness and incorporating it into AI systems is fraught with subjective decisions and technical difficulties. Researchers and policymakers have proposed different definitions of fairness, such as demographic parity, equality of opportunity and individual fairness.
These definitions involve different mathematical formulations and underlying philosophies. They also often conflict: outside of degenerate cases, a classifier cannot satisfy several of these criteria at once when groups differ in their base rates, which reflects the difficulty of satisfying all fairness criteria simultaneously in practice.
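To illustrate how two of these criteria differ in practice, here is a small sketch with toy data and illustrative function names, not any standard library API: demographic parity compares selection rates across groups, while equality of opportunity compares true positive rates:

```python
# Hedged sketch: two common fairness metrics computed from a model's
# binary predictions. Names and data are illustrative only.
import numpy as np

def demographic_parity_gap(pred, group):
    """Absolute difference in selection rates between groups 0 and 1."""
    return abs(pred[group == 0].mean() - pred[group == 1].mean())

def equal_opportunity_gap(pred, label, group):
    """Absolute difference in true positive rates between groups 0 and 1."""
    tpr = lambda g: pred[(group == g) & (label == 1)].mean()
    return abs(tpr(0) - tpr(1))

# Toy example: predictions for eight candidates in two groups.
pred  = np.array([1, 1, 0, 0, 0, 0, 1, 0])
label = np.array([1, 1, 0, 0, 1, 0, 1, 0])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_gap(pred, group))        # 0.25
print(equal_opportunity_gap(pred, label, group))  # 0.5
```

In this toy example the two gaps take different values, and in general driving one to zero does not drive the other to zero.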
Moreover, fairness cannot be distilled into a single metric or guideline. It encompasses a spectrum of considerations including, but not limited to, equality of opportunity, treatment and impact.
Unintended effects on fairness
The multifaceted nature of fairness means that AI systems must be scrutinized at every stage of their development cycle, from initial design and data collection to final deployment and ongoing evaluation. This scrutiny reveals another layer of complexity: AI systems are rarely used in isolation. They are embedded in complex and often consequential decision-making processes, such as recommendations for hiring or the allocation of funds and resources, and they operate under many constraints, including security and privacy requirements.
Research by my colleagues and me shows that constraints such as limited computational resources, hardware types and privacy requirements can significantly affect the fairness of AI systems. For example, the need for computational efficiency can lead to simplifications that neglect or misrepresent marginalized groups.
In our study of network pruning – a method of making complex machine learning models smaller and faster – we found that this process can unfairly affect certain groups. This is because pruning may not take into account how different groups are represented in the data and in the model, leading to biased results.
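The mechanism can be sketched with a toy linear model, a simplified hypothetical illustration rather than the study's actual deep-network setup: the weight that serves the underrepresented group contributes little to the overall training loss, so it stays small and is the first to go under magnitude pruning:

```python
# Hypothetical sketch of magnitude pruning hurting an underrepresented group.
import numpy as np

rng = np.random.default_rng(1)
n_major, n_minor = 9000, 1000
group = np.r_[np.zeros(n_major, int), np.ones(n_minor, int)]

# Feature 0 carries the signal for majority rows; feature 1 for minority rows.
x = rng.normal(0.0, 1.0, n_major + n_minor)
X = np.column_stack([x * (group == 0), x * (group == 1)])
y = (x > 0).astype(float)

# Ridge regression: the minority feature appears in few rows, contributes
# little to the loss, and therefore receives a much smaller weight.
lam = 3000.0
w = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ (y - 0.5))

def per_group_accuracy(weights):
    pred = (X @ weights > 0).astype(float)
    return [(pred == y)[group == g].mean() for g in (0, 1)]

print("weights:", w)                       # minority weight is much smaller
print("before pruning:", per_group_accuracy(w))

w_pruned = w.copy()
w_pruned[np.argmin(np.abs(w_pruned))] = 0.0   # magnitude pruning: drop smallest weight
print("after pruning: ", per_group_accuracy(w_pruned))
```

Both groups are classified well before pruning, but removing the smallest-magnitude weight drops the minority group to chance accuracy while leaving the majority untouched.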
Similarly, although privacy-preserving techniques are critical, they can obscure the data necessary to identify and mitigate biases or disproportionately influence the results for minorities. For example, when statistical agencies add noise to data to protect privacy, this can lead to an unfair allocation of resources because the added noise affects some groups more than others. This discrepancy can also skew decision-making processes that rely on this data, such as resource allocation for public services.
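As a stylized illustration, with hypothetical numbers rather than any agency's actual mechanism, adding noise of the same scale to every group's population count barely perturbs large counts but substantially distorts small ones:

```python
# Hypothetical illustration: the same Laplace noise scale for every
# group's count means small groups suffer far larger relative error.
import numpy as np

rng = np.random.default_rng(2)
true_counts = np.array([100_000.0, 5_000.0, 500.0])  # large, medium, small group
scale = 50.0                                          # same noise scale for all groups

trials = 10_000
noisy = np.clip(true_counts + rng.laplace(scale=scale, size=(trials, 3)), 0, None)
rel_err = np.abs(noisy - true_counts) / true_counts

for name, err in zip(["large", "medium", "small"], rel_err.mean(axis=0)):
    print(f"{name} group: mean relative count error = {err:.2%}")
```

Any budget allocated in proportion to the noisy counts inherits these errors, so the smallest groups' shares drift the most.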
These constraints do not operate in isolation; they intersect in ways that compound their impact on fairness. For example, when privacy measures amplify biases in the data, they can deepen existing inequalities. A comprehensive approach to AI development therefore needs to address privacy and fairness together.
The path forward
Making AI fair is not easy, and there are no one-size-fits-all solutions. It requires a process of continuous learning, adaptation and collaboration. Since bias is pervasive in society, I believe that people working in the AI field should recognize that perfect fairness is unachievable and instead strive for continuous improvement.
This challenge requires a commitment to rigorous research, thoughtful policymaking and ethical practice. For it to work, researchers, developers and users of AI will need to ensure that fairness considerations are embedded in every stage of the AI pipeline, from its inception through data collection and algorithm design to deployment and beyond.
This article is republished from The Conversation, a non-profit, independent news organization that brings you reliable facts and analysis to help you make sense of our complex world. It was written by: Ferdinando Fioretto, University of Virginia
Ferdinando Fioretto receives funding from the National Science Foundation, Google, and Amazon.