Bias Is To Fairness As Discrimination Is To

Learn the basics of fairness, bias, and adverse impact. Anti-discrimination laws aim to protect individuals and groups from two standard types of wrongful discrimination: direct and indirect discrimination. More contested cases are put aside for the purposes of this essay.

Bias Is To Fairness As Discrimination Is To Influence

In addition to the issues raised by data mining and the creation of classes or categories, two other aspects of ML algorithms should give us pause from the point of view of discrimination. First, with complex machine learning models we no longer have access to clear, logical pathways guiding us from the input to the output (Burrell, "How the machine 'thinks': understanding opacity in machine learning algorithms"). Given how AI can compound and reproduce existing inequalities, or rely on problematic generalizations, this unexplainability is a fundamental concern for anti-discrimination law: explaining how a decision was reached is essential to evaluate whether it relies on wrongfully discriminatory reasons. Such an explanation is needed to ensure that no protected grounds were used wrongfully in the decision-making process and that no objectionable, discriminatory generalization has taken place. Algorithms may provide useful inputs, but human competence is still required to assess and validate those inputs.

The point is that using generalizations is wrongfully discriminatory when they affect the rights of some groups or individuals disproportionately compared to others in an unjustified manner; indeed, generalizations can be objectionable even when they do not lead to discriminatory results. A paradigmatic example of direct discrimination would be to refuse employment to a person on the basis of race, national or ethnic origin, colour, religion, sex, age, or mental or physical disability, among other possible grounds. (For a fuller treatment, see AI's fairness problem: understanding wrongful discrimination in the context of automated decision-making.)

On the technical side, the concept of equalized odds and equal opportunity, introduced in Equality of Opportunity in Supervised Learning, is that individuals who qualify for a desirable outcome should have an equal chance of being correctly assigned that outcome, regardless of whether they belong to a protected or an unprotected group (e.g., female/male). Related work measures discrimination directly in decision records (Pedreschi, Ruggieri and Turini, Measuring Discrimination in Socially-Sensitive Decision Records) or adapts the AdaBoost algorithm to optimize simultaneously for accuracy and fairness measures (e.g., Fair Boosting: a Case Study).
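As a rough illustration of the equal opportunity idea above, the sketch below compares true positive rates across two groups for a binary classifier. It is a minimal sketch, not the method from Equality of Opportunity in Supervised Learning itself; the toy arrays and the group labels are assumptions made for the example.

```python
import numpy as np

def true_positive_rate(y_true, y_pred, mask):
    """True positive rate restricted to the examples selected by `mask`."""
    qualified = mask & (y_true == 1)
    return (y_pred[qualified] == 1).mean()

def equal_opportunity_gap(y_true, y_pred, group):
    """Absolute difference in true positive rates between two groups.

    Equal opportunity asks for this gap to be (close to) zero: individuals
    who qualify for the desirable outcome should be correctly approved at
    the same rate regardless of group membership.
    """
    values = np.unique(group)
    assert len(values) == 2, "this sketch assumes a binary protected attribute"
    tpr_a = true_positive_rate(y_true, y_pred, group == values[0])
    tpr_b = true_positive_rate(y_true, y_pred, group == values[1])
    return abs(tpr_a - tpr_b)

# Illustrative, made-up labels and predictions for a hiring-style task.
y_true = np.array([1, 1, 0, 1, 0, 1, 1, 0])   # 1 = actually qualified
y_pred = np.array([1, 0, 0, 1, 0, 1, 0, 0])   # 1 = model recommends hiring
group  = np.array(["F", "F", "F", "F", "M", "M", "M", "M"])

print(equal_opportunity_gap(y_true, y_pred, group))  # prints the TPR gap; 0.0 would mean parity
```

Equalized odds strengthens this by also requiring parity in false positive rates; the same pattern of per-group rate comparisons applies.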

Is Discrimination A Bias?

First, we review these three terms (bias, fairness, and discrimination), how they are related, and how they differ. The next article in the series will discuss how you can start building out your approach to fairness for your specific use case, beginning with the problem definition and dataset selection. A more comprehensive working paper on this issue can be found here: Integrating Behavioral, Economic, and Technical Insights to Address Algorithmic Bias: Challenges and Opportunities for IS Research.

Anti-discrimination law singles out protected grounds. These include, but are not necessarily limited to, race, national or ethnic origin, colour, religion, sex, age, mental or physical disability, and sexual orientation. Moreover, the public has an interest as citizens and individuals, both legally and ethically, in the fairness and reasonableness of private decisions that fundamentally affect people's lives. A further question is whether a given practice infringes upon protected rights more than necessary to attain a legitimate goal; hence, using ML algorithms in situations where no rights are threatened would presumably be either acceptable or, at least, beyond the purview of anti-discrimination regulations.

Both Zliobaite (2015) and Romei et al. survey techniques to prevent or mitigate discrimination in machine learning, which can be put into three categories, typically described as pre-processing (modifying the training data), in-processing (modifying the learning algorithm), and post-processing (adjusting the model's outputs). This has fueled some optimism about using algorithms to combat discrimination. In this essay, however, we show that this optimism is at best premature and that extreme caution should be exercised; connecting studies on the potential impacts of ML algorithms with the philosophical literature on discrimination helps clarify under what conditions algorithmic discrimination is wrongful.
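To make the pre-processing category concrete, here is a minimal sketch of a reweighing-style correction, in the spirit of well-known pre-processing techniques rather than a faithful implementation of any one paper: each training example receives a weight so that, in the weighted data, the protected attribute and the label look statistically independent. The column names `sex` and `hired` and the toy data are assumptions for illustration.

```python
import pandas as pd

def reweighing_weights(df, protected="sex", label="hired"):
    """Per-example weights that make `protected` and `label` statistically
    independent in the weighted training data (a pre-processing approach)."""
    n = len(df)
    weights = pd.Series(1.0, index=df.index)
    for s_val, s_group in df.groupby(protected):
        p_s = len(s_group) / n                      # P(S = s)
        for y_val, sy_group in s_group.groupby(label):
            p_y = (df[label] == y_val).mean()       # P(Y = y)
            p_sy = len(sy_group) / n                # P(S = s, Y = y)
            # Weight = joint probability expected under independence
            #          divided by the observed joint probability.
            weights.loc[sy_group.index] = (p_s * p_y) / p_sy
    return weights

# Toy, made-up data purely for illustration.
data = pd.DataFrame({
    "sex":   ["F", "F", "F", "M", "M", "M"],
    "hired": [0,   0,   1,   1,   1,   0],
})
data["weight"] = reweighing_weights(data)
print(data)  # combinations underrepresented relative to independence get weights > 1
```

The resulting weights can be passed to any learner that accepts sample weights; in-processing and post-processing methods would instead change the training objective or adjust the trained model's decisions.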

Bias Is To Fairness As Discrimination Is To Justice

Bias is to fairness as discrimination is to justice. However, a testing process can still be unfair even if there is no statistical bias present: algorithms cannot be thought of as pristine and sealed off from past and present social practices. Notice, moreover, how an autonomy-based approach is at odds with some of the typical conceptions of discrimination: if belonging to a certain group directly explains why a person is being discriminated against, then it is an instance of direct discrimination regardless of whether there is an actual intent to discriminate on the part of a discriminator. This position seems to be adopted by Bell and Pei [10]. Consider, too, how such systems are set up in the first place: to decide whether an email is spam (the target variable), an algorithm relies on two class labels, spam or not spam, a relatively well-established distinction.
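To make the target variable and class label distinction concrete, here is a minimal, self-contained sketch of a spam classifier in the spirit of that example. The tiny training set and the choice of scikit-learn's CountVectorizer plus Naive Bayes are assumptions made for illustration, not part of the original discussion.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Target variable: is this email spam?  Class labels: "spam" / "not spam".
emails = [
    "win a free prize now",
    "meeting agenda for tomorrow",
    "cheap loans click here",
    "lunch on friday?",
]
labels = ["spam", "not spam", "spam", "not spam"]

# Bag-of-words features feeding a simple Naive Bayes classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)

print(model.predict(["claim your free prize"]))    # likely ["spam"]
print(model.predict(["agenda for friday lunch"]))  # likely ["not spam"]
```

The point of the example stands: "spam or not spam" is a relatively well-established distinction, whereas the discrimination worries discussed above arise when target variables and class labels encode contested or socially loaded categories.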

Explanations cannot simply be extracted from the innards of the machine [27, 44]. If everyone is subjected to an unexplainable algorithm in the same way, it may be unjust and undemocratic, but it is not an issue of discrimination per se: treating everyone equally badly may be wrong, but it does not amount to discrimination. This brings us to the second consideration: treating a person as someone at risk of recidivating during a parole hearing based only on the characteristics she shares with others is illegitimate, because it fails to consider her as a unique agent.