Should Artificial Intelligence (AI) Take Over the Decision-Making Process?
Artificial intelligence (AI) is increasingly being used in sensitive areas such as healthcare, the criminal justice system, and hiring. Courts today use AI systems to assess offenders' risk of recidivism and the flight risk of defendants awaiting trial. One such algorithm is the Arnold Foundation algorithm, which draws on 1.5 million criminal cases to predict defendants' behavior during pretrial hearings (Zavrsnik, 2020). With such advancements, there is a growing debate on whether AI should fully replace human decision-making on the grounds that it could eliminate human bias. Sources have shown that, contrary to popular belief, AI decisions are not always less biased than human ones. For instance, a 2016 investigation by ProPublica showed that a data-driven AI tool used by courts to assess the risk of recidivism was biased against minorities and people of color (Silberg & Manyika, 2019). In the UK, a computer program used to determine which applicants would be invited to interview for medical school was found to be biased against female applicants and those with non-European names (Silberg & Manyika, 2019). This text analyzes the potential causes of such biases in the use of AI and considers what decision-making is likely to look like in the future.
Potential Causes of Bias in AI Use
Sources have identified several causes of algorithmic bias in AI systems. Bias can enter through training data shaped by pre-existing cultural, social, and institutional expectations, thereby perpetuating historical or societal inequities (Silberg & Manyika, 2019). For instance, training data may contain language that reflects the gender stereotypes prevalent in society. A hiring algorithm designed to favor words such as 'captured' or 'executed' is, for instance, more likely to be biased against female applicants, as such words are more common in men's applications than in women's (Silberg & Manyika, 2019). One algorithm that exhibited this kind of bias was one developed by Amazon engineers to aid in the company's recruitment (Lee, Resnick, & Barton, 2019). The algorithm was designed to recognize word patterns in applicants' resumes rather than relevant skills. The AI software penalized any resume that contained the word 'women' and downgraded applicants who attended women's colleges, giving rise to gender bias (Lee, Resnick, & Barton, 2019). If historical biases are baked into the data, the resultant model will make the same kinds of wrong judgments that people do.
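To make the mechanism concrete, below is a minimal, hypothetical sketch of a keyword-pattern scorer of the kind described. The keywords, weights, and resume snippets are invented for illustration and do not represent Amazon's actual system.

```python
# Hypothetical sketch of a keyword-pattern resume scorer.
# All keywords, weights, and resumes are invented for illustration;
# this is not Amazon's actual algorithm.

FAVORED = {"captured": 2.0, "executed": 2.0}   # verbs more common in men's resumes
PENALIZED = {"women": -3.0}                    # e.g., "women in engineering society"

def score_resume(text: str) -> float:
    """Score a resume by summing the weights of matched keywords."""
    words = text.lower().split()
    return sum(FAVORED.get(w, 0.0) + PENALIZED.get(w, 0.0) for w in words)

resumes = {
    "A": "captured market share and executed product launches",
    "B": "led the women in engineering society and launched products",
}
for name, text in resumes.items():
    print(name, score_resume(text))
# Resume B is downgraded purely for containing the word "women",
# reproducing the historical word patterns in the training data.
```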
Bias may also be introduced through the data collection techniques employed. For instance, a financial algorithm trained on samples that underrepresent certain minority groups could produce models with systematically lower approval rates for those groups (Silberg & Manyika, 2019).
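A short, hypothetical simulation can illustrate how a collection flaw of this kind feeds through to approval decisions. The assumption here, invented purely for illustration, is that repaid loans from one group are recorded only half the time, so the model underestimates that group's true repayment rate.

```python
import random

# Hypothetical sketch: if data collection misses repaid loans for
# group B, a model that learns per-group repayment rates will
# approve B less often. All numbers are invented for illustration.

random.seed(0)
TRUE_REPAY = {"A": 0.80, "B": 0.80}   # both groups equally creditworthy

def collect(group: str, n: int, capture_rate: float) -> list[int]:
    """Simulate biased collection: repaid loans (1) from this group
    enter the dataset only with probability capture_rate."""
    rows = []
    for _ in range(n):
        repaid = 1 if random.random() < TRUE_REPAY[group] else 0
        if repaid and random.random() > capture_rate:
            continue  # repaid loan never recorded
        rows.append(repaid)
    return rows

data = {"A": collect("A", 5000, 1.0),   # fully captured
        "B": collect("B", 5000, 0.5)}   # half of B's repayments missing

for group, rows in data.items():
    est = sum(rows) / len(rows)
    print(group, f"estimated repayment rate = {est:.2f}",
          "approve" if est >= 0.75 else "deny")
# A is approved, B is denied, even though both repay 80% of the time.
```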
Another potential cause of AI bias is technical limitations in the design of the algorithm itself (Silberg & Manyika, 2019). Owing to such limitations, an algorithm may pick up on statistical correlations that are either illegal or societally unacceptable. For instance, a mortgage lending algorithm may find that older individuals have higher default rates and may consequently reduce lending as age increases. If the algorithm recommends loans to younger applicants but denies older applicants based solely on this criterion, and this behavior is repeated across many decisions, the algorithm would be described as biased against older loan applicants.
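The following hypothetical sketch shows how such a correlation-driven rule behaves. The age brackets, default rates, and risk cutoff are all invented for illustration.

```python
# Hypothetical sketch of an age-correlated lending rule. The point
# is that a rule learned purely from a statistical correlation can
# amount to systematic denial on a legally protected attribute.

AGE_DEFAULT_RATE = {  # invented correlation "found" by the model
    (18, 40): 0.10,
    (40, 60): 0.12,
    (60, 120): 0.15,
}

def approve(age: int, max_risk: float = 0.13) -> bool:
    """Deny whenever the age bracket's historical default rate
    exceeds the lender's risk cutoff -- age becomes the sole criterion."""
    for (lo, hi), rate in AGE_DEFAULT_RATE.items():
        if lo <= age < hi:
            return rate <= max_risk
    return False

for age in (25, 45, 70):
    print(age, "approved" if approve(age) else "denied")
# 25 and 45 are approved; 70 is denied solely because of age bracket.
```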
Bias may also arise if an algorithm is used in unanticipated contexts or with an audience that was not considered in its initial design. In Latanya Sweeney's research on racial differences in online advertisements, searches for names common among African-Americans were found to return more ads containing the word 'arrest' than searches for names common among whites (Silberg & Manyika, 2019). The researchers hypothesized that even if versions of the ad with and without the word 'arrest' were initially displayed equally, users may have clicked on different versions more frequently for different searches, leading the algorithm to display those versions more often (Silberg & Manyika, 2019). In this case, the algorithm exhibits bias against African-Americans because it is used in a context other than the one anticipated: the algorithm was designed for marketing, not for assessing arrest rates by race. Algorithms react to billions of user actions every day, making this a significant source of bias.
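A small, hypothetical simulation can show how such a feedback loop snowballs. It assumes a simple greedy click-maximizing policy and invented click rates; it is not a model of any real ad-serving system.

```python
import random

# Hypothetical simulation of the feedback loop Sweeney's study
# suggests: two ad variants start out shown roughly equally, but if
# users click the "arrest" variant slightly more often for one group
# of searches, a click-maximizing policy ends up showing it far more
# often for that group. Click probabilities are invented.

random.seed(1)
CLICK_P = {("group_x", "arrest"): 0.06, ("group_x", "neutral"): 0.04,
           ("group_y", "arrest"): 0.04, ("group_y", "neutral"): 0.06}

def simulate(group: str, rounds: int = 10000) -> dict[str, int]:
    shows = {"arrest": 1, "neutral": 1}   # start near-equal
    clicks = {"arrest": 1, "neutral": 1}  # smoothing to avoid div-by-zero
    for _ in range(rounds):
        # greedy policy: show the variant with the higher observed CTR
        ad = max(shows, key=lambda a: clicks[a] / shows[a])
        shows[ad] += 1
        if random.random() < CLICK_P[(group, ad)]:
            clicks[ad] += 1
    return shows

for group in ("group_x", "group_y"):
    print(group, simulate(group))
# A small gap in click rates snowballs into a large gap in exposure.
```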
Decision-Making in the Future
The primary question in analyzing the future of AI is whether there are situations in which decision-making could be fully automated. Silberg and Manyika (2019) posit that as long as the potential sources of algorithmic bias have not been fully addressed, it is unlikely that AI will fully replace human decision-making in the future. Some sources of bias, such as those related to the way data is collected, require human judgment to detect and correct (Silberg & Manyika, 2019). As such, the best strategy for decision-making in the future is to combine human judgment with AI. It is important, therefore, to consider where human judgment is needed in the decision-making process (and in what form) and where AI systems are better suited (Silberg & Manyika, 2019).
As a best practice, sources recommend that proper audits and impact assessments be conducted to check for fairness and the risk of bias before AI systems are deployed (Silberg & Manyika, 2019). Further, experts may need to collaborate to review AI systems on an ongoing basis and suggest frameworks that can improve fairness (Silberg & Manyika, 2019).
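As one illustration of what such an audit might compute, the sketch below calculates a disparate impact ratio (the "four-fifths rule" commonly used in U.S. employment practice) on invented decision data. Real audits of the kind the sources describe would be far broader than this single metric.

```python
# Minimal sketch of one pre-deployment audit check: the disparate
# impact ratio. The decision data below is invented for illustration.

def disparate_impact(decisions: list[tuple[str, bool]],
                     protected: str, reference: str) -> float:
    """Ratio of the protected group's approval rate to the
    reference group's approval rate."""
    def rate(group: str) -> float:
        outcomes = [ok for g, ok in decisions if g == group]
        return sum(outcomes) / len(outcomes)
    return rate(protected) / rate(reference)

audit_sample = ([("ref", True)] * 80 + [("ref", False)] * 20 +
                [("prot", True)] * 50 + [("prot", False)] * 50)

ratio = disparate_impact(audit_sample, "prot", "ref")
print(f"disparate impact ratio = {ratio:.2f}")
if ratio < 0.8:  # common four-fifths threshold
    print("flag for review before deployment")
```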
There is also a need to raise the standard for human decision-making in order to minimize the risk of human bias. Silberg and Manyika (2019) suggest rethinking the standards used to evaluate whether human decisions are fair and when they increase the risk of bias. One way to increase the credibility of human decision-making is to use AI to examine human biases: running algorithms alongside human decision-makers, comparing results, and identifying potential explanations for the differences (Silberg & Manyika, 2019). In this regard, if an organization discovers that an algorithm trained on human decisions exhibits bias, it should not simply discontinue the algorithm, but also work on changing the underlying human behaviors (Silberg & Manyika, 2019). This ensures that human systems are held accountable in the same way as AI systems.
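A minimal sketch of this side-by-side comparison, using invented decision records, might look as follows.

```python
# Hypothetical sketch of running an algorithm alongside human
# decision-makers and comparing per-group outcomes, as the text
# suggests. The decision records are invented for illustration.

# each record: (group, human_decision, model_decision)
records = [
    ("A", True, True), ("A", True, True), ("A", False, True),
    ("A", True, True), ("B", False, False), ("B", True, False),
    ("B", False, False), ("B", False, False),
]

def approval_rates(idx: int) -> dict[str, float]:
    """Per-group approval rate for one decision source
    (idx=1 human, idx=2 model)."""
    by_group: dict[str, list[bool]] = {}
    for rec in records:
        by_group.setdefault(rec[0], []).append(rec[idx])
    return {g: sum(v) / len(v) for g, v in by_group.items()}

human, model = approval_rates(1), approval_rates(2)
for g in sorted(human):
    print(f"group {g}: human={human[g]:.2f} model={model[g]:.2f}")
# Divergences between the two columns point to where either the
# humans or the model may be introducing bias, prompting a closer
# look at the underlying behavior rather than just the algorithm.
```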
References
Lee, N., Resnick, P., & Barton, G. (2019). Algorithmic Bias Detection and Mitigation: Best Practices and Policies to Reduce Consumer Harms. Brookings Institution. Retrieved from https://www.brookings.edu/research/algorithmic-bias-detection-and-mitigation-best-practices-and-policies-to-reduce-consumer-harms/
Silberg, J., & Manyika, J. (2019). Notes from the AI Frontier: Tackling Bias in AI (and in Humans). McKinsey Global Institute. Retrieved from https://www.mckinsey.com/featured-insights/artificial-intelligence/tackling-bias-in-artificial-intelligence-and-in-humans
Zavrsnik, A. (2020). Criminal Justice, Artificial Intelligence Systems and Human Rights. ERA Forum, 20(1), 567–583.