Contributions to this post were provided by Alexandra Chouldechova (Carnegie Mellon University), Sampath Kannan (University of Pennsylvania), and Aaron Roth (University of Pennsylvania).
The Computing Community Consortium held a workshop on Fair Representations and Fair Interactive Learning in 2018, led by Aaron Roth from the University of Pennsylvania and Alexandra Chouldechova from Carnegie Mellon University. A group of 50 industry, academic, and government experts convened in Philadelphia to explore the roots of algorithmic bias. The workshop report was highlighted on the front page of the May 2020 issue of CACM, which includes a snapshot of the report along with interviews of both Roth and Chouldechova.
We tend to believe that algorithmic systems, by virtue of their automated, data-driven behavior, are superior to human decision making. Roth and Chouldechova discussed why this is an inappropriate leap in reasoning. While some have voiced concern about the use of automated decision systems in sensitive domains such as criminal justice and hiring, by and large we tend to trust such systems. Rather than assuming these systems are trustworthy, however, we should first analyze them rigorously: many deployed systems have been shown to exhibit bias. Understanding the various ways that bias can creep into the machine learning pipeline, even without any malicious intent, is an important challenge. With such an understanding, one may hope to design algorithms that do not have such biases, and to detect and correct bias in deployed systems when it does occur.
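To make the idea of detecting bias in a deployed system concrete, here is a minimal sketch, not taken from the workshop report, of one simple audit: measuring the demographic parity gap, the difference in positive-decision rates between groups. The group labels, example data, and the 0.1 tolerance are illustrative assumptions only; real audits would use domain-appropriate fairness metrics and data.

```python
# Minimal sketch of auditing a deployed classifier for one notion of bias:
# demographic parity (equal positive-decision rates across groups).
# All data, group labels, and the 0.1 threshold are illustrative assumptions.

import numpy as np

def demographic_parity_gap(predictions: np.ndarray, groups: np.ndarray) -> float:
    """Return the largest difference in positive-prediction rates between groups."""
    rates = [predictions[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

# Hypothetical binary decisions from a deployed model, with a protected attribute.
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

gap = demographic_parity_gap(preds, group)
print(f"Demographic parity gap: {gap:.2f}")
if gap > 0.10:  # illustrative tolerance, not a recommended standard
    print("Warning: decision rates differ substantially across groups.")
```

Demographic parity is only one of many competing fairness criteria; the workshop report discusses why no single metric suffices in every context.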
One way of creating less discriminatory systems is to ensure that they are continually monitored and remain under human oversight. Such hybrid socio-technical systems can potentially improve both the process and the outcomes, minimizing harms or at least distributing them evenly across different populations. These outcomes will depend both on the properties of the algorithms and on the humans who take that information (however it is presented) and incorporate it into the decision-making process.
This is a challenging problem, but one that Roth and Chouldechova think will benefit from collaboration between computer scientists and economists and social scientists, who have long experience thinking about the equilibrium effects of policy interventions.
One of several ideas for creating an ecosystem that fosters ethical systems is to require that all computer scientists take a class on the ethics and social impact of computational technologies. (This is already a requirement at many universities.) As applications of AI are deployed at a rapid pace, with algorithms being used to make many of life's decisions, an important question is whether the ethics training of computer scientists needs to be broadened and deepened. This question came up at a Center for Technology Innovation discussion at Brookings with Michael Kearns and Aaron Roth of the University of Pennsylvania about their new book, “The Ethical Algorithm.”
Kearns and Roth agreed that while it is ideal for computer scientists to take an ethics class, doing so will not solve every potential problem, since many of the harms created by algorithms stem from unintended consequences. See their new book to learn more.
The CCC has done other work in the fairness space. In 2019, CCC hosted the Economics and Fairness workshop, organized by David Parkes from Harvard University and Rakesh Vohra from the University of Pennsylvania, which produced this workshop report. CCC also held a session at AAAS 2020 on New Approaches to Fairness in Automated Decision Making; see the summary blog to learn more.