Artificial Intelligence (AI) systems have been involved in numerous scandals in recent years. Take, for instance, the COMPAS recidivism algorithm, which estimated the likelihood that a defendant would commit another crime. It was widely used in the US criminal justice system to inform decisions about who could be set free at every stage of the process. In 2016, ProPublica reported that COMPAS's predictions were biased: its mistakes favored white over black defendants. Black defendants who did not go on to reoffend were nearly twice as likely as white defendants to have been labeled high risk, while white defendants who did reoffend were 1.67 times as likely to have been labeled low risk. Examples of AI systems that raise similar and other concerns abound. What can we do to mitigate the potential adverse effects of AI systems and harness their power for positive impact?
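The disparities ProPublica reported are differences in error rates across groups: the false positive rate (labeled high risk but did not reoffend) and the false negative rate (labeled low risk but did reoffend). A minimal sketch of how such rates can be computed, using entirely hypothetical toy data rather than the COMPAS dataset:

```python
def error_rates(labels, predictions):
    """Return (false_positive_rate, false_negative_rate).

    labels: 1 if the defendant actually reoffended, else 0.
    predictions: 1 if the model labeled them high risk, else 0.
    """
    fp = sum(1 for y, p in zip(labels, predictions) if y == 0 and p == 1)
    fn = sum(1 for y, p in zip(labels, predictions) if y == 1 and p == 0)
    negatives = sum(1 for y in labels if y == 0)
    positives = sum(1 for y in labels if y == 1)
    return fp / negatives, fn / positives

# Toy data for two groups (illustrative only): (labels, predictions)
group_a = ([0, 0, 1, 1, 0, 1], [1, 0, 1, 1, 1, 0])
group_b = ([0, 0, 1, 1, 0, 1], [0, 0, 1, 0, 0, 0])

fpr_a, fnr_a = error_rates(*group_a)
fpr_b, fnr_b = error_rates(*group_b)
# A large gap between fpr_a and fpr_b (or fnr_a and fnr_b) is the kind of
# disparity ProPublica found: errors distributed unevenly across groups.
```

Comparing these per-group rates, rather than overall accuracy alone, is what reveals that a model's mistakes can systematically disadvantage one group.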
A reasonable first step is to articulate the harms we wish to avoid and the positive impacts we hope to attain. Many organizations engage in such activities, and the typical result is a set of principles, often called “AI ethics principles”. For example, Google states that it believes that AI systems should: (1) Be socially beneficial; (2) Avoid creating or reinforcing unfair bias; (3) Be built and tested for safety; (4) Be accountable to people; (5) Incorporate privacy design principles; (6) Uphold high standards of scientific excellence; and (7) Be made available for uses that accord with these principles.