Everything Is Not Terminator: Value-Based Regulation of Artificial Intelligence

John F. Weaver
Director, Corporate Department; Chair, Real Estate Practice Group; Chair, Artificial Intelligence Practice
Published: Journal of Robotics, Artificial Intelligence & Law
April 5, 2019

Last fall, Reuters reported that Amazon had developed a hiring tool that used artificial intelligence to review job candidates and make hiring decisions, but that the program discriminated against women. Although Amazon ultimately abandoned the AI application as a mechanism to autonomously hire staff, the program represented one of the worst-case scenarios for artificial intelligence: inherent bias or discriminatory preferences baked into the AI, tainting every decision and analysis it performed. This problem is not uncommon. A 2016 analysis of AI risk assessment software used to predict the probability that a criminal defendant will re-offend revealed that the software disproportionately identified white defendants as lower risk than black defendants, even when the white defendants’ criminal histories indicated a higher likelihood of re-offending. Similarly, researchers have expressed concern that AI used to review loan applications will impermissibly rely on race by drawing connections between geographic information (which is relevant to the lender’s decision) and the ethnic background of the people known to live there (which is not).

Compounding the potential for discriminatory action is the “black box” problem: companies that develop AI programs are typically reluctant to let consumers and regulators review their code, resulting in an algorithmic black box in which decisions are made, but no one knows how or why.
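The loan-application example turns on a simple mechanism: a variable the model may permissibly use (geography) can stand in for one it may not (race). What follows is a minimal, hypothetical Python sketch of that proxy effect, built entirely on invented synthetic data; the population size, correlation rates, and score thresholds are assumptions chosen for clarity, not figures from any real lender’s model.

# Hypothetical sketch of proxy discrimination using only synthetic data
# and the Python standard library; every name and number is invented.
import random

random.seed(0)

# Synthetic applicants: each has a credit score and a ZIP-code group.
# In this toy population, ZIP group correlates with membership in a
# protected class, so ZIP code acts as a proxy for that class.
applicants = []
for _ in range(10_000):
    zip_group = random.choice([0, 1])
    protected = random.random() < (0.8 if zip_group == 1 else 0.2)
    credit_score = random.gauss(680, 50)
    applicants.append((zip_group, protected, credit_score))

# A naive approval rule: approve on credit score, but hold ZIP group 1
# to a stricter threshold, mimicking a pattern a model might learn
# from historically biased lending decisions.
def approve(zip_group, credit_score):
    threshold = 700 if zip_group == 1 else 660
    return credit_score >= threshold

# The rule never sees the protected attribute, yet approval rates
# differ across the protected class via the ZIP-code proxy.
for cls in (False, True):
    group = [a for a in applicants if a[1] == cls]
    rate = sum(approve(a[0], a[2]) for a in group) / len(group)
    print(f"protected={cls}: approval rate = {rate:.2%}")

Even though the protected attribute never appears as an input to the approval rule, the printed approval rates diverge across the protected class, because ZIP code carries that information, which is precisely the concern researchers have raised.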

To read the full article, please click here.