High-Stakes AI Decisions Need to Be Automatically Audited

WIRED | 7/18/2019 | Oren Etzioni, Michael Li

Today’s AI systems make weighty decisions regarding loans, medical diagnoses, parole, and more. They're also opaque systems, which makes them susceptible to bias. In the absence of transparency, we will never know why a 41-year-old white male and an 18-year-old black woman who commit similar crimes are assessed as “low risk” versus “high risk” by AI software.

Oren Etzioni is CEO of the Allen Institute for Artificial Intelligence and a professor in the Allen School of Computer Science at the University of Washington. Tianhui Michael Li is founder and president of Pragmatic Data, a data science and AI training company. He formerly headed monetization data science at Foursquare and has worked at Google, Andreessen Horowitz, J.P. Morgan, and D.E. Shaw.


For both business and technical reasons, automatically generated, high-fidelity explanations of most AI decisions are not currently possible. That's why we should be pushing for the external audit of AI systems responsible for high-stakes decision making. Automated auditing, at a massive scale, can systematically probe AI systems and uncover biases or other undesirable behavior patterns.
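
To make the idea of automated auditing concrete, here is a minimal sketch (not from the article) of one common black-box probing strategy: feed the system matched pairs of inputs that differ only in a protected attribute and measure the average shift in its output. Everything here is hypothetical — score_model is a synthetic stand-in for the opaque system under audit, and the applicant fields are invented for illustration.

    # A minimal sketch of paired probing, one possible automated-audit
    # strategy. score_model is a synthetic placeholder; a real audit
    # would call the deployed system's API instead.
    import random
    import statistics

    def score_model(applicant):
        # Hypothetical opaque risk model. The deliberate race-dependent
        # term simulates the kind of bias an audit should surface.
        base = 0.3 + 0.005 * applicant["age"] + 0.05 * applicant["priors"]
        return min(base + (0.15 if applicant["race"] == "black" else 0.0), 1.0)

    def paired_probe(n_pairs=1000):
        """Probe with matched pairs that differ only in race."""
        gaps = []
        for _ in range(n_pairs):
            profile = {"age": random.randint(18, 70),
                       "priors": random.randint(0, 5)}
            white = {**profile, "race": "white"}
            black = {**profile, "race": "black"}
            gaps.append(score_model(black) - score_model(white))
        return statistics.mean(gaps)

    if __name__ == "__main__":
        print(f"Mean risk-score gap attributable to race: {paired_probe():+.3f}")

Because the probe only needs query access, an external auditor could run it at scale against a deployed system without ever seeing the proprietary model internals.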

One of the most notorious instances of black-box AI bias is software used in judicial systems across the country to recommend sentencing, bond amounts, and more. ProPublica’s analysis of one of the most widely used recidivism algorithms for parole decisions uncovered potentially significant bias and inaccuracy. When probed for more information, the creator would not share specifics of their proprietary algorithm. Such secrecy makes it difficult for defendants to challenge these decisions in court.
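
The core of an audit like ProPublica's is a comparison of error rates across demographic groups — in particular, how often people who did not reoffend were nonetheless flagged as high risk. The sketch below illustrates that false-positive-rate comparison on synthetic placeholder records; it is a simplified reconstruction of the general method, not ProPublica's actual code or data.

    # A minimal sketch of a group-wise false-positive-rate audit.
    # The records are synthetic placeholders, not real case data.

    def false_positive_rate(records, group):
        """Share of non-reoffenders in `group` flagged as high risk."""
        innocent = [r for r in records
                    if r["group"] == group and not r["reoffended"]]
        flagged = [r for r in innocent if r["predicted_high_risk"]]
        return len(flagged) / len(innocent) if innocent else float("nan")

    records = [
        {"group": "black", "reoffended": False, "predicted_high_risk": True},
        {"group": "black", "reoffended": False, "predicted_high_risk": False},
        {"group": "white", "reoffended": False, "predicted_high_risk": False},
        {"group": "white", "reoffended": False, "predicted_high_risk": True},
    ]

    for g in ("black", "white"):
        print(f"{g}: FPR = {false_positive_rate(records, g):.2f}")

A large gap between the two rates is exactly the kind of pattern an external audit can expose even when the vendor refuses to disclose how the algorithm works.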


AI bias has been reported in numerous other contexts, from a cringeworthy bot that tells Asians to “open their eyes” in passport photos to facial recognition systems that are less accurate in identifying dark-skinned and female faces to AI recruiting tools that discriminate against women.

In response, regulators have sought to mandate transparency through so-called "explainable...
(Excerpt) Read more at: WIRED