Machine Learning Times

How to Make Artificial Intelligence Less Biased
Originally published in The Wall Street Journal, Nov 3, 2020.

AI systems can unfairly penalize certain segments of the population—especially women and minorities. Researchers and tech companies are figuring out how to address that.

As artificial intelligence spreads into more areas of public and private life, one thing has become abundantly clear: It can be just as biased as we are.

AI systems have been shown to be less accurate at identifying the faces of dark-skinned women, to give women lower credit-card limits than their husbands, and to be more likely to incorrectly predict that Black defendants will commit future crimes than white defendants. Racial and gender bias has been found in job-search ads, in software for predicting health risks, and in searches for images of CEOs.

How could this be? How could software designed to take the bias out of decision making, to be as objective as possible, produce these kinds of outcomes? After all, the purpose of artificial intelligence is to take millions of pieces of data and from them make predictions that are as error-free as possible.

But as AI has become more pervasive—as companies and government agencies use AI to decide who gets loans, who needs more health care, how to deploy police officers, and more—investigators have discovered that focusing only on making the final predictions as error-free as possible can leave those errors distributed unequally. Instead, a system’s predictions can often reflect and exaggerate the effects of past discrimination and prejudice.

In other words, the more an AI system focused on getting only the big picture right, the more prone it was to being less accurate for certain segments of the population—in particular women and minorities. And the impact of this bias can be devastating for swaths of the population—for instance, denying loans to creditworthy women much more frequently than to creditworthy men.
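The arithmetic behind this point is easy to see with a toy example (the numbers below are invented for illustration, not taken from the article): a model can post a respectable overall error rate while its mistakes land disproportionately on one group.

```python
# Illustrative sketch with made-up data: overall accuracy can mask
# unequal error rates across groups.
# Each record is (group, true_label, predicted_label); 1 = creditworthy.
records = [
    # Group A: 8 creditworthy applicants, 1 wrongly denied
    *[("A", 1, 1)] * 7, ("A", 1, 0),
    # Group B: 8 creditworthy applicants, 3 wrongly denied
    *[("B", 1, 1)] * 5, *[("B", 1, 0)] * 3,
]

def error_rate(rows):
    """Fraction of rows where the prediction disagrees with the truth."""
    errors = sum(1 for _, y, y_hat in rows if y != y_hat)
    return errors / len(rows)

overall = error_rate(records)
per_group = {g: error_rate([r for r in records if r[0] == g])
             for g in ("A", "B")}

print(f"overall error rate: {overall:.3f}")   # 4/16 = 0.250
print(f"per-group error rates: {per_group}")  # A: 0.125, B: 0.375
```

A model tuned only to minimize the overall 0.250 figure has no incentive to close the threefold gap between groups A and B—which is why fairness auditing typically reports error rates broken out by group rather than a single aggregate number.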

To continue reading this article, click here.
