Machine Learning Times

How to Make Artificial Intelligence Less Biased
Originally published in The Wall Street Journal, Nov 3, 2020.

AI systems can unfairly penalize certain segments of the population—especially women and minorities. Researchers and tech companies are figuring out how to address that.

As artificial intelligence spreads into more areas of public and private life, one thing has become abundantly clear: It can be just as biased as we are.

AI systems have been shown to be less accurate at identifying the faces of dark-skinned women, to give women lower credit-card limits than their husbands, and to be more likely to incorrectly predict that Black defendants will commit future crimes than that white defendants will. Racial and gender bias has been found in job-search ads, in software for predicting health risks, and in searches for images of CEOs.

How could this be? How could software designed to take the bias out of decision making, to be as objective as possible, produce these kinds of outcomes? After all, the purpose of artificial intelligence is to take millions of pieces of data and from them make predictions that are as error-free as possible.

But as AI has become more pervasive—as companies and government agencies use it to decide who gets loans, who needs more health care, how to deploy police officers, and more—investigators have discovered that focusing just on making the final predictions as error-free as possible can mean that those errors aren’t distributed equally. Instead, the predictions can often reflect and exaggerate the effects of past discrimination and prejudice.

In other words, the more AI focused on getting only the big picture right, the more prone it was to being less accurate for certain segments of the population—in particular, women and minorities. And the impact of this bias can be devastating for wide swaths of the population—for instance, denying loans to creditworthy women far more often than to creditworthy men.
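
To see how that can happen, consider a minimal, hypothetical sketch in Python (not from the article; the group labels, counts, and model decisions below are invented for illustration). It shows how a lending model can post a respectable overall error rate while denying creditworthy applicants in one group far more often than in another.

```python
# Hypothetical illustration: an aggregate error rate can hide very
# unequal errors across groups. All data below is invented.

def error_rates(records):
    """Compute the overall error rate and per-group false-negative rates.

    Each record is (group, creditworthy, approved): an applicant's group,
    whether they were actually creditworthy, and the model's decision.
    """
    overall_errors = sum(1 for _, truth, pred in records if truth != pred)
    overall = overall_errors / len(records)

    per_group = {}
    for group in {g for g, _, _ in records}:
        # False negatives: creditworthy applicants the model denied.
        creditworthy = [(t, p) for g, t, p in records if g == group and t]
        denied = sum(1 for _, p in creditworthy if not p)
        per_group[group] = denied / len(creditworthy) if creditworthy else 0.0
    return overall, per_group

# Invented example: the model's overall error rate is a modest 15%, yet it
# denies creditworthy applicants in group B almost five times as often as
# creditworthy applicants in group A.
records = (
    [("A", True, True)] * 85 + [("A", True, False)] * 5 + [("A", False, False)] * 10
    + [("B", True, True)] * 60 + [("B", True, False)] * 25 + [("B", False, False)] * 15
)
overall, per_group = error_rates(records)
print(f"overall error rate: {overall:.2f}")            # 0.15
for g, rate in sorted(per_group.items()):
    print(f"group {g} false-negative rate: {rate:.2f}")  # A: 0.06, B: 0.29
```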

To continue reading this article, click here.
