Machine Learning Times
How to Make Artificial Intelligence Less Biased

Originally published in The Wall Street Journal, Nov 3, 2020.

AI systems can unfairly penalize certain segments of the population—especially women and minorities. Researchers and tech companies are figuring out how to address that.

As artificial intelligence spreads into more areas of public and private life, one thing has become abundantly clear: It can be just as biased as we are.

AI systems have been shown to be less accurate at identifying the faces of dark-skinned women, to give women lower credit-card limits than their husbands, and to be more likely to incorrectly predict that Black defendants will commit future crimes than that white defendants will. Racial and gender bias has been found in job-search ads, in software for predicting health risks, and in image searches for CEOs.

How could this be? How could software designed to take the bias out of decision making, to be as objective as possible, produce these kinds of outcomes? After all, the purpose of artificial intelligence is to take millions of pieces of data and from them make predictions that are as error-free as possible.

But as AI has become more pervasive—as companies and government agencies use it to decide who gets loans, who needs more health care and how to deploy police officers—investigators have discovered that focusing solely on making the final predictions as error-free as possible can mean that those errors aren’t always distributed equally. Instead, the predictions can often reflect and exaggerate the effects of past discrimination and prejudice.

In other words, the more AI focused on getting only the big picture right, the more it was prone to being less accurate when it came to certain segments of the population—in particular women and minorities. And the impact of this bias can be devastating on swaths of the population—for instance, denying loans to creditworthy women much more frequently than denying loans to creditworthy men.
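The effect described above can be made concrete with a small, purely illustrative sketch (synthetic data and hypothetical numbers, not from the article): a lending model can look reasonable on overall accuracy while denying creditworthy applicants in one group far more often than in another.

```python
# Synthetic illustration: overall accuracy can mask unequal error rates
# across groups. Each record is (group, truly_creditworthy, model_approved).
records = [
    # Group A: 8 creditworthy applicants, 1 wrongly denied; 2 correct denials
    *[("A", True, True)] * 7, ("A", True, False),
    *[("A", False, False)] * 2,
    # Group B: 8 creditworthy applicants, 4 wrongly denied; 2 correct denials
    *[("B", True, True)] * 4, *[("B", True, False)] * 4,
    *[("B", False, False)] * 2,
]

def accuracy(rows):
    # Fraction of records where the model's decision matches the truth.
    return sum(truth == pred for _, truth, pred in rows) / len(rows)

def false_denial_rate(rows):
    # Share of truly creditworthy applicants the model denies.
    creditworthy = [(t, p) for _, t, p in rows if t]
    return sum(not p for _, p in creditworthy) / len(creditworthy)

print("overall accuracy:", accuracy(records))          # 0.75 across both groups
for g in ("A", "B"):
    group_rows = [r for r in records if r[0] == g]
    print(f"group {g} false-denial rate:", false_denial_rate(group_rows))
```

Here the single overall accuracy number (0.75) hides the fact that creditworthy applicants in group B are denied at four times the rate of those in group A (0.5 vs. 0.125), which is exactly why fairness audits disaggregate error rates by group rather than optimizing one aggregate metric.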

To continue reading this article, click here.
