Machine Learning Times
How to Make Artificial Intelligence Less Biased

Originally published in The Wall Street Journal, Nov 3, 2020.

AI systems can unfairly penalize certain segments of the population—especially women and minorities. Researchers and tech companies are figuring out how to address that.

As artificial intelligence spreads into more areas of public and private life, one thing has become abundantly clear: It can be just as biased as we are.

AI systems have been shown to be less accurate at identifying the faces of dark-skinned women, to give women lower credit-card limits than their husbands, and to be more likely to incorrectly predict that Black defendants will commit future crimes than whites. Racial and gender bias has been found in job-search ads, software for predicting health risks and searches for images of CEOs.

How could this be? How could software designed to take the bias out of decision making, to be as objective as possible, produce these kinds of outcomes? After all, the purpose of artificial intelligence is to take millions of pieces of data and from them make predictions that are as error-free as possible.

But as AI has become more pervasive—as companies and government agencies use AI to decide who gets loans, who needs more health care, how to deploy police officers and more—investigators have discovered that focusing only on making the final predictions as error-free as possible can mean that those errors aren’t distributed equally. Instead, the predictions can often reflect and exaggerate the effects of past discrimination and prejudice.

In other words, the more AI focused on getting only the big picture right, the more it was prone to being less accurate when it came to certain segments of the population—in particular women and minorities. And the impact of this bias can be devastating on swaths of the population—for instance, denying loans to creditworthy women much more frequently than denying loans to creditworthy men.

To continue reading this article, click here.
