Predictive Analytics Times
How Machine Learning Pushes Us to Define Fairness

 

Originally published in Harvard Business Review, November 6, 2019.

Bias is machine learning’s original sin. It’s embedded in machine learning’s essence: the system learns from data, and thus is prone to picking up the human biases that the data represents. For example, an ML hiring system trained on existing American employment is likely to “learn” that being a woman correlates poorly with being a CEO.

Cleaning the data so thoroughly that the system will discover no hidden, pernicious correlations can be extraordinarily difficult. Even with the greatest of care, an ML system might find biased patterns so subtle and complex that they hide from the best-intentioned human attention. Hence the necessary current focus among computer scientists, policy makers, and anyone concerned with social justice on how to keep bias out of AI.
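One reason such cleaning is so hard is that deleting the protected attribute is not enough: any feature correlated with it acts as a proxy. The following hedged sketch (all names and numbers are invented) shows the biased pattern surviving after the "gender" column is removed, because a correlated "zip" field carries the same signal.

```python
# Invented toy data: women cluster in zip "A", men in zip "B",
# and the historical hire rate differs by gender.
rows = (
    [{"zip": "A", "gender": "woman", "hired": False}] * 45 +
    [{"zip": "A", "gender": "woman", "hired": True}] * 5 +
    [{"zip": "B", "gender": "man", "hired": False}] * 30 +
    [{"zip": "B", "gender": "man", "hired": True}] * 20
)

# "Clean" the data by dropping the protected attribute entirely.
cleaned = [{k: v for k, v in r.items() if k != "gender"} for r in rows]

def hire_rate(records, zip_code):
    """Empirical hire rate for one zip code."""
    sub = [r["hired"] for r in records if r["zip"] == zip_code]
    return sum(sub) / len(sub)

# Zip "A" (where all the women in this toy set live) still shows the
# depressed hire rate: the bias hid inside a correlated feature.
rate_a = hire_rate(cleaned, "A")  # 0.1
rate_b = hire_rate(cleaned, "B")  # 0.4
```

In realistic data the proxies are rarely this obvious; they can be subtle combinations of many features, which is why even careful cleaning can miss them.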

Yet machine learning’s very nature may also be bringing us to think about fairness in new and productive ways. Our encounters with machine learning are beginning to give us concepts, a vocabulary, and tools that enable us to address questions of bias and fairness more directly and precisely than before.

We have long taken fairness as a moral primitive. If you ask someone for an example of unfairness, the odds are surprisingly high that they’ll talk about two children who receive different numbers of cookies. That’s clearly unfair, unless there is some relevant difference between them that justifies the disparity: one of the children is older and bigger, or agreed to do extra chores in return for a cookie, etc. In this simple formulation, fairness gets defined as the equal treatment of people unless there is some relevant distinction that justifies unequal treatment.

But what constitutes a “relevant distinction”? The fact is that we agree far more easily about what is unfair than what is fair. We may all agree that racial discrimination is wrong, yet sixty years later we’re still arguing about whether Affirmative Action is a fair remedy.

To continue reading this article, click here.
