Originally published in Harvard Business Review, November 6, 2019.
Bias is machine learning’s original sin. It’s embedded in the essence of machine learning (ML): the system learns from data, and thus is prone to picking up the human biases that the data represents. For example, an ML hiring system trained on data about existing American employment is likely to “learn” that being a woman correlates poorly with being a CEO.
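To make the mechanism concrete, here is a minimal sketch in Python using invented, hypothetical hiring records. The “model” is the simplest possible learner, per-group hiring rates estimated by counting, yet it faithfully reproduces whatever disparity the historical data contains:

```python
# A minimal sketch of how a model trained on historical outcomes absorbs
# the bias in those outcomes. The records below are invented for
# illustration; they are not drawn from any real dataset.
from collections import defaultdict

# Hypothetical historical records: (gender, hired)
records = [
    ("man", True), ("man", True), ("man", True), ("man", False),
    ("woman", True), ("woman", False), ("woman", False), ("woman", False),
]

def group_rates(data):
    """Estimate P(hired | gender) by simple counting."""
    counts = defaultdict(lambda: [0, 0])  # gender -> [hired, total]
    for gender, hired in data:
        counts[gender][0] += int(hired)
        counts[gender][1] += 1
    return {g: hired / total for g, (hired, total) in counts.items()}

rates = group_rates(records)
# The "model" now scores men higher than women: it has learned the
# historical disparity, not anything about merit.
print(rates)  # {'man': 0.75, 'woman': 0.25}
```

The same dynamic plays out, far less visibly, in a modern classifier with thousands of features: the disparity need not be attached to an explicit gender column, only to features that correlate with it.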
Cleaning the data so thoroughly that the system will discover no hidden, pernicious correlations can be extraordinarily difficult. Even with the greatest of care, an ML system might find biased patterns so subtle and complex that they hide from the best-intentioned human attention. Hence the current, and necessary, focus among computer scientists, policy makers, and anyone concerned with social justice on keeping bias out of AI.
Yet machine learning’s very nature may also be leading us to think about fairness in new and productive ways. Our encounters with ML are beginning to give us concepts, a vocabulary, and tools that let us address questions of bias and fairness more directly and precisely than before.
We have long taken fairness as a moral primitive. If you ask someone for an example of unfairness, the odds are surprisingly high that they’ll talk about two children who receive different numbers of cookies. That’s clearly unfair, unless there is some relevant difference between them that justifies the disparity: one of the children is older and bigger, or agreed to do extra chores in return for a cookie, etc. In this simple formulation, fairness gets defined as the equal treatment of people unless there is some relevant distinction that justifies unequal treatment.
But what constitutes a “relevant distinction”? The fact is that we agree far more easily about what is unfair than about what is fair. We may all agree that racial discrimination is wrong, yet sixty years later we’re still arguing about whether affirmative action is a fair remedy.