Machine Learning Times
An Algorithm That ‘Predicts’ Criminality Based on a Face Sparks a Furor

Originally published in Wired.com, June 24, 2020

Its creators said they could use facial analysis to determine if someone would become a criminal. Critics said the work recalled debunked “race science.”

In early May, a press release from Harrisburg University claimed that two professors and a graduate student had developed a facial-recognition program that could predict whether someone would be a criminal. The release said the paper would be published in a collection by Springer Nature, a big academic publisher.

With “80 percent accuracy and with no racial bias,” the paper, “A Deep Neural Network Model to Predict Criminality Using Image Processing,” claimed its algorithm could predict “if someone is a criminal based solely on a picture of their face.” The press release has since been deleted from the university website.

On Tuesday, more than 1,000 machine-learning researchers, sociologists, historians, and ethicists released a public letter condemning the paper, and Springer Nature confirmed on Twitter that it would not publish the research.

But the researchers say the problem doesn’t stop there. Signers of the letter, collectively calling themselves the Coalition for Critical Technology (CCT), said the paper’s claims “are based on unsound scientific premises, research, and methods which … have [been] debunked over the years.” The letter argues it is impossible to predict criminality without racial bias, “because the category of ‘criminality’ itself is racially biased.”

Advances in data science and machine learning have led to numerous algorithms in recent years that purport to predict crimes or criminality. But if the data used to build those algorithms is biased, the algorithms’ predictions will also be biased. Because of the racially skewed nature of policing in the US, the letter argues, any predictive algorithm modeling criminality will only reproduce the biases already reflected in the criminal justice system.
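The mechanism the letter describes can be illustrated with a toy simulation (the group names, rates, and uniform policing model below are hypothetical, chosen only to make the point concrete): if two populations behave identically but one is policed more heavily, a model fit on arrest records will "predict" more criminality for the heavily policed group, because the labels encode policing intensity, not behavior.

```python
import random

random.seed(0)

# Hypothetical setup: groups A and B have the SAME underlying offense
# rate, but group B is policed twice as heavily, so offenses there are
# twice as likely to produce an arrest record.
TRUE_OFFENSE_RATE = 0.10            # identical for both groups
RECORD_PROB = {"A": 0.3, "B": 0.6}  # chance an offense becomes a record

def observed_arrest_rate(group, n=100_000):
    """Fraction of people in `group` who end up with an arrest record."""
    arrests = sum(
        1
        for _ in range(n)
        if random.random() < TRUE_OFFENSE_RATE     # person offends
        and random.random() < RECORD_PROB[group]   # offense is recorded
    )
    return arrests / n

# The "model" here is just the per-group arrest frequency -- the signal
# any classifier trained on these labels would ultimately recover.
predicted_criminality = {g: observed_arrest_rate(g) for g in ("A", "B")}
print(predicted_criminality)
```

Despite identical true behavior, the estimated "criminality" for group B comes out roughly double that of group A, which is exactly the feedback loop the Coalition for Critical Technology warns about: the disparity in the data is reproduced as a disparity in the predictions.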

To continue reading this article, click here.
