Machine Learning Times
6 years ago
Blatantly Discriminatory Machines: When Algorithms Explicitly Penalize

Originally published in The San Francisco Chronicle (the cover article of Sunday's "Insight" section). What if the data tells you to be racist? Without the right precautions, machine learning — the technology that drives risk assessment in law enforcement, as well as hiring and loan decisions — explicitly penalizes underprivileged groups. Left to its own devices, the algorithm will count a black defendant's race as a strike against them. Yet some data scientists are calling to turn off the safeguards and unleash computerized prejudice, signaling an emerging threat that supersedes the well-known concerns about inadvertent machine bias. Imagine sitting…
