Machine Learning Times
Blatantly Discriminatory Machines: When Algorithms Explicitly Penalize

Originally published in The San Francisco Chronicle (the cover article of Sunday’s “Insight” section). What if the data tells you to be racist? Without the right precautions, machine learning — the technology that drives risk assessment in law enforcement, as well as hiring and loan decisions — explicitly penalizes underprivileged groups. Left to its own devices, the algorithm will count a black defendant’s race as a strike against them. Yet some data scientists are calling to turn off the safeguards and unleash computerized prejudice, signaling an emerging threat that supersedes the well-known concerns about inadvertent machine bias. Imagine sitting
