Machine Learning Times
Sharing Learnings From the First Algorithmic Bias Bounty Challenge

 
Originally posted on Twitter Engineering, Sept 21, 2021.

In October 2020, people on Twitter raised concerns that the saliency model we used to crop images didn’t serve all people equitably. Shortly thereafter, we published our algorithmic bias assessment, which confirmed that the model was not treating all people fairly. We committed to decreasing our reliance on ML-based image cropping, since the decision to crop an image is best made by people. In May 2021, we began rolling out changes to how images appear on Twitter. Now, standard-aspect-ratio photos appear uncropped on the Home Timeline on mobile, and we are working on further improvements that build on this initial effort.
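
For readers unfamiliar with how such a cropper works: given a saliency map predicted by a model, the crop is typically chosen so that the most salient point sits as close to the center of the target window as the image borders allow. The sketch below illustrates that idea only; it is not Twitter’s production model or code, and the function name, the NumPy-array interface, and the single-argmax heuristic are all assumptions for illustration.

```python
import numpy as np

def crop_to_aspect(image: np.ndarray, saliency: np.ndarray,
                   target_aspect: float) -> np.ndarray:
    """Crop `image` to `target_aspect` (width / height), keeping the
    most salient pixel as close to the crop's center as borders allow.

    `saliency` is a per-pixel score map with the same height and width
    as `image`, e.g. the output of a trained saliency model.
    (Illustrative sketch only, not Twitter's production cropper.)
    """
    h, w = image.shape[:2]
    # Locate the single most salient pixel (row y, column x).
    y, x = np.unravel_index(np.argmax(saliency), saliency.shape)

    if w / h > target_aspect:
        # Image is wider than the target: keep full height and
        # slide a fixed-width window horizontally toward x.
        crop_w = int(h * target_aspect)
        left = min(max(x - crop_w // 2, 0), w - crop_w)
        return image[:, left:left + crop_w]
    else:
        # Image is taller than the target: keep full width and
        # slide a fixed-height window vertically toward y.
        crop_h = int(w / target_aspect)
        top = min(max(y - crop_h // 2, 0), h - crop_h)
        return image[top:top + crop_h]
```

The bias concern arises upstream of this geometry: if the saliency model systematically scores some people’s faces higher than others, the argmax, and therefore the crop, inherits that skew.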

But our work didn’t stop there. In August 2021, we held the first algorithmic bias bounty challenge and invited the ethical AI hacker community to take apart our algorithm and identify additional bias and other potential harms within it. Their findings confirmed our hypothesis: we can’t solve these challenges alone, and our understanding of bias in AI improves when diverse voices are able to contribute to the conversation.

In this post, we’ll share what we learned from creating and hosting this challenge, what the submissions taught us, and what’s next. We believe it’s critical that we start a dialogue and encourage community-led, proactive surfacing and mitigation of algorithmic harms before they reach the public. We are hopeful that bias bounties can become an important tool for companies to solicit feedback and understand potential problems.

To continue reading this article, click here.
