Machine Learning Times
Sharing Learnings About Our Image Cropping Algorithm

 

In October 2020, we heard feedback from people on Twitter that our image cropping algorithm didn’t serve all people equitably. As part of our commitment to address the issue, we shared that we would analyze our model again for bias. Over the last several months, our teams have accelerated improvements to how we assess algorithms for potential bias and deepened our understanding of whether ML is always the best solution to the problem at hand. Today, we’re sharing the outcomes of our bias assessment and a link for those interested in reading and reproducing our analysis in more technical detail.

The analysis of our image cropping algorithm was a collaborative effort with Kyra Yee and Tao Tantipongpipat from our ML Ethics, Transparency, and Accountability (META) team and Shubhanshu Mishra from our Content Understanding Research team, which specializes in improving our ML models for various types of content in Tweets. In our research, we tested our model for gender- and race-based biases and considered whether our model aligned with our goal of enabling people to make their own choices on our platform.

How does a saliency algorithm work and where might harms arise? 

Twitter started using a saliency algorithm in 2018 to crop images. We did this to improve consistency in the size of photos in your timeline and to let you see more Tweets at a glance. The saliency algorithm works by estimating what a person might want to see first within a picture, so that our system can determine how to crop an image to an easily viewable size. Saliency models are trained on human eye-tracking data, as a way of prioritizing what’s likely to be most important to the most people. The algorithm predicts a saliency score for every region of the image and chooses the point with the highest score as the center of the crop.
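The final cropping step described above can be sketched in a few lines: given a saliency map, take the highest-scoring point as the crop center and clamp the crop window to the image bounds. This is a minimal illustration, not Twitter's production code; the function name and the assumption that the saliency map is a 2D array matching the image's spatial shape are ours.

```python
import numpy as np

def crop_around_salient_point(image, saliency, crop_h, crop_w):
    """Crop `image` to (crop_h, crop_w), centered on the highest-scoring
    point of `saliency`, clamped so the crop stays inside the image."""
    # Locate the most salient pixel.
    y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
    h, w = image.shape[:2]
    # Center the crop on (y, x), then clamp it to the image borders.
    top = min(max(y - crop_h // 2, 0), h - crop_h)
    left = min(max(x - crop_w // 2, 0), w - crop_w)
    return image[top:top + crop_h, left:left + crop_w]

image = np.arange(100).reshape(10, 10)          # stand-in 10x10 image
saliency = np.zeros((10, 10))
saliency[2, 3] = 1.0                            # most salient point
crop = crop_around_salient_point(image, saliency, 4, 4)
```

Because the crop is always centered on a single argmax point, any systematic skew in which faces or regions the model scores highest translates directly into who gets cropped out, which is exactly the kind of harm the bias analysis examines.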

