Machine Learning Times

The AI “Gaydar” Study and the Real Dangers of Big Data


Originally published in The New Yorker

Editor’s note: Keep in mind that the high “accuracies,” such as 81%, reported up front by this research are misleading. The model is effective, with a lift of around 7 at the top 10%, but the “accuracy” figure is computed over a hypothetical sample that is 50/50 positive/negative, which does not reflect the general population (and assembling such a sample would itself require another, non-existent model in the first place). Our intent in including coverage of this work, however, is to share the ethical ramifications rather than the technical performance.
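To make that distinction concrete, here is a minimal sketch, using entirely synthetic scores and an assumed base rate rather than the study’s data, of why forced-choice accuracy on a balanced 50/50 sample can look impressive while the more operationally meaningful number is lift: how much more prevalent true positives are among the model’s top-scoring 10% than in the population at large.

```python
# Minimal sketch with synthetic data and an assumed 7% base rate (hypothetical
# numbers, not the study's data): balanced-pair "accuracy" vs. lift at the top 10%.
import numpy as np

rng = np.random.default_rng(0)

n = 100_000
base_rate = 0.07                      # assumed prevalence in the general population
y = rng.random(n) < base_rate         # True = positive case
# Synthetic classifier scores: positives score higher on average.
scores = rng.normal(loc=np.where(y, 1.0, 0.0), scale=1.0)

# (1) Forced-choice accuracy on balanced pairs (one positive vs. one negative),
# the kind of figure reported as "81% accuracy"; it ignores the base rate entirely.
pos, neg = scores[y], scores[~y]
pairs = 50_000
balanced_accuracy = np.mean(rng.choice(pos, pairs) > rng.choice(neg, pairs))

# (2) Lift at 10%: precision among the top-scoring decile divided by the base rate,
# i.e., how much better targeting the top 10% is than picking people at random.
top_decile = np.argsort(scores)[-n // 10:]
precision_top = y[top_decile].mean()
lift_at_10 = precision_top / y.mean()

print(f"balanced-pair accuracy: {balanced_accuracy:.2f}")
print(f"precision in top 10%:   {precision_top:.2f}")
print(f"lift at 10%:            {lift_at_10:.1f}")
```

The exact numbers depend on the assumed separation and base rate, but the pattern is the point: the pairwise figure can be flattering even when precision over the real population is modest, which is why lift is the more honest summary of practical performance.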

Every face does not tell a story; it tells thousands of them. Over evolutionary time, the human brain has become an exceptional reader of the human face—computerlike, we like to think. A viewer instinctively knows the difference between a real smile and a fake one. In July, a Canadian study reported that college students can reliably tell if people are richer or poorer than average simply by looking at their expressionless faces. Scotland Yard employs a team of “super-recognizers” who can, from a pixelated photo, identify a suspect they may have seen briefly years earlier or come across in a mug shot. But, being human, we are also inventing machines that read faces as well as or better than we can. In the twenty-first century, the face is a database, a dynamic bank of information points—muscle configurations, childhood scars, barely perceptible flares of the nostril—that together speak to what you feel and who you are. Facial-recognition technology is being tested in airports around the world, matching camera footage against visa photos. Churches use it to document worshipper attendance. China has gone all in on the technology, employing it to identify jaywalkers, offer menu suggestions at KFC, and prevent the theft of toilet paper from public restrooms.

“The face is an observable proxy for a wide range of factors, like your life history, your development factors, whether you’re healthy,” Michal Kosinski, an organizational psychologist at the Stanford Graduate School of Business, told the Guardian earlier this week. The photo of Kosinski accompanying the interview showed the face of a man beleaguered. Several days earlier, Kosinski and a colleague, Yilun Wang, had reported the results of a study, to be published in the Journal of Personality and Social Psychology, suggesting that facial-recognition software could correctly identify an individual’s sexuality with uncanny accuracy. The researchers culled tens of thousands of photos from an online-dating site, then used an off-the-shelf computer model to extract users’ facial characteristics—both transient ones, like eye makeup and hair color, and more fixed ones, like jaw shape. Then they fed the data into their own model, which classified users by their apparent sexuality. When shown two photos, one of a gay man and one of a straight man, Kosinski and Wang’s model could distinguish between them eighty-one per cent of the time; for women, its accuracy dropped slightly, to seventy-one per cent. Human viewers fared substantially worse. They correctly picked the gay man sixty-one per cent of the time and the gay woman fifty-four per cent of the time. “Gaydar,” it appeared, was little better than a random guess.

The study immediately drew fire from two leading L.G.B.T.Q. groups, the Human Rights Campaign and GLAAD, for “wrongfully suggesting that artificial intelligence (AI) can be used to detect sexual orientation.” They offered a list of complaints, which the researchers rebutted point by point. Yes, the study was in fact peer-reviewed. No, contrary to criticism, the study did not assume that there was no difference between a person’s sexual orientation and his or her sexual identity; some people might indeed identify as straight but act on same-sex attraction. “We assumed that there was a correlation . . . in that people who said they were looking for partners of the same gender were homosexual,” Kosinski and Wang wrote. True, the study consisted entirely of white faces, but only because the dating site had served up too few faces of color to provide for meaningful analysis. And that didn’t diminish the point they were making—that existing, easily obtainable technology could effectively out a sizable portion of society. To the extent that Kosinski and Wang had an agenda, it appeared to be on the side of their critics. As they wrote in the paper’s abstract, “Given that companies and governments are increasingly using computer vision algorithms to detect people’s intimate traits, our findings expose a threat to the privacy and safety of gay men and women.”

CONTINUE READING: Access the complete article in The New Yorker, where it was originally published.

Author Bio:

Alan Burdick, a staff writer, joined The New Yorker in 2012, first as a senior editor and then also as the editor of Elements, newyorker.com’s science-and-tech blog. He worked previously as an editor at the Times Magazine, Discover, and OnEarth, and as a writer and producer at the American Museum of Natural History. He has written for magazines, including Harper’s and GQ, and is the author, most recently, of “Why Time Flies: A Mostly Scientific Investigation.” His previous book, “Out of Eden: An Odyssey of Ecological Invasion,” from 2005, was a finalist for the National Book Award and won the Overseas Press Club Award for environmental reporting.

