Machine Learning Times
Facebook Says Its New AI Can Identify More Problems Faster

Originally published in Wired, Dec 8, 2021.

The “Few-Shot Learner” system doesn’t need to see as many examples to identify troublesome posts, and it works in more than 100 languages.

A recent trove of documents leaked from Facebook demonstrated how the social network struggles to moderate dangerous content in places far from Silicon Valley. Internal discussions revealed worries that moderation algorithms for the languages spoken in Pakistan and Ethiopia were insufficient, and that the company lacked adequate training data to tune systems to different dialects of Arabic.

Meta Platforms, Facebook’s owner, now says it has deployed a new artificial intelligence moderation system for some tasks that can be adapted to new enforcement jobs more quickly than its predecessors because it requires much less training data. The company says the system, called Few-Shot Learner, works in more than 100 languages and can operate on images as well as text.
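Meta has not published implementation details for Few-Shot Learner, but the general idea behind few-shot classification can be sketched. The toy example below, assuming nothing about Meta's actual architecture, uses a nearest-centroid classifier: a fixed "embedding" (here a trivial bag-of-words stand-in for a pretrained multilingual encoder) lets a new moderation rule be enforced from a handful of labeled examples, instead of retraining on a large corpus. All function names and example texts are illustrative assumptions, not Meta's code.

```python
import math

def embed(text):
    # Stand-in for a pretrained multilingual encoder: a unit-normalized
    # bag-of-words vector (word -> weight). A real few-shot system would
    # use a learned embedding model here.
    counts = {}
    for word in text.lower().split():
        counts[word] = counts.get(word, 0.0) + 1.0
    norm = math.sqrt(sum(v * v for v in counts.values())) or 1.0
    return {w: v / norm for w, v in counts.items()}

def centroid(examples):
    # Sum the embeddings of a handful of labeled examples, then normalize.
    total = {}
    for vec in (embed(t) for t in examples):
        for w, x in vec.items():
            total[w] = total.get(w, 0.0) + x
    norm = math.sqrt(sum(v * v for v in total.values())) or 1.0
    return {w: v / norm for w, v in total.items()}

def classify(text, centroids):
    # Cosine similarity to each class centroid; pick the closest class.
    v = embed(text)
    def sim(label):
        return sum(x * centroids[label].get(w, 0.0) for w, x in v.items())
    return max(centroids, key=sim)

# The "few shots": a handful of labeled examples per class stands in for
# the months of labeled training data a conventional classifier needs.
centroids = {
    "violating": centroid([
        "vaccines are a secret plot do not get one",
        "do not trust the vaccine it is a plot",
    ]),
    "benign": centroid([
        "i got my vaccine appointment today",
        "the clinic had vaccine appointments open",
    ]),
}

print(classify("the vaccine is a plot to control you", centroids))  # -> violating
print(classify("i booked my vaccine appointment", centroids))       # -> benign
```

Adapting this classifier to a new rule means swapping in a new handful of example posts, which is the property that shortens the six-month retraining cycle the article describes; the heavy lifting sits in the pretrained embedding, not the per-rule data.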

Facebook says Few-Shot Learner makes it possible to automate enforcement of a new moderation rule in about six weeks, down from around six months. The company says the system is helping to enforce a rule introduced in September banning posts likely to discourage people from getting Covid-19 vaccines—even if the posts don’t flatly lie. Facebook also says Few-Shot Learner, first deployed earlier this year, contributed to a decline it recorded in the worldwide prevalence of hate speech from mid-2020 through October 2021, but it has not released details of the new system’s performance.

The new system won’t solve all of Facebook’s content challenges, but it’s an example of how deeply the company relies on AI to tackle them. Facebook grew to span the globe claiming it would bring people together—but its network has also incubated hate, harassment, and, according to the United Nations, contributed to genocide against Rohingya Muslims in Myanmar. The company has long said AI is the only practical way to monitor its vast network, but despite recent advances the technology is a long way short of being able to understand the nuances of human communication. Facebook said recently that it has automated systems to find hate speech and terrorism content in more than 50 languages—but the service is used in more than 100 languages.

To continue reading this article, click here.
