Facebook parent Meta uses AI to tackle new types of harmful content

The company has been testing new AI technology to flag posts that discourage COVID-19 vaccination or imply violence, the kinds of content that can be harder to catch.

Queenie Wong

Facebook has nearly 3 billion monthly active users. (Getty Images)

Meta, formerly known as Facebook, said Wednesday it has created artificial intelligence technology that can adapt more quickly to new types of harmful content, including posts discouraging COVID-19 vaccination.

Generally, AI systems learn new tasks from examples, but the process of gathering and labeling a massive amount of data typically takes months. Using technology Meta calls Few-Shot Learner, the new AI system needs only a small amount of training data so it can adjust to combat new types of harmful content within weeks instead of months.
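Meta hasn't released the Few-Shot Learner code, but the general idea of few-shot classification can be sketched with open tools: embed a handful of labeled example posts with a pretrained sentence encoder, then label new posts by their nearest class centroid. The sketch below is illustrative rather than Meta's system; the all-MiniLM-L6-v2 encoder, the example posts and the two class labels are all assumptions made for the demonstration.

```python
# A minimal few-shot text classifier: embed a few labeled examples
# with a pretrained sentence encoder, then label new posts by the
# nearest class centroid. Illustrative only -- not Meta's Few-Shot
# Learner; the model, examples and labels are placeholders.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")

# Only a handful of training examples per class -- the "few-shot" part.
examples = {
    "violating": [
        "The vaccine rewrites your DNA, stay away",
        "Vaccine or DNA changer? You decide",
    ],
    "benign": [
        "I got my second vaccine dose today",
        "Where can I book a vaccine appointment?",
    ],
}

# One centroid embedding per class, averaged over its examples.
centroids = {
    label: encoder.encode(texts).mean(axis=0)
    for label, texts in examples.items()
}

def classify(post: str) -> str:
    """Return the label whose centroid is most cosine-similar to the post."""
    v = encoder.encode(post)
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(centroids, key=lambda label: cosine(v, centroids[label]))

print(classify("Heard the shot can alter your genes?"))  # likely "violating"
```

Because the encoder is already pretrained on large amounts of general text, only the small labeled set has to be gathered when a new type of harmful content appears, which is what compresses the adaptation time from months to weeks.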

The social network, for example, has rules against posting harmful COVID-19 vaccine misinformation, including false claims that the vaccine alters DNA. But users sometimes phrase their remarks as a question like "Vaccine or DNA changer?" or even use code words to try to evade detection. The new technology, Meta says, will help the company catch content it might miss.
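One general way to catch rephrased or coded claims without keyword lists is entailment-style zero-shot classification, in which a model scores a post against a plain-language description of the policy rather than against fixed phrases. Below is a minimal sketch assuming the open facebook/bart-large-mnli model from the Hugging Face transformers library; the candidate labels are invented for illustration and aren't Meta's policy wording.

```python
from transformers import pipeline

# Entailment-based zero-shot classification: score a post against
# natural-language policy descriptions instead of fixed keywords.
# Illustrative only; the model choice and labels are assumptions.
classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

result = classifier(
    "Vaccine or DNA changer?",
    candidate_labels=[
        "claims COVID-19 vaccines alter DNA",
        "neutral discussion of vaccines",
    ],
)
print(result["labels"][0], result["scores"][0])  # top label and its score
```

Because the post is compared against a description of the rule rather than a list of banned phrases, rephrasing a claim as a question doesn't help it slip through.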

"If we react faster, then we're able to launch interventions and content moderations in a more timely fashion," Meta Product Manager Cornelia Carapcea said in an interview. "Ultimately, the goal here is to keep users safe."

The creation of this new AI system could help the social network fend off criticism, including from President Joe Biden, that it isn't doing enough to combat misinformation on its platform, such as false claims about COVID-19 vaccines. Former Facebook product manager turned whistleblower Frances Haugen and advocacy groups have also accused the company of prioritizing its profits over user safety, especially in developing countries.

Meta said it tested the new system and found it could identify offensive content that conventional AI systems might miss. After the company rolled out the system on Facebook and its photo service Instagram, the share of views of harmful content that users saw decreased, Meta said. Few-Shot Learner works in more than 100 languages. The company didn't list which languages are included, but Carapcea said the new technology can "make a big dent" in combating harmful content in languages other than English, for which there may be fewer samples to train AI systems.

As Meta focuses more on building the metaverse, virtual spaces in which people can socialize and work, content moderation will become more complex. Carapcea said she thinks Few-Shot Learner could eventually be applied to virtual reality content.

"At the end of the day, Few-Shot Learner is a piece of tech that's used specifically for integrity," she said. "But teaching machine learning systems with fewer and fewer examples is very much a topic that's being pushed at the forefront of research."