Facebook tackles coronavirus misinformation, hateful memes with AI
Understanding the relationship between words and images isn't easy for machines.
Queenie Wong, Former Senior Writer
Facebook has been doubling down on artificial intelligence to detect coronavirus misinformation and hate speech, but the social network is finding machines can have a tough time identifying offensive content online.
On Tuesday, the world's largest social network laid out several challenges its AI systems face when trying to find copies of posts that contain coronavirus misinformation or detect hateful memes. Like other social networks, Facebook uses a mix of human reviewers and technology to detect content that violates its rules before users report it. While AI has made progress, misinformation and hate speech keep resurfacing on Facebook and other social networks.
The stakes are high because misinformation about COVID-19, the respiratory illness caused by the coronavirus, can lead to someone endangering their health. Hoaxes about how drinking bleach can cure the coronavirus or wearing a mask can make you sick continue to pop up on social media despite efforts to stop their spread. Similarly, online hate speech can fuel violence in the real world. Facebook has faced criticism that it didn't do enough to combat hate speech linked to a genocide in Myanmar against the Rohingya, a mainly Muslim group.
The reliance on AI comes as the COVID-19 pandemic has prompted Facebook to initially shift content review work to a smaller number of full-time employees. The social network still relies on contractors, many of whom work from home. The content review team is prioritizing posts that could cause the most harm, including coronavirus misinformation, child safety, suicide and self-injury.
"Our effectiveness has certainly been impacted by having less human review during COVID-19," CEO Mark Zuckerberg said during a call. "We do unfortunately expect to make more mistakes until we're able to ramp everything back up."
Facebook Chief Technology Officer Mike Schroepfer acknowledged that AI won't be a cure-all for every content moderation issue.
"These problems are fundamentally human problems about life and communication," Schroepfer said. "So we want humans in control and making the final decisions, especially when the problems are nuanced." With nearly 2.6 billion monthly active users, Facebook sees AI as a tool that can take the "drudgery out" of tasks that would take humans a lot of time to complete.
Finding copies of coronavirus misinformation
Facebook has been pulling down harmful coronavirus misinformation and works with more than 60 fact-checking organizations, including the Associated Press and Reuters, to review content on the social network.
In April, Facebook put warning labels on about 50 million posts related to COVID-19. Since March, Facebook has removed more than 2.5 million posts about the sale of masks, sanitizers, surface disinfecting wipes and COVID-19 test kits -- items the social network temporarily banned to prevent price gouging and other types of exploitation.
Detecting copies of posts that contain misinformation can be difficult because people sometimes alter an image with augmented reality filters. Pixels that make up an image can also change when a user takes a screenshot. Two images can look identical but contain different words.
"These are difficult challenges, and our tools are far from perfect," Facebook said in a blog post. "Furthermore, the adversarial nature of these challenges means the work will never be done."
In one example, Facebook showed three nearly identical images of toilet paper with a breaking-news headline. One is a screenshot, so its pixels differ from the original's. One image's headline reads "COVID-19 isn't found in toilet paper," while the other two contain misinformation stating that "COVID-19 is found in toilet paper."
When a fact-checker flags a post as false, Facebook will show it lower on a user's News Feed and include a warning notice. Taking down this content, though, can be a game of whack-a-mole because thousands of copies can resurface on the site.
Using a tool called SimSearchNet, Facebook can identify these copies by matching them against a database of images that contain misinformation.
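To illustrate the idea, here is a toy sketch of near-duplicate image matching. It uses a hand-rolled average hash over tiny grayscale grids; SimSearchNet itself uses learned embeddings at far greater scale, and the images, thresholds and helper names below are invented for illustration.

```python
# Toy near-duplicate image matching: hash each image, then compare a
# query's hash against a database of hashes from flagged images.
# Images here are small grayscale grids (lists of pixel rows).

def average_hash(pixels):
    """Return a bit tuple: 1 where a pixel is above the mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming(a, b):
    """Count positions where two hashes disagree."""
    return sum(x != y for x, y in zip(a, b))

def is_near_duplicate(query, known_hashes, max_distance=2):
    """Match a query image against a database of flagged-image hashes."""
    q = average_hash(query)
    return any(hamming(q, h) <= max_distance for h in known_hashes)

# "Database" holding the hash of one flagged image.
flagged = [[200, 210, 30], [220, 25, 20], [15, 230, 10]]
db = [average_hash(flagged)]

# A screenshot-like copy: same content, slightly shifted pixel values.
screenshot = [[198, 205, 35], [215, 28, 24], [18, 225, 14]]
print(is_near_duplicate(screenshot, db))  # True: hashes match despite pixel drift
```

Because the hash depends on which pixels are brighter than average rather than on exact values, the small pixel shifts a screenshot introduces don't break the match.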
Facebook posts promoting the sale of items the social network temporarily banned, such as masks and hand sanitizer, can be tricky to detect when an image is cropped or altered in another way. Facebook says it has another database and system that help the company detect ads that users change to evade detection.
On Marketplace, a Facebook feature that lets users buy and sell goods, people take photos of items against unusual backgrounds, with odd lighting and at strange angles. Facebook said it was able to improve detection of banned goods by using data such as public images of masks and hand sanitizer, along with photos that look like these products. Facebook is trying to train its AI systems to understand the key element in the photo even if there's a different background, Schroepfer said.
The system, though, hasn't been perfect. People creating hand-sewn masks have been flagged by Facebook's automated content moderation systems, according to a report by The New York Times.
Proactively detecting hate speech
Facebook said it has made strides in detecting hate speech before a user reports it.
In the first three months of 2020, AI proactively detected 88.8% of the hate speech Facebook removed, up from 80.2% in the fourth quarter of 2019, according to a community standards enforcement report the social network released on Tuesday. The company took action on 9.6 million pieces of content for hate speech in the first quarter, up from 3.9 million in the previous quarter.
The social network attributed this uptick to new technologies that help machines develop a deeper understanding of the meaning of different words. Facebook defines hate speech as a direct attack on people based on "protected characteristics," such as race, sexual orientation and disability. The company also created a system so machines can better understand the relationship between images and words.
Facebook uses techniques to match images and text that are identical to ones that have already been removed from the social network. It also improved its "machine-learning classifiers" that are used to assess whether text and reactions could be hate speech. The company relies on a technique called self-supervised training so it doesn't have to retrain its systems to detect hate speech in different languages.
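A toy illustration of the shape of that pipeline: featurize the text, score it, and route high-scoring posts to humans. The feature names and weights below are entirely hypothetical; Facebook's real classifiers are large self-supervised language models, not keyword weights.

```python
# Toy text-classifier pipeline: featurize, score, threshold.
# WEIGHTS is a made-up example; note a "denouncing" term can push a
# score down, mirroring the nuance that quoting hate speech to
# condemn it isn't the same as using it.

WEIGHTS = {"attack_term": 2.0, "slur_stub": 2.5, "denounce": -1.5}  # hypothetical

def featurize(text):
    """Bag-of-words counts for the tokens in a post."""
    tokens = text.lower().split()
    return {t: tokens.count(t) for t in set(tokens)}

def score(text):
    """Weighted sum of token counts."""
    return sum(WEIGHTS.get(tok, 0.0) * n for tok, n in featurize(text).items())

def flag_for_review(text, threshold=2.0):
    """Route high-scoring posts to human reviewers rather than auto-removing."""
    return score(text) >= threshold

print(flag_for_review("attack_term slur_stub"))   # True
print(flag_for_review("denounce attack_term"))    # False: condemnation context
```

The thresholding step reflects the point Schroepfer makes above: the classifier surfaces candidates, and humans make the final call on nuanced cases.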
Hate speech can be tough for AI to detect because there are nuances and cultural context involved. Some people have reclaimed slurs and others use offensive language on Facebook to denounce its use. Users have tried to evade detection by misspelling words or avoiding certain phrases. A "substantial" amount of hate speech is included in videos and images on Facebook.
"Even expert human reviewers can sometimes struggle to distinguish a cruel remark from something that falls under the definition of hate speech or miss an idiom that isn't widely used," Facebook said in a blog post.
Memes that contain hate speech are especially challenging because machines have to grasp the connection between words and images. For example, a hateful meme could contain an image of tombstones and the words "Everyone in your ethnic group belongs here." Viewed separately, the image and words might not violate Facebook's rules. But put together, they create a hateful message.
Schroepfer couldn't say whether Facebook has seen an uptick in hate speech directed at Asians because of the coronavirus pandemic.
The company, though, has seen a huge change in behavior across the social network because of the pandemic.
"One of the challenges of hate speech in general is that it changes and it is contextual based on you know current events and what's going on," he said.
On Tuesday, Facebook also released a data set with more than 10,000 examples of hateful memes so researchers could help the social network improve its detection of hate speech.
The company also launched a new competition called the Hateful Memes Challenge that includes a $100,000 prize pool. Hosted by DrivenData, the challenge's participants will create models trained on the hateful memes data set.