Facebook uses both human reviewers and artificial intelligence to combat some of its toughest problems, including hate speech, misinformation and election meddling. Now, the social network is doubling down on AI.
The tech giant has come under fire for a series of lapses, including its failure to pull down a live video of a terrorist attack in New Zealand that killed 50 people. Content moderators who review posts shared by the social network's 2.3 billion users say they've suffered trauma from repeatedly looking at gruesome and violent content. But AI has also helped Facebook flag spam, fake accounts, nudity and other offensive content before a user reports it to the social network. Overall, AI has had mixed results.
Facebook CTO Mike Schroepfer on Wednesday acknowledged that AI hasn't been a cure-all for the social network's "complex problems," but he said the company was making progress. He made the remarks in a keynote at the company's F8 developer conference.
Schroepfer showed the audience photographs of marijuana and broccoli tempura, which look surprisingly similar. Facebook employees, he said, built a new algorithm that can detect differences in similar images, allowing a computer to distinguish which was which.
Schroepfer said similar techniques can be used to help machines recognize other images that might otherwise escape the social network's detection.
"If someone reports something like this," he said, "we can then fan out and look at billions of images in a very short period of time and find things that look similar."
Facebook, which doesn't allow the sale of recreational drugs on its platform, discovered that people tried to work around its system by using packaging or baked goods, such as Rice Krispies treats. The social network can now flag those images by putting together signals like the text in a post, comments and the identity of the user.
"This is an intensely adversarial game," Schroepfer said. "We build a new technique, we deploy it, people work hard to try to figure out ways around this."
Identifying the right images isn't the only AI challenge the company is facing. When the company was building an AI-powered camera for its Portal video chat device, Facebook had to make sure the technology wasn't biased and worked equally well across age, gender and skin tone.
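A standard way to check for that kind of bias is to measure a model's accuracy separately for each demographic group and look for large gaps. The data and the `predict` stand-in below are made up for illustration.

```python
def predict(sample):
    # Stand-in for a real recognition model.
    return sample["brightness"] > 0.5

# Tiny labeled evaluation set, tagged by demographic group.
samples = [
    {"group": "A", "brightness": 0.9, "label": True},
    {"group": "A", "brightness": 0.2, "label": False},
    {"group": "B", "brightness": 0.6, "label": True},
    {"group": "B", "brightness": 0.7, "label": False},  # model errs here
]

accuracy = {}
for group in ("A", "B"):
    subset = [s for s in samples if s["group"] == group]
    correct = sum(predict(s) == s["label"] for s in subset)
    accuracy[group] = correct / len(subset)

gap = abs(accuracy["A"] - accuracy["B"])
print(accuracy, gap)  # a large gap signals the model may be biased
```

The same per-group breakdown generalizes to any attribute the evaluation set is tagged with, which is why curating a balanced test set matters as much as the model itself.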
Facebook is also trying to train its computers to learn with less supervision in order to tackle meddling in elections.
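"Less supervision" generally means self-supervised learning: instead of humans labeling examples, the training signal comes from the data itself, such as predicting a word from its neighbors. The toy corpus and similarity measure below are invented to show the idea, not any production system.

```python
from collections import Counter, defaultdict

# No human labels here: each word is "supervised" by the words around it.
corpus = ("the vote was fair . the vote was free . "
          "the election was fair . the election was free .").split()

context = defaultdict(Counter)
for i, word in enumerate(corpus):
    for j in (i - 1, i + 1):  # one-word window on each side
        if 0 <= j < len(corpus):
            context[word][corpus[j]] += 1

def similarity(a: str, b: str) -> float:
    """Words that share contexts (like 'vote' and 'election') score high."""
    ca, cb = context[a], context[b]
    shared = sum(min(ca[w], cb[w]) for w in ca)
    return shared / max(sum(ca.values()), sum(cb.values()))

print(similarity("vote", "election") > similarity("vote", "fair"))  # True
```

Because the labels are free, this style of training scales to the many languages and topics a system monitoring elections worldwide would have to cover.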
But as the social network uses AI to moderate more content, it also has to balance concerns that it's being fair to all groups. Facebook, for example, has been accused of suppressing conservative speech, but the company has denied those allegations. And people might disagree about what's considered hate speech or misinformation.
Facebook data scientist Isabel Kloumann said in an interview that when the company is determining what counts as hate speech, the identity of the person could be an important factor, along with who they're targeting. At the same time, Facebook has to balance safety concerns with whether it's treating groups of people equally.
"We don't have a silver bullet for this," she said. "But the fact that we're having this conversation is the most important thing."
Originally published May 1, 1:46 p.m. PT
Update, 5:19 p.m.: Adds comments from Facebook data scientist and more background.