Facebook, Microsoft and other partners create challenge to detect deepfakes

Manipulated videos are a big concern ahead of the 2020 US elections.

Queenie Wong Former Senior Writer

Facebook CTO Mike Schroepfer says they're launching the challenge because the industry doesn't have a "great data set or benchmark" for identifying deepfakes.

James Martin/CNET

Altered videos called deepfakes, which can make it appear as if politicians, celebrities and others are doing or saying something they didn't, are a big headache for tech giants trying to combat misinformation.

Now Facebook, Microsoft and other tech companies are asking for more help finding these artificial intelligence-powered videos ahead of the 2020 election.

On Thursday, Facebook and Microsoft said they were teaming up with the Partnership on AI and academics from six universities to create a challenge to help improve detection of deepfakes. The universities are Cornell Tech; MIT; the University of Oxford; the University of Maryland, College Park; the University at Albany-SUNY; and the University of California, Berkeley.

"The goal of the challenge is to produce technology that everyone can use to better detect when AI has been used to alter a video in order to mislead the viewer," Mike Schroepfer, Facebook's chief technology officer, said in a blog post

Deepfakes have already been created of Kim Kardashian, Facebook CEO Mark Zuckerberg and former President Barack Obama. Lawmakers, US intelligence agencies and others are concerned that deepfakes could be used to meddle in elections.

The US intelligence community's 2019 Worldwide Threat Assessment said that adversaries would probably attempt to use deepfakes to influence people in the US and in allied nations. This week, a report from New York University's Stern Center for Business and Human Rights predicted that deepfakes would likely affect the 2020 US elections. 

Schroepfer said the companies are launching the challenge because the industry doesn't have a "great data set or benchmark" for identifying deepfakes. The Deepfake Detection Challenge will include grants and awards, though Facebook didn't specify the amounts. There will also be a leaderboard and a data set, according to Facebook.

The Partnership on AI's new Steering Committee on AI and Media Integrity, which includes various tech companies and academics, is overseeing the challenge.