Altered videos called deepfakes, which can make it appear as if politicians, celebrities and others are doing or saying something they didn't, are a big headache for tech giants trying to combat misinformation.
Now Facebook, Microsoft and other tech companies are asking for more help finding these artificial intelligence-powered videos ahead of the 2020 election.
On Thursday, Facebook and Microsoft said they were teaming up with the Partnership on AI and academics from six universities to create a challenge aimed at improving the detection of deepfakes. The universities include Cornell Tech; MIT; University of Oxford; University of Maryland, College Park; University at Albany-SUNY; and University of California, Berkeley.
"The goal of the challenge is to produce technology that everyone can use to better detect when AI has been used to alter a video in order to mislead the viewer," Mike Schroepfer, Facebook's chief technology officer, said in a blog post.
Deepfakes have already been created of Kim Kardashian, Facebook CEO Mark Zuckerberg and former President Barack Obama. Lawmakers, US intelligence agencies and others are concerned that deepfakes could be used to meddle in elections.
The US intelligence community's 2019 Worldwide Threat Assessment said that adversaries would probably attempt to use deepfakes to influence people in the US and in allied nations. This week, a report from New York University's Stern Center for Business and Human Rights predicted that deepfakes would likely affect the 2020 US elections.
Schroepfer said the companies are launching the challenge because the industry doesn't have a "great data set or benchmark" for identifying deepfakes. The Deepfake Detection Challenge will include grants and awards, though Facebook didn't specify the amounts. There will also be a leaderboard and a data set, according to Facebook.
The Partnership on AI's new Steering Committee on AI and Media Integrity, which includes various tech companies and academics, is overseeing the challenge.