Facebook to ban deepfakes ahead of 2020 US election
The move doesn't appear to prohibit all doctored videos.
Steven Musil, Night Editor / News
Facebook said Monday it would prohibit users from posting deepfakes, a form of video manipulated by artificial intelligence to show people doing or saying something they didn't. The move is intended to stop the spread of misinformation on the social network ahead of the 2020 US election.
The new policy, however, doesn't appear to ban all edited or manipulated videos, and would likely allow videos like the doctored clip of House Speaker Nancy Pelosi that went viral on the network last year. Facebook revealed the new policy in a blog post.
Deepfakes, which use AI to give a false impression of what politicians, celebrities and others are doing or saying, have become a headache for tech giants as they try to combat misinformation. Deepfakes have already been created of Kim Kardashian, Facebook CEO Mark Zuckerberg and former President Barack Obama, and lawmakers and US intelligence agencies worry they could be used to meddle in elections.
"While these videos are still rare on the internet, they present a significant challenge for our industry and society as their use increases," wrote Monika Bickert, vice president of Facebook global policy management. "Our approach has several components, from investigating AI-generated content and deceptive behaviors like fake accounts, to partnering with academia, government and industry to exposing people behind these efforts."
Facebook's new policy will prohibit videos that are "edited or synthesized" by techniques such as AI in ways that aren't easy to identify as fake, Bickert wrote. But the new policy won't extend to videos edited for satire or parody, or to videos edited solely to omit or change the order of words, she said.
The change in policy at the world's biggest social network comes amid rising concern that deepfake technology can be used to spread misinformation that could influence elections or disrupt society. A House Energy and Commerce subcommittee is scheduled to hold a hearing on the subject, titled "Americans at Risk: Manipulation and Deception in the Digital Age," on Wednesday morning. Bickert is scheduled to testify.
Social media companies have different approaches to misleading videos. In May, videos of Pelosi were doctored to make it seem as if she was drunkenly slurring her words. YouTube, which has a policy against "deceptive practices," took down the Pelosi video. Facebook displayed information from fact-checkers and reduced the spread of the video, although it acknowledged it could have acted more swiftly. Twitter didn't pull down the Pelosi video.
Facebook's rules have never required that content posted to the social media giant be true, but the company has been working to reduce the distribution of inauthentic content. Previously, if fact-checkers determined a video to be misleading, its distribution could be significantly curbed by demoting it in users' News Feeds.
Facebook called its approach to manipulated videos "critical" to its efforts to reduce misinformation on the social network.
"If we simply removed all manipulated videos flagged by fact-checkers as false, the videos would still be available elsewhere on the internet or social media ecosystem," Bickert wrote. "By leaving them up and labelling them as false, we're providing people with important information and context."