Facebook said Monday it would prohibit users from posting deepfakes, a form of video manipulated by artificial intelligence to show people doing or saying something they didn't. The move is intended to stop the spread of misinformation on the social network ahead of the 2020 US election.
The new policy, however, doesn't appear to ban all edited or manipulated videos, and would likely allow videos like the doctored clip of House Speaker Nancy Pelosi that circulated last year. Facebook revealed the new policy in a blog post.
The new guidelines had been reported earlier by The Washington Post.
Deepfakes, which use AI to give a false impression of what politicians, celebrities and others are doing or saying, have become a headache for tech giants as they try to combat misinformation. Deepfakes have already been created of celebrities like Kim Kardashian, and lawmakers and US intelligence agencies worry they could be used to meddle in elections.
"While these videos are still rare on the internet, they present a significant challenge for our industry and society as their use increases," wrote Monika Bickert, vice president of Facebook global policy management. "Our approach has several components, from investigating AI-generated content and deceptive behaviors like fake accounts, to partnering with academia, government and industry to exposing people behind these efforts."
Facebook's new policy will prohibit videos that are "edited or synthesized" by techniques such as AI that aren't easy to identify as fake, Bickert wrote. But the new policy won't extend to videos edited for satire or parody, or to omit or change the order of words, she said.
The change in policy at the world's biggest social network comes amid rising concern that deepfake technology can be used to spread misinformation that could influence elections or disrupt society. A House energy and commerce subcommittee is scheduled to hold a hearing about the subject, titled "Americans at Risk: Manipulation and Deception in the Digital Age," on Wednesday morning. Bickert is scheduled to testify.
Social media companies have different approaches to misleading videos. In May, videos of Pelosi were doctored to make it seem as if she was drunkenly slurring her words. YouTube, which has a policy against "deceptive practices," took down the Pelosi video. Facebook displayed information from fact-checkers and reduced the spread of the video, although it acknowledged it could have acted more swiftly. Twitter didn't pull down the Pelosi video.
Facebook's previous rules didn't require that content posted to the social network be true, but the company has been working to reduce the distribution of inauthentic content. Previously, if fact-checkers determined a video to be misleading, its distribution could be significantly curbed by demoting it in users' News Feeds.
In September, Facebook said it was teaming up with Microsoft, the Partnership on AI and academics from six colleges to launch a deepfake detection challenge. The challenge was announced after the US intelligence community's 2019 Worldwide Threat Assessment warned that adversaries would probably attempt to use deepfakes to influence people in the US and in allied nations.
Facebook called its approach to manipulated videos "critical" to its efforts to reduce misinformation on the social network.
"If we simply removed all manipulated videos flagged by fact-checkers as false, the videos would still be available elsewhere on the internet or social media ecosystem," Bickert wrote. "By leaving them up and labeling them as false, we're providing people with important information and context."
CNET's Queenie Wong contributed to this report.