Facebook, YouTube, Twitter must team up to fight threat from deepfakes, experts say

Deepfake specialists tell Congress that for society and democracy to weather the storm of manipulated media, social-media competitors must work together.

Joan E. Solsman Former Senior Reporter

Rep. Devin Nunes and Committee Chairman Rep. Adam Schiff lead the House Intelligence Committee. 

Getty Images

Social media companies like Facebook, YouTube and Twitter weren't anywhere near a US House hearing on deepfakes Thursday, but they were still the stars of the show. The hearing, which focused on how manipulated media like deepfakes could threaten democracy, returned again and again to the social networks' role in the threat, with experts suggesting policies sure to give those companies pause. 

One crucial suggestion? Experts said social-media companies must work together to develop shared policies about what manipulated media should stay up and what must come down. Another -- maybe even scarier -- prospect for the likes of Facebook and YouTube? Experts recommended walking back the companies' legal immunity from the content their users post. 

"The internet is not in its infancy. It shouldn't be a free pass," Danielle Citron, a law professor at the University of Maryland School of Law, told the committee Thursday.

Like Photoshop on steroids, deepfakes are video forgeries powered by artificial intelligence that can make people appear to be doing or saying things they never did. Digital manipulation of video is nothing new, but deepfake tools mean manipulated clips are both easier to make and increasingly hard to detect as fraud.

The US House Intelligence Committee questioned experts Thursday morning about such manipulated media and how it can threaten national security, society and democracy. The committee chair, Democratic Rep. Adam Schiff, called deepfakes a "nightmarish" scenario for the 2020 presidential elections, with voters potentially "struggling to discern what is real and what is fake."

Devin Nunes, the committee's ranking Republican member, brought out a standard refrain from conservatives about Silicon Valley: the allegation that tech giants are suppressing voices on the right. 

Experts testifying before the committee argued that a recent deepfake of Facebook CEO Mark Zuckerberg was a positive development for the public's understanding of these kinds of sophisticated forgeries. In the video, a digital puppet of Zuckerberg says he can control the future thanks to the power of Facebook's data. 

"Nobody really believes Mark Zuckerberg can control the future, because he surely wouldn't want to show up to testify here or anywhere else, or be in the quagmire he's in," said Clint Watts, a fellow at the think tank Foreign Policy Research Institute and the bipartisan Alliance for Securing Democracy. The deepfake, which was posted to Facebook platforms specifically to test the company's policies about removing manipulated media, was an exercise in how the context of a deepfake matters in regard to how much of a threat a clip poses. 

In the case of the Zuckerberg deepfake, the video's patently ridiculous claims worked in favor of letting it remain up, Watts said. 

In response to a request for comment on the House hearing, Facebook said that "combating misinformation is one of the most important things" the company can do leading up to the 2020 election. "We continue to look at how we can improve our approach and the systems we've built. Part of that includes getting outside feedback from academics, experts and policymakers," the company said in a statement. 

Twitter said it's looking at how it may take action through both policy and product on these types of issues in the future. For now its policies are under close review, the company added in a statement.

YouTube didn't respond to a message seeking comment.