
Deepfakes are coming. Facebook, Twitter and YouTube might not be ready

That doctored Pelosi video may be the tip of the iceberg.

Queenie Wong, Former Senior Writer

Get ready, social networks. Deepfakes could make your lives miserable.

Photo Illustration by Omar Marques/SOPA Images/LightRocket via Getty Images

When House Speaker Nancy Pelosi showed up in an altered video that attacked her credibility, her words sounded choppy and confused. But it was the reaction by Facebook, Twitter and YouTube, which fueled the video's spread, that sparked disagreement about how tech companies should handle manipulated content.

On May 22, a Facebook Page called Politics WatchDog posted the video, which was slowed to give the impression that the Democratic lawmaker from California was slurring her words. It quickly made its way to all three social networks. In an early taste of the challenges they could face during the 2020 US election, each responded differently.

Facebook allowed the video to remain on its service but displayed articles by fact-checkers. YouTube pulled it. Twitter let it stay on its platform.

The differing responses underscore the challenge that manipulated video, and misinformation more broadly, pose for the companies. The social networks have rules against posting intentionally misleading information, but they also try to encourage free expression. Finding a balance is proving difficult, especially as what promises to be a bruising election season heats up.

Pressure is building on them to find an answer.

On Thursday, the House Intelligence Committee held a hearing on manipulated media and "deepfakes," a technique that uses AI to create videos of people doing or saying something they didn't. The Pelosi video, a simpler form of edited video that some viewers thought was real, isn't considered a deepfake, but the responses to it show how social media companies handle manipulated content.

"The Pelosi video really highlighted the problems that social media companies face in making these judgment calls," said Eric Goldman, director of the High-Tech Law Institute at Santa Clara University. The video, he said, is misleading and was "weaponized," but he added it could be considered political commentary.

The problem will likely get worse. Deepfake software is already available online. Early deepfakes relied on hundreds or thousands of photographs of the person being faked to get convincing results. Because politicians lead public lives, plenty of photographs are available. 

But even that requirement is changing. Samsung recently said it's developed a technique that allows relatively realistic fake videos to be created from a single image. The approach will almost certainly be reverse-engineered, making it easier to fabricate misleading video.

Deepfake videos have been created of Kim Kardashian, Facebook CEO Mark Zuckerberg and former President Barack Obama. The quality of these fake videos has US intelligence agencies concerned they could be used to meddle in elections both in the US and in allied nations.

"Adversaries and strategic competitors probably will attempt to use deepfakes or similar machine-learning technologies to create convincing -- but false -- image, audio, and video files to augment influence campaigns directed against the United States and our allies and partners," the US intelligence community's 2019 Worldwide Threat Assessment said.

(An academic paper released Wednesday outlined a new technique for detecting deepfakes of world leaders, though it wouldn't work for everyday people.)

US lawmakers are urging tech giants to act swiftly. 

"Now is the time for social media companies to put in place policies to protect users from misinformation, not in 2021 after viral deepfakes have polluted the 2020 elections," said Rep. Adam Schiff, chairman of the House Intelligence Committee during Thursday's hearing. "By then, it will be too late."

Combating misinformation

Social media platforms admit they dropped the ball during the 2016 US presidential election, allowing Russian trolls to post false information and sow division among Americans. The major platforms have improved their defenses since then, though it's unclear whether they'll ever be fully prepared.

Facebook uses a mix of AI and human beings to flag offensive content and employs dedicated engineering teams that focus on systems for identifying manipulated photos, videos and audio. It has also been examining whether it needs a more specific policy to tackle manipulated media, according to a report by MarketWatch.


Deepfake videos have been created of high-profile politicians, celebrities and tech moguls.

Alexandra Robinson/AFP/Getty Images

"Leading up to 2020 we know that combating misinformation is one of the most important things we can do," a Facebook spokesperson said in a statement. "We continue to look at how we can improve our approach and the systems we've built. Part of that includes getting outside feedback from academics, experts and policymakers."

Still, there's no guarantee that fake news will be pulled from the world's biggest social network even if monitoring systems flag it. That's because Facebook has long said it doesn't want to be "arbiters of truth." Its community standards explicitly state that false news won't be removed, though it will be demoted in its News Feed. "There is also a fine line between false news and satire or opinion," the rules state. (Facebook will remove accounts if users mislead others about their identity or purpose, or if their content incites violence.)

A spokesperson for Google-owned YouTube said that the company is aware of deepfakes and has teams focused on these videos. The company said it's also exploring and investing in ways to deal with manipulated videos, but didn't share specifics.

The video-sharing site has a policy against "deceptive practices" that prohibits the use of titles, descriptions, thumbnails or tags that "trick users into believing the content is something it is not."

Twitter has also cracked down on fake accounts, looking for stolen profile pictures or bios. It recently simplified its rules to make clear what is and isn't allowed.

But Twitter didn't pull the Pelosi video and declined to comment. The company would take action against a video if it included misleading statements about voting, according to Twitter's rules. Its election integrity policy also states that "inaccurate statements about an elected official, candidate or political party" generally don't violate its rules.


Different approaches

Social media giants interpret their own rules. That can make their actions seem random or arbitrary, academics and experts say. If a video is removed from one site, it'll often migrate to another.

That's what happened with the Pelosi video posted on Facebook. CNET was able to find the video this week on YouTube, but a spokesman said YouTube was removing re-uploads of the video.


US Speaker of the House Nancy Pelosi has criticized Facebook for not removing an altered video that made her seem drunk.

Win McNamee / Getty Images

Hany Farid, a computer science professor and digital forensics expert at the University of California, Berkeley, says Facebook's terms of service state that users can't use the social network's products for any activity that is "unlawful, misleading, discriminatory or fraudulent" or "infringes or violates someone else's rights." The Pelosi video, he said, runs afoul of the company's rules.

"I simply don't buy that argument that Facebook has dealt with the problem by flagging the video as 'fake' and by downgrading it on the News Feed," he said. "This type of misinformation is harmful to our democracy and can impact the way that people think and vote."

A Facebook representative didn't answer questions regarding Farid's assertion. Pelosi, who slammed Facebook for not removing the altered video, didn't respond to a request for comment.

Some of the fact-checkers who work with Facebook say pulling down doctored videos could have unintended consequences. "If you leave [the video] up, you're able to track it and control it," said Alan Duke, editor-in-chief of Lead Stories, one of Facebook's fact-checking partners.

Data & Society researcher Britt Paris said labeling the videos wouldn't discourage social media users from sharing or creating fake content. Some people just share content "because a message speaks to what a user sees as an implicit truth of the world even as they know it is not factually true."

Lies spread faster than truth on social media, according to research, including a 2018 MIT study published in the journal Science.

Social networks could also start tracking users who share fake news and reduce their reach, which would discourage them from posting misinformation.

"If these social media companies are going to continue to exist at the scales they currently operate, they are going to have to start making these types of decisions," she said.

Part of the problem, Goldman says, is that social media users just ascribe too much truth to video.

"We see it with our own eyes, we hear with our own ears and we assume that means it's true," Goldman said.

Originally published June 13, 4:00 a.m. PT
Update, 11:48 a.m. PT: Includes information from House Intelligence Committee hearing.