Facebook steps up efforts to study deepfakes

Such AI-generated images could be used to spread disinformation, but spotting them isn't easy.

Queenie Wong Former Senior Writer

Spotting fake news could become harder in the future because of deepfake technology.

Graphic by Pixabay; illustration by CNET

Facebook has been studying a way to figure out if deepfake images come from the same source, research that could help the social media giant crack down on disinformation campaigns.

Deepfakes use artificial intelligence to generate images of people or objects that don't exist. The technology can also be used to make videos that mimic politicians, celebrities or others doing or saying something they didn't. As deepfake technology improves, tech companies are preparing for the possibility that fake content could be used to spread more disinformation.

Teaming up with Michigan State University, Facebook has been researching a reverse engineering method in which researchers use a single AI-generated image to learn more about how the fake content was created. This method helps them determine whether fake content came from the same source, even if it was shared on different online platforms.

"This ability to detect which deepfakes have been generated from the same AI model can be useful for uncovering instances of coordinated disinformation or other malicious attacks launched using deepfakes," Facebook said in a blog post.

The social network said researchers examined "device fingerprints," the unique patterns left by the model used to create a deepfake, to help them figure out where an image came from. Researchers then looked more closely at the components of the AI model itself.

"Our reverse engineering technique is somewhat like recognizing the components of a car based on how it sounds, even if this is a new car we've never heard of before," Facebook said.
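The fingerprint idea can be illustrated with a minimal sketch. This is an assumption-laden illustration, not Facebook's or Michigan State's actual method: it treats the high-frequency residual left after blurring an image as a crude "fingerprint," then correlates the residuals of two images to score whether they might share a source. All function names are hypothetical.

```python
import numpy as np

def fingerprint(image):
    # Estimate a high-frequency residual ("fingerprint") by subtracting
    # a blurred copy of the image from the original. A simple 3x3 box
    # blur stands in for the learned denoiser a real system would use.
    h, w = image.shape
    padded = np.pad(image, 1, mode="edge")
    blurred = sum(
        padded[dy:dy + h, dx:dx + w]
        for dy in range(3) for dx in range(3)
    ) / 9.0
    return image - blurred

def same_source_score(img_a, img_b):
    # Correlate the two residuals. Images produced by the same
    # generative model tend to share subtle periodic artifacts,
    # which pushes this normalized correlation higher.
    fa = fingerprint(img_a).ravel()
    fb = fingerprint(img_b).ravel()
    fa -= fa.mean()
    fb -= fb.mean()
    denom = np.linalg.norm(fa) * np.linalg.norm(fb) + 1e-9
    return float(np.dot(fa, fb) / denom)
```

In practice, the research goes further: rather than only matching fingerprints, the model parsing step tries to infer properties of the generator (its architecture and training choices) from those artifacts, much like identifying the car's components from its sound.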

Researchers from Michigan State University put together a dataset of 100,000 fake images generated by 100 publicly available generative models to test the reverse engineering method. The university is open-sourcing the dataset, code and trained models so other researchers have more tools to help study the detection and origin of deepfakes.