Facebook has been studying a way to figure out if deepfake images come from the same source, research that could help the social media giant crack down on disinformation campaigns.
Deepfakes use artificial intelligence to generate images of people or objects that don't exist. The technology can also be used to make videos that mimic politicians, celebrities or others doing or saying something they didn't. As deepfake technology improves, tech companies are preparing for the possibility that fake content could be used to spread more disinformation.
Teaming up with Michigan State University, Facebook has been researching a reverse engineering method in which researchers use a single AI-generated image to learn more about how the fake content was created. This method helps them determine whether fake content came from the same source, even if it was shared on different online platforms.
"This ability to detect which deepfakes have been generated from the same AI model can be useful for uncovering instances of coordinated disinformation or other malicious attacks launched using deepfakes," Facebook said in a blog post.
The social network said researchers examined the "device fingerprints," or the unique pattern left by the model used to create the deepfake, to help them figure out where the image came from. Researchers then looked more closely at the components of the AI model.
"Our reverse engineering technique is somewhat like recognizing the components of a car based on how it sounds, even if this is a new car we've never heard of before," Facebook said.
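To make the fingerprint idea concrete, here is a toy sketch of how generator fingerprinting is often done in image forensics generally, not a reconstruction of Facebook's actual system: estimate a generator's "fingerprint" by averaging the high-frequency noise residuals of many images it produced, then attribute a new image by correlating its residual against each known fingerprint. The box-blur denoiser, the synthetic "generators," and all names below are illustrative assumptions.

```python
import numpy as np

def residual(img, k=3):
    # High-pass residual: the image minus a simple box-blurred version of itself.
    # Real forensic pipelines use stronger denoisers; a box blur keeps this self-contained.
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    blurred = np.zeros_like(img)
    h, w = img.shape
    for dy in range(k):
        for dx in range(k):
            blurred += padded[dy:dy + h, dx:dx + w]
    blurred /= k * k
    return img - blurred

def fingerprint(images):
    # A generator's fingerprint: the average residual over many of its images,
    # so per-image content and random noise cancel while the fixed pattern remains.
    return np.mean([residual(im) for im in images], axis=0)

def similarity(a, b):
    # Normalized correlation between two residual patterns.
    a = a - a.mean()
    b = b - b.mean()
    return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Toy demo: two hypothetical "generators," each leaving a distinct fixed pattern.
rng = np.random.default_rng(0)
pattern_a = rng.normal(0, 1, (32, 32))
pattern_b = rng.normal(0, 1, (32, 32))

def fake_image(pattern):
    # Smooth "content" (a gradient) plus the generator's faint fixed pattern plus noise.
    gradient = np.outer(np.linspace(0, 1, 32), np.linspace(0, 1, 32)) * rng.uniform(0, 5)
    return gradient + 0.5 * pattern + rng.normal(0, 0.2, (32, 32))

fp_a = fingerprint([fake_image(pattern_a) for _ in range(50)])
fp_b = fingerprint([fake_image(pattern_b) for _ in range(50)])

query = fake_image(pattern_a)  # an image of unknown origin, actually from generator A
scores = {"A": similarity(residual(query), fp_a),
          "B": similarity(residual(query), fp_b)}
print(max(scores, key=scores.get))  # the query correlates best with generator A
```

The key design point is the averaging step: content varies image to image, but the generator's artifact pattern does not, so it survives the mean while everything else washes out.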
Researchers from Michigan State University put together a dataset of 100,000 fake images generated by 100 publicly available generative models to test the reverse engineering method. The university is open sourcing the dataset, code and trained models so other researchers have more tools to help study the detection and origin of deepfakes.