Researchers propose detecting deepfakes with surprising new tool: Mice

One of the scariest uses of artificial intelligence yet is being decoded by tiny mammals with tails.

Eric Mack Contributing Editor

Nick Offerman makes for one creepy little girl in this disturbing Full House deepfake video.

Video screenshot by Bonnie Burton/CNET

Decades after Terminator's Skynet first taught us to fear the apocalyptic potential of artificial intelligence, deepfakes represent a less deadly but very real threat from AI. Some researchers are now using a surprising and decidedly analog tool to detect AI-manipulated audio: mice.

While faking audio and video has been around in some form for decades, machine learning has recently made it significantly easier to produce counterfeit speech that actually crosses the uncanny valley into the realm of believability.

Deepfake technology shows no signs of slowing down, so researchers are looking for the best tools to detect the fakes, including humans, other artificial intelligence systems and, yes, rodents.


"We believe that mice are a promising model to study complex sound processing," reads a white paper from a trio of researchers led by Jonathan Saunders from the University of Oregon Institute of Neuroscience. "Studying the computational mechanisms by which the mammalian auditory system detects fake audio could inform next-generation, generalizable algorithms for spoof detection."

In other words, mice have an auditory system similar to that of humans, except they can't understand the words they hear. This lack of understanding could actually be a bonus for detecting fake speech, however, because mice can't be swayed to overlook the telltale signs of a fake while focusing on decoding the actual meaning of the words.

For example, a deepfake audio file might include a subtle mistake, like the sound of "b" where a "g" should be: faked speech of a celebrity might portray them ordering a "hamburber." Humans might be inclined to pass over this red flag for fakery because we're trained to extract the meaning from sentences we hear while adjusting for verbal flubs, accents and other inconsistencies.
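The idea can be illustrated with a toy sketch. This is not the researchers' method, just a hypothetical Python example: compare the phonemes a listener expected with the phonemes actually heard, and flag any substitution, like the "hamburber" slip above. The phoneme spellings here are illustrative, not from any real phonetic alphabet.

```python
# Toy illustration (not the researchers' actual technique): flag
# positions where a heard phoneme differs from the expected one.

def phoneme_mismatches(expected, heard):
    """Return (position, expected_phoneme, heard_phoneme) tuples
    for every position where the two sequences disagree."""
    return [(i, e, h)
            for i, (e, h) in enumerate(zip(expected, heard))
            if e != h]

# "hamburger" as the expected word vs the deepfake's "hamburber"
expected = ["h", "a", "m", "b", "ur", "g", "er"]
heard    = ["h", "a", "m", "b", "ur", "b", "er"]

print(phoneme_mismatches(expected, heard))  # -> [(5, 'g', 'b')]
```

A human listener effectively discards that single mismatch to recover "hamburger"; a mouse, with no meaning to recover, has nothing to smooth it over with.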

The team has succeeded in training mice to distinguish between the sounds of certain consonant pairs, which could be useful in detecting fake speech. The research was presented in a session at the Black Hat conference in Las Vegas on Aug. 7.

The mice correctly identified speech sounds at rates up to 80%. That's actually lower than the 90% rate at which the researchers found humans were able to identify deepfakes.

But the idea isn't to train an army of rodents to identify deepfakes. Instead, scientists hope to monitor the brain activity of mice as they discern between fakes and authentic speech to learn how the brain does it. Then the goal is to train new fake-detecting algorithms with the insights gleaned from the little animals.

That's presuming the rodents don't get wise and start creating their own despicable deepfakes first.

Originally published Aug. 12, 10:59 p.m. PT.

Update, Aug. 13 at 9:26 p.m.: Adds more information.