AI can now spot fake news generated by AI

It takes AI to know AI.

Shelby Brown, Editor II

This AI is one step ahead of... itself. 

Josh Goldman/CNET

Researchers at Harvard University and the MIT-IBM Watson AI Lab have created a tool to help combat the spread of misinformation. The tool, called GLTR (for Giant Language Model Test Room), uses artificial intelligence to detect the very statistical text patterns that give AI away, according to the team's June report. 

GLTR highlights each word in a text based on how predictable it was -- green marks the most predictable words, yellow and red mark progressively less predictable ones, and purple marks the least predictable. Text written by a language model tends to skew heavily toward the predictable end, which is the statistical fingerprint GLTR surfaces. 
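The idea behind that color-coding can be sketched in a few lines: ask a language model to rank every possible next word, then bucket each actual word by its rank. This is a minimal illustration, not GLTR's implementation -- it substitutes a toy bigram model for the large language model GLTR queries, and the rank thresholds are shrunk to fit the toy (GLTR's demo buckets ranks at roughly the top 10, 100 and 1,000).

```python
from collections import Counter, defaultdict

# Toy training text; a real detector would query a large language model instead.
corpus = "the cat sat on the mat . the cat ate the fish . the dog sat on the rug".split()

# Bigram counts: for each word, how often each candidate word follows it.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def rank_of_next(prev, nxt):
    """Rank of `nxt` among the model's predictions after `prev` (1 = most likely)."""
    ranked = [w for w, _ in bigrams[prev].most_common()]
    return ranked.index(nxt) + 1 if nxt in ranked else len(ranked) + 1

def color(rank):
    # Thresholds shrunk for the toy model; GLTR uses far larger buckets.
    if rank <= 1:
        return "green"
    if rank <= 2:
        return "yellow"
    if rank <= 3:
        return "red"
    return "purple"

# Color each word of a sample sentence by how predictable it was.
text = "the cat sat on the mat".split()
highlights = [(w, color(rank_of_next(p, w))) for p, w in zip(text, text[1:])]
```

A mostly green result suggests machine-like predictability; human writing scatters more yellow, red and purple across the page.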

A tool like that could come in handy for social media sites like Twitter and Facebook, which have to contend with a flood of bot-generated content.

Sebastian Gehrmann, one of the minds behind GLTR, said that as text generation methods become more sophisticated, malicious actors can potentially abuse them to spread false information or propaganda. 

"Someone with enough computing power could automatically generate thousands of websites with real looking text about any topic of their choice," Gehrmann said. "While we have not quite arrived at this point of focused generation yet, large language models can already generate text that is indistinguishable from human-written text."

Gehrmann said his team conducted a study to see whether language-processing students could distinguish "real" text from AI-generated text. The students' accuracy was 54%, barely better than random guessing. Using GLTR raised their detection rate to 72%. 

"We hope that GLTR can inspire more research toward similar goals, and that it successfully showed that these models are not entirely too dangerous if we can develop defense mechanisms," Gehrmann said.

GLTR is free and available for people to try.

Originally published July 31, 1:43 p.m. PT.
Update, Aug. 2: Adds comments from Harvard researcher Sebastian Gehrmann.

