Graham Brookie's phone buzzed, jolting him from a deep sleep. It was too early for his morning alarm, and his apartment was pitch black. He fumbled out of bed and silenced his phone as it buzzed over and over. The screen was an infinite scroll of menacing messages, offensive memes and conspiracy-laden tweets. Most people would be shocked at the torrent of vitriolic social media content, but Brookie just rolled his eyes. He was all too familiar with this type of attack and knew exactly what was coming next.
The notifications lighting up Brookie's phone were part of a Russian disinformation campaign that his employer, the Atlantic Council's Digital Forensic Research Lab, was monitoring in collaboration with Facebook and the US State Department. To the layperson, the posts are designed to look authentic and are sent from real-looking accounts with convincing user avatars and bios. However, the accounts are controlled by an army of trolls and bots engaged in influence operations, publishing "fake news" and spreading disinformation on YouTube and social media.
Tracking and exposing disinformation
In the aftermath of the 2016 US presidential election, technology firms like Facebook, Twitter and YouTube have been forced to reckon with the realities of bad actors manipulating social media for nefarious purposes. Famously, the firm Cambridge Analytica was caught extracting and exploiting Facebook data to micro-target voters in the states with political propaganda during the election. The anti-vaccination movement has been amplified by bots and trolls for years. And as the coronavirus continues to spread, conspiracy theorists are circulating rumors that the virus can be cured by drinking bleach or snorting cocaine.
"Social media platforms are the battlegrounds of an information war that is being waged all day, every day online," says Brookie. "Basic facts are under attack." The goal is to capture attention and divert it toward divisive topics. Because it's inexpensive and relatively easy to exploit social media, the bad actors are diverse and the tactics nonpartisan.
For example, though Cambridge Analytica used Facebook data to aid Republican candidate Donald Trump in 2016, the following year Democratic strategists allegedly used a similar social media strategy to help Doug Jones in an Alabama special election for Senate (apparently without Jones' knowledge).
To fight the spread of harmful false information, many social media firms have adjusted their content policies. Twitter recently rolled out features to help users flag harmful content and label misleading posts. YouTube suspended advertising for questionable content, and its recommendation algorithm no longer elevates conspiracy theory content.
In response to the Cambridge Analytica scandal, in 2018 Facebook partnered with third-party fact-checkers, including Brookie's employer, the Atlantic Council. Based in Washington, DC, the Atlantic Council is a nonprofit and nonpartisan think tank founded in 1961 with a mission to strengthen diplomatic and economic cooperation between North America and Europe. The Atlantic Council hosts events attended by global leaders, publishes research papers and encourages policy consensus on contentious issues like cyber-conflict. The group operates independently but receives funding from over 25 governments. Its benefactors include the US State Department, NGOs and private firms.
Like most think tanks, the Atlantic Council is not without controversy and is occasionally dogged by accusations that nation-states and corporations try to purchase influence in the organization through donations. A 2014 report about the Transatlantic Trade and Investment Partnership, a proposed trade partnership between the EU and the United States, produced in collaboration with FedEx, drew criticism because the company was concurrently lobbying Congress to reduce trade tariffs.
Concerned about the growing influence of social media, Brookie, a former National Security Council advisor under President Barack Obama, founded the Digital Forensic Research Lab (DFRL) in 2016. It rapidly grew to become one of the Atlantic Council's largest groups. Its mission, he says, is to promote objective fact as a foundation of government. "Our team members work to protect democratic institutions and norms from those who would undermine them online and to identify, expose and explain disinformation when and where it occurs," he explains. Since its launch, the DFRL has published over 800 case studies, most of which are available on the group's Medium blog, on events like the use of bots in the recent Malaysian election. Earlier this year, CNET News and the DFRL collaborated on an investigation.
Solutions to 'fake news'
Though "fake news," misinformation and disinformation are often used interchangeably, for the DFRL team language matters. The phrase "fake news" has been nearly unavoidable since the 2016 election, but the team avoids using the term because it's been co-opted by "authoritarians across the globe to curb dissent and put real journalists at risk," says Brookie.
As a result, the DFRL prioritizes transparency in its research and often tries to use "open" data in its reports. Open data is information that's available to the public -- Google Street View or publicly posted images of military installations, for example -- that might provide useful clues for research analysts.
"Bad actors are almost always opaque. We're the opposite. Our work should be open to scrutiny. The whole point of open-source research is just that, to make our work, including our sources, our method and our conclusions, open and transparent," Brookie says.
The DFRL is perhaps best-known for its 2015 report Hiding in Plain Sight: Putin's War in Ukraine, which used open data, including selfies taken by Russian troops and posted on social media, to prove that Russia was occupying eastern Ukraine, an allegation President Vladimir Putin denied at the time.
Brookie's team of investigators combines transparent open-data research techniques with its own proprietary information. In 2016, the team published a report called Breaking Aleppo that documented human rights abuses, including war crimes, during the Assad regime's assault on the Syrian city.
"Much of this initial open-source research was focused on closed information environments, in which we had to put pieces together to have a broader assessment and in which the open-source community is still thinking through evidentiary standards for using online material in things like investigations into war crimes," Brookie explains.
The DFRL team pulls publicly available data from social networks using open-source and commercial tech products like Google Docs to identify and label inauthentic internet content as either "misinformation" or "disinformation."
Misinformation, explains Brookie, is the spread of false information without intent: "It is more passive and more pervasive," like a joke or a rumor. Disinformation is the dissemination of false information with intent to achieve a political, social or financial goal.
Analysts on the DFRL team carefully log patterns and instances of disinformation in spreadsheets, then provide the data to companies like Facebook, which combine the open data with the firm's private logs and take action by removing fake profiles, pages and groups.
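The workflow described above, flagging false content, distinguishing intent, and logging the results as spreadsheet-style rows, can be sketched in a few lines of code. This is purely an illustration of the misinformation/disinformation distinction as Brookie defines it, not the DFRL's actual tooling; the `Post` fields and `classify` logic are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

# Labels mirroring the DFRL's distinction: misinformation is false
# content spread without intent; disinformation is false content
# spread with a political, social or financial goal.
class Label(Enum):
    MISINFORMATION = "misinformation"
    DISINFORMATION = "disinformation"

@dataclass
class Post:
    account: str
    text: str
    is_false: bool   # analyst's judgment that the claim is false
    has_intent: bool  # evidence of a coordinated goal behind the post

def classify(post: Post) -> Optional[Label]:
    """Label false content; return None for authentic posts."""
    if not post.is_false:
        return None
    return Label.DISINFORMATION if post.has_intent else Label.MISINFORMATION

def to_rows(posts: list) -> list:
    """Log labeled posts as (account, label) rows, as an analyst
    might record them in a spreadsheet before handing them off."""
    rows = []
    for post in posts:
        label = classify(post)
        if label is not None:
            rows.append((post.account, label.value))
    return rows
```

Run against a handful of posts, only the false ones produce rows, split by whether intent was found, which is the shape of data a platform could then match against its own private logs.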
These coordinated disinformation campaigns thrive when they go unnoticed and unchecked. To help the public better understand the pervasive nature of these campaigns, the DFRL published hundreds of examples of disinformation on GitHub and worked with Google's Jigsaw to create a data visualization called Dichotomies of Disinformation. The interactive map allows users to drill down and view specific targets and platforms, as well as download spreadsheets of source material.
Experts at similar think tanks, including the World Economic Forum and the German Marshall Fund's Alliance for Securing Democracy, believe there will always be bad actors who exploit social networks to propagate disinformation. But there's a consensus that the best way to curb its influence is through education and public awareness. Brookie believes that by collaborating with companies like Facebook and Jigsaw, the DFRL has been able to scale with the problem in a sustainable way.
"Disinformation is a collective challenge, and it requires a collective response that includes government, media, social media platforms and -- most importantly -- every one of us," Brookie says. "The work of identifying, exposing and explaining disinformation is collaborative and often iterative. And we truly believe that more people doing this work is better than less people. That's why we are working with our team to grow the international community of [our investigators] committed to the fight for facts."