Propaganda campaigns are more than fake news. Trolls have learned to game social media algorithms and manufacture artificial viral moments.
Going viral used to be harmless.
Chewbacca Mom got more than 162 million views on Facebook while laughing hysterically for four minutes and ended up on "The Ellen DeGeneres Show." The Mannequin Challenge was a goofy trend that got friends collaborating on elaborately staged videos. Tay Zonday sang his "Chocolate Rain" ballad on YouTube in 2007 and became an internet sensation.
But over the last few years, trolls learned how to turn trending moments into a tool for spreading misinformation. The same way that videos of cute cats spread online, trolls have figured out how to tap into what makes people want to share on social media and use it to popularize outrage and fake news.
The fallout is more serious than a spot on a daytime talk show -- it's widely believed that the rapid spread of fake news and an increasingly divisive online environment swayed the 2016 US presidential election. Now Facebook and Twitter face criticism that they've lost control of their platforms as algorithms promote fake news as "trending topics."
For Russia especially, viral content has become a powerful weapon. In September, Twitter discovered 201 Russian-linked accounts dedicated to spreading fake outrage, while Facebook found about 500 accounts doing the same. These accounts pretended to be gun rights advocates and Black Lives Matter activists, taking up both sides of debates, with the primary goal of making noise. Altogether, the fake accounts' content on Facebook had been seen more than 10 million times, and that was just the sponsored content.
If fake news is meant to misinform people, fake fights are designed to divide and distract. By spreading outrage, Russian trolls can bury legitimate news while driving people further apart. Manufactured conflict makes propaganda more effective, a key strategy detailed in Russia's military doctrine approved in 2014.
"Cybersecurity is no longer about protecting our data from theft," Rep. Don Beyer, a Democrat from Virginia, said during a hearing on cybersecurity last week.
"It's also about defending our democracy from disinformation campaigns that combine cyber assaults with influence operations."
Russian magazine RBC investigated a Russian trolling operation and found that it reached 30 million people a week on Facebook at the height of the 2016 US presidential election. (The article has not been translated, but an English summary is available.)
Here's how Russian trolls used social media to effectively wreak havoc in the US.
Going viral isn't as simple as flipping a switch, but for Russian troll factories, with an army of social media bots at their command, it might as well be.
Ben Nimmo, a defense and international security analyst with the Atlantic Council's Digital Forensic Research Lab, described the manufactured viral content as a three-step process.
"The goal of a propagandist is to spread your message, and the best way to do that is to get people to do it for you," Nimmo said. "You can't tell a million people what to do. You need to get 10 people, and they spread it en masse."
The campaign's goal is to land the topic among the trending hashtags, which means the fake outrage has hit the mainstream.
Nimmo has been following the spread of fake news and the trolls who use bots to push propaganda. Across these campaigns, he's spotted the same three-stage attack: a small core of accounts posts the message, "shepherd" accounts push it out, and a swarm of bots retweets it until the hashtag trends.
The attacks aren't always successful, and Twitter's gotten better at spotting bot campaigns. Trolls walk a thin line to make sure a campaign goes viral without getting caught.
"If you make it too many, you're going to get spotted. If you make it too few, you won't go viral," Nimmo said.
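The trade-off Nimmo describes can be sketched as a toy threshold model. Both thresholds below are invented for illustration -- the real values trolls and platforms operate with are unknown:

```python
# Toy model of the trade-off Nimmo describes: enough bot activity to trend,
# but not so much that automated anti-spam systems flag the campaign.
# Both thresholds are hypothetical numbers, purely for illustration.

TRENDING_THRESHOLD = 5_000    # hypothetical retweets/hour needed to trend
DETECTION_THRESHOLD = 20_000  # hypothetical rate at which anti-spam flags it

def campaign_outcome(retweets_per_hour: int) -> str:
    """Classify a bot campaign by its hourly retweet volume."""
    if retweets_per_hour >= DETECTION_THRESHOLD:
        return "spotted"   # "If you make it too many, you're going to get spotted."
    if retweets_per_hour < TRENDING_THRESHOLD:
        return "fizzled"   # "If you make it too few, you won't go viral."
    return "trending"      # the narrow window trolls aim for

for volume in (1_000, 8_000, 25_000):
    print(volume, campaign_outcome(volume))
```

The point of the sketch is that the viable window sits between two limits a troll factory can't observe directly, which is why campaigns routinely fail in both directions.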
In a Sept. 28 blog post, Twitter said it's had measures in place since 2014 to prevent bots from gaming the Trending Topics list. It catches an average of 130,000 shepherd accounts a day following the process Nimmo described. Over the last year, its automated systems flagged 3.2 million suspicious accounts per week, a Twitter spokeswoman said.
But while bots are relatively easy to detect, campaigns run by humans are much harder to spot.
"It's much trickier to identify non-automated coordination, and the risks of inadvertently silencing legitimate activity are much higher," Twitter said.
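One way to see why bots are the easy case is a rate-based heuristic of the kind automated systems can apply. The accounts, thresholds, and signals below are all hypothetical, not Twitter's actual detection logic:

```python
# A minimal sketch of rate-based bot flagging -- hypothetical accounts and
# thresholds, not Twitter's real detection system.

from dataclasses import dataclass

@dataclass
class Account:
    name: str
    tweets_per_day: float
    duplicate_ratio: float  # fraction of tweets that are verbatim copies

def looks_automated(acct: Account,
                    max_rate: float = 144.0,       # ~1 tweet/10 min, sustained
                    max_duplicates: float = 0.8) -> bool:
    """Flag accounts that post inhumanly fast or mostly copy-paste."""
    return (acct.tweets_per_day > max_rate
            or acct.duplicate_ratio > max_duplicates)

bot = Account("retweet_cannon", tweets_per_day=1200, duplicate_ratio=0.95)
human_troll = Account("angry_patriot", tweets_per_day=40, duplicate_ratio=0.1)

print(looks_automated(bot))          # trips both signals
print(looks_automated(human_troll))  # trips neither signal
```

A human-run troll account posts at human speed and writes original text, so it sails under both thresholds -- which is exactly the "non-automated coordination" problem Twitter describes, where tightening the thresholds risks silencing legitimate users.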
When Facebook announced that it had discovered hundreds of Russian accounts masquerading as groups arguing about US issues, it sounded all too familiar to Moira Whelan.
Whelan remembered warning Facebook about this exact thing in 2014, when she was deputy assistant secretary for digital strategy at the State Department. It was the height of the Ukraine-Russia conflict, and Whelan, along with officials from other countries, reached out to Facebook about a rise in manufactured arguments.
"Their algorithm is reactionary to things like 'happy birthday' and 'congratulations,' but also to fights," Whelan said. "Russians would simulate these fights and it would go up in people's feeds. We brought that to Facebook's attention, and it didn't register as a problem."
Facebook didn't respond to a request for comment.
Whelan used to notice Russian spambots clogging the comments on embassy Facebook pages in an attempt to drown out real people. That suddenly stopped in 2014, when Russia began occupying Ukraine. The tactics shifted to fake news and simulated fights.
They had taken advantage of Facebook's algorithm, something Lior Abraham, founder of behavior analytics company Interana and former Facebook engineer, never anticipated when he helped create the news feed.
Abraham worked at Facebook from 2007 to 2013, developing key features of the news feed and creating Scuba, a data analytics tool the social network still uses.
When he helped build the news feed, the goal was always to promote engagement with your friends and family, and not political discourse.
"We would just give priority to break-up stories and photos at the time," Abraham said. Over the years, the algorithm was tweaked to rely more on artificial intelligence and less on a human touch.
But the focus on engagement pushed arguments to the forefront, creating a news feed that Abraham can hardly recognize anymore.
"It's contrary to the original mission of creating communities," Abraham said. "You're just dividing larger communities."
So if you've noticed your Facebook feed getting more negative, that's because its algorithm has been promoting arguments, Whelan said. And with the rise of bots, and trolls getting more sophisticated, it's becoming harder to tell if that person you're arguing with is even real.
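The dynamic Whelan describes can be illustrated with a toy engagement-weighted ranking. The weights and posts below are invented; Facebook's real ranking model is far more complex and not public:

```python
# A toy engagement-weighted feed ranking, illustrating (not reproducing)
# the dynamic Whelan describes: a post that draws a pile of comments
# outranks quieter ones. Weights and posts are invented for illustration.

def engagement_score(post: dict) -> float:
    # Comments signal back-and-forth -- fights included -- so a ranker
    # that rewards interaction weights them most heavily.
    return post["comments"] * 3 + post["shares"] * 2 + post["likes"]

feed = [
    {"title": "Vacation photos",       "likes": 120, "comments": 4,   "shares": 2},
    {"title": "Manufactured argument", "likes": 30,  "comments": 250, "shares": 40},
    {"title": "Happy birthday post",   "likes": 90,  "comments": 25,  "shares": 1},
]

feed.sort(key=engagement_score, reverse=True)
for post in feed:
    print(engagement_score(post), post["title"])
```

Under any scoring that rewards raw interaction, the argument floats to the top of the feed even though far fewer people "liked" it -- trolls don't need to beat the algorithm, only to feed it the kind of engagement it already rewards.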
These propaganda campaigns are successful on social media because they use the same strategies that businesses do.
They use tools like Facebook's CrowdTangle, which tracks popular and trending posts. These trolling operations also schedule posts throughout the day and pay for promoted content -- just like any other social media manager.
These accounts get an unfair advantage: thousands of bot accounts at their command to drive up engagement.
"If everyone in your newsroom retweets your story, that's nice, but that's about 35 people," Whelan said. "They're getting into 30,000 or more."
Jonathan Albright, a research director at Columbia University's Tow Center for Digital Journalism, looked closely at how these fake accounts operated and noticed how similar they were to businesses.
He saw hand-off times and marketing tools that helped trolls stir things up for even more people.
"They've really pushed outrage and negative reactions," Albright said. "They're using the same analytics tools that spammers use. They see the top trending story and see what people are already angry about, and frame it in that political narrative."