
Facebook: We're making life hard on trolls trying to influence you

Nathaniel Gleicher, Facebook's head of cybersecurity policy, tells CNET about the challenges of stopping the coordinated efforts to influence US politics.

Laura Hautala

Blocking political influence campaigns is a lot like stopping hackers, says Nathaniel Gleicher, Facebook's head of cybersecurity policy.

James Martin/CNET

Nathaniel Gleicher used to fight hackers from the White House as a member of the National Security Council during the Obama administration. Now he's trying to stop coordinated influence campaigns from spreading across Facebook.

The two jobs aren't that different, he says.

As Facebook's head of cybersecurity policy, he's responsible for the team trying to stop fakers from infiltrating politics-oriented groups on Facebook. Doing that requires many of the same tools he'd use to stop hackers from breaking into the social network's computers.

In both cases, Gleicher's team employs a combination of automated tools and old-fashioned detective work to suss out whether someone is who they say they are.

In the first three months of this year, Facebook said it removed 583 million fake accounts within minutes of their creation. That may sound like a lot; it's more than the population of the US, Mexico and Canada combined. But, Gleicher said, people still try.

A lot.


Nathaniel Gleicher is Facebook's head of cybersecurity policy.

Facebook

Stopping them is a key part of tackling coordinated influence campaigns, which rely on fake accounts to promote events and buy ads -- like one about supporting the police that was seen by more than 1 million Facebook users -- that tend to have a political bent. Troll-backed social media campaigns can go viral, with the goal of sparking divisive online arguments. That's why Facebook wants to spot fake accounts from the start and stop the campaigns before they get going.

"Everything we do to identify and stop fraud, also makes things harder for these sophisticated actors," Gleicher said.

Gleicher's job has been in the news a lot lately. On Tuesday, Facebook said it found fake accounts collaborating with real activists from Washington, DC, to help promote a protest. The discovery -- and Facebook's decision to take down an event page started by the fake accounts but run by legitimate activists -- underscored just how challenging the problem of information campaigns is to solve.

The decision required Facebook to eliminate the pages of some legitimate users because they had become involved with accounts that weren't authentic. It wasn't an easy call, but Gleicher says it was necessary to show the fakers that their efforts to insert themselves into American activism were failing.

With Alex Stamos, Facebook's chief security officer, finalizing his departure, Gleicher's role takes on new importance. In an interview with CNET, Gleicher explained what Facebook does with pages created by fake accounts and what it's doing to predict future abuse of its platform.

Here are edited excerpts from our conversation.

Q: What is your goal here when it comes to misinformation campaigns and fake news? What does Facebook consider doable and what does it consider not doable?

It's not something that we're ever going to solve. We're never going to eliminate all information campaigns from communication. That's not something that any social media platform could do. That's not something that society could do.

There's an old joke about security and it fits here. It's easy to make a computer secure. The way you do it is you turn it off, you unplug it and you throw it in the ocean. 

There's always risk. But what you can do, and what we're focused on, is you can actually make it more difficult for threat actors to engage in the behaviors that they use to manipulate the platform.

Activists based in Washington, DC, have said they're unhappy that Facebook took down the event page for the No Unite the Right 2 counterprotest to the Charlottesville anniversary march planned by white supremacists in DC. How do you respond to that, and where do you draw the line on deleting event pages that may be tinged with foreign influence?

This was one of the trickiest parts: One of the things we saw this threat actor trying to do was intermingle their activities with legitimate activists in order to pose exactly the challenge you're talking about.


So there's a trap here. If you completely remove the event, you are eliminating the authentic speakers' ability to persist. [If you leave it up], you're essentially letting inauthentic actors get on the platform and create a whole bunch of events. Then, even if their accounts get taken down, the events persist, which creates this perverse incentive for inauthentic actors.

We identified the authentic co-hosts and we reached out to them before all the publicity started, to explain to them why it was removed... and to make clear that it had nothing to do with the substance of the event.

We want to make sure that the authentic co-hosts understand the types of activities we're seeing with these threat actors, who are looking to influence and interfere with our public debate.

Is Facebook sharing information with other services like Twitter, YouTube and Reddit to help them identify fake accounts that the same bad actors are creating on their sites?

Before we made the public announcement, we sat down with folks from law enforcement, and we also engaged with some of the other tech companies who were in a position to either use information we could give them, or maybe be able to give us insights.

Can you say what those other platforms are?

Because of the nature of these investigations, I can't talk about specific partnerships. But we've been engaged and a number of the platforms have been thinking deeply about "How can we engage more?"

Is there a plan not to just deal with this, but actually seek out the next threat and proactively stop it before it becomes a problem too? Do you have teams of people thinking of worst-case scenarios?

We do. There are different types of teams that work in that space, and as you'd imagine, they do technical steps as well as information steps.

This is another place where we're collaborating with academia and the expert community, collaborating with government experts, and collaborating with the other platforms.

One we can talk about is certainly our partnership with the social science research community. This is now called Social Science One, where one of the questions has been how we get the data the social science community needs to study all these phenomena out to academia in a way that makes it accessible for study, and also ensures the privacy of our users is protected.

Are there other people in the room deciding how to deal with fake accounts? Is it just you and your engineers, or are you also bringing in ethicists, lawyers, PR people or other kinds of experts? 

There are many people in different parts of the company working on this, and they've been working on it for a long time. If you think about information operations and our elections integrity work, there are a lot of different teams tackling different parts of the puzzle, and there's no one piece that can be a solution by itself.

As for who's in the room, I think when we're considering how to take an action, you absolutely will have the investigators. You'll have the policy teams that are thinking about what should be permitted and not be permitted on the platform and all the implications that has. You have the legal team that is thinking about the legal consequences here. 

It's a complex enough decision that we make sure there are a lot of different people within the company that have visibility as we're making that decision.
