There's one thing Facebook CTO Mike Schroepfer always tells new engineers starting work at the company.
"There are going to be a lot of people in the world who have opinions and high expectations of the work we do," he says. "And that is entirely appropriate given the reach and scale of the products we build -- it's simply our job to live up to those expectations, every day."
Whether Facebook is living up to those expectations is an ongoing debate, and this year we've seen the stakes rise again as the impact of the coronavirus pandemic and a US presidential election have echoed across the internet.
Many of the challenges Facebook tackles -- removing violent content, fighting the spread of misinformation, preventing election interference, encouraging civic engagement, supporting communities during crisis -- might seem on the face of it like policy issues. But wherever there's a policy challenge, there's also a tech challenge, and it's Schroepfer who's in charge of solving it.
Speaking at Web Summit, the European tech conference (which is virtual this year), Schroepfer particularly focused on the AI solutions he's been in charge of building to tackle hate speech. Three years ago, no hate speech on the platform was caught by automated systems, but today 94.5% of it is picked up by Facebook's artificial intelligence tools. Facebook-owned Instagram has also seen a significant increase in the amount of content being automatically removed, he added.
"This is a story of tremendous progress," Schroepfer said. "But we're not done. There is fair criticism leveled every day where we miss a piece of content."
Both in his Web Summit session and in a briefing with reporters earlier in the week, Schroepfer addressed issues raised in the Netflix documentary The Social Dilemma about Facebook's platforms being optimized for attention and built in a way that polarizes debate.
The Facebook exec, who's been at the company for almost 12 years, said it was important to him that people feel as though they want to spend time on the social network and are getting out of it what they want. Even if problematic content can be temporarily engaging, it's ultimately detrimental to how people feel about Facebook and how much they want to use it, he said.
"The incentives are aligned for us to build what we call long-term value, which is, Do you launch our apps on your phone willingly because you like what you see and it provides value?" he said. "If it devolves into polarized fights with friends and family members, we know how that ends -- that ends in people not using our products."
Schroepfer said he's been working hard to try to understand when people feel they're having meaningful interactions with each other, which he described as a "really hard thing to measure." But trying to find ways to help people engage in a civil and empathetic manner is something he's keen on. "This is part of what I'm here to do because I think the alternatives, of shutting down and stifling connection and communication, making it more expensive for people to communicate with each other, doesn't seem like the right answer to me," he said.
When asked whether he's in denial about the harm Facebook is doing in the world, Schroepfer pointed to the fact that the company has massively reduced the cost of people communicating with each other. The example he gave was text messaging, which was once something people spent "many tens of billions of dollars on" but which now costs almost nothing, thanks to services such as Facebook-owned WhatsApp.
"I get up every morning saying, I want to give people free tools to talk to each other as easily as possible anywhere around the globe, and I want them to do it as safely and securely as possible," Schroepfer said. "Despite the challenges that we in the broader technology industry have faced, I'm still here because I believe that technology is one of the best ways we can improve the lives of people every single day."