
Social networks struggle to shut down racist abuse after England's Euro 2020 final loss

Social media users have been frustrated at having to perform moderation duties to keep racist abuse in check.

Katie Collins, Senior European Correspondent

Bukayo Saka of England is consoled by head coach Gareth Southgate.

Laurence Griffiths/Getty Images

Sunday night marked a moment of national heartbreak for England's football fans as the country's squad came close to winning its first major international tournament in over half a century, before losing to Italy in a penalty shootout. It also marked another ugly incident of racism on social media, with some supporters hurling all their anger and frustration at the three players who had missed their penalty kicks -- all of whom happened to be Black.

While the national team and manager Gareth Southgate made it clear that the loss was something the whole team was shouldering together, some disgruntled supporters went on Twitter and Instagram and specifically targeted Marcus Rashford, Jadon Sancho and Bukayo Saka.

The vitriol presented a direct challenge to the social networks: an event-specific spike in hate speech that required them to refocus their moderation efforts to contain the damage. It's just the latest incident requiring these companies to be on guard during highly charged political or cultural events. And while they have a standing process that combines machine-automated tools and human moderators to remove such content, the episode is another source of frustration for those who believe the social networks aren't quick enough to respond.

To plug the gap, companies rely on users to report content that violates their guidelines. Following Sunday's match, many users shared tips and guides about how best to report content, both to the platforms and to the police. It was disheartening for those same users to be told that a company's moderation technology hadn't found anything wrong with the racist abuse they'd highlighted.

It also left many users wondering why a billion-dollar company like Facebook was unprepared and ill-equipped to deal with an easily anticipated influx of racist content, instead leaving it to unpaid good Samaritan users to report.

No gray areas when it comes to racism

For social media companies, moderation can fall into a gray area between protecting free speech and protecting users from hate speech. In these cases, they must judge whether user content violates their own platform policies. But this wasn't one of those gray areas. 

Racist abuse is classified as a hate crime in the UK, and London's Met Police said in a statement that it would be investigating incidents that occurred online following the match. In a follow-up email, a spokesman for the Met said the instances of abuse were being triaged by the Home Office and then passed to local police forces to deal with.

Twitter "swiftly" removed over 1,000 tweets through a combination of machine-based automation and human review, a spokesman said in a statement. In addition, it permanently suspended "a number" of accounts, "the vast majority" of which it proactively detected itself. "The abhorrent racist abuse directed at England players last night has absolutely no place on Twitter," said the spokesman.

Meanwhile, there was frustration among Instagram users who were identifying and reporting, among other abusive content, strings of monkey emojis (a common racist trope) being posted on the accounts of Black players.

Instagram's hate speech policies prohibit using emojis to attack people based on protected characteristics, including race. Human moderators working for the company take context into account when reviewing the use of emojis.

But in many of the cases users reported in which the platform failed to remove monkey emojis, it appears the reports never reached human moderators. Instead, they were handled by the company's automated software, which told users that "our technology has found that this comment probably doesn't go against our community guidelines."
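Instagram hasn't disclosed how its automated review works, but the gap users ran into is easy to illustrate. The sketch below is purely hypothetical and is not Instagram's system (the names naive_screen, context_aware_screen and BLOCKLIST are invented for illustration): a context-free screen that only checks comments against a static blocklist will pass a string of monkey emojis, while the judgment a human moderator applies, namely whether abusive symbols are aimed at a person, catches it.

```python
# Purely illustrative sketch: NOT Instagram's actual system.
# It shows why a context-free, blocklist-style screen can pass
# emoji-based abuse that a context-aware review would catch.

BLOCKLIST = {"example_slur"}  # hypothetical list of banned terms
MONKEY_EMOJIS = {"\U0001F412", "\U0001F98D", "\U0001F435"}  # monkey, gorilla, monkey face

def naive_screen(comment: str) -> bool:
    """Flag a comment only if it contains a blocklisted word."""
    return any(word in BLOCKLIST for word in comment.lower().split())

def context_aware_screen(comment: str, aimed_at_person: bool) -> bool:
    """Also flag emojis directed at a person: the kind of context
    a human reviewer weighs when applying hate speech rules."""
    if naive_screen(comment):
        return True
    has_monkey_emoji = any(ch in MONKEY_EMOJIS for ch in comment)
    return has_monkey_emoji and aimed_at_person

abuse = "\U0001F435" * 3  # a string of monkey-face emojis

print(naive_screen(abuse))                # False: slips through the blocklist
print(context_aware_screen(abuse, True))  # True: flagged when posted on a player's account
```

Real systems are of course far more sophisticated, typically machine-learned classifiers rather than blocklists, but the users' complaint amounts to the same mismatch: automation that can't infer whom a comment targets.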

A spokeswoman for Instagram said in a statement that "no one should have to experience racist abuse anywhere, and we don't want it on Instagram." 

"We quickly removed comments and accounts directing abuse at England's footballers last night and we'll continue to take action against those that break our rules," she added. "In addition to our work to remove this content, we encourage all players to turn on Hidden Words, a tool which means no one has to see abuse in their comments or DMs. No one thing will fix this challenge overnight, but we're committed to keeping our community safe from abuse."

Football's racism problem meets tech's moderation problem

The social media companies shouldn't have been surprised by the reaction. 

Football professionals have been feeling the strain of the racist abuse they suffer online -- and not just following this one England game. In April, England's Football Association organized a social media boycott "in response to the ongoing and sustained discriminatory abuse received online by players and many others connected to football."

English football's racism problem is not new. In 1993, the problem forced the Football Association, Premier League and Professional Footballers' Association to launch Kick It Out, a program to fight racism, which became a fully fledged organization in 1997. Under Southgate's leadership, the current iteration of the England squad has embraced anti-racism more vocally than ever, taking the knee in support of the Black Lives Matter movement before matches. Still, racism in the sport prevails -- online and off.

On Monday, the Football Association strongly condemned the online abuse following Sunday's match, saying it's "appalled" at the racism aimed at players. "We could not be clearer that anyone behind such disgusting behaviour is not welcome in following the team," it said. "We will do all we can to support the players affected while urging the toughest punishments possible for anyone responsible."

Social media users, politicians and rights organizations are demanding internet-specific tools to tackle online abuse, as well as for perpetrators of racist abuse to be prosecuted just as they would be offline. As part of its "No Yellow Cards" campaign, the Center for Countering Digital Hate is calling for platforms to permanently ban users who spout racist abuse.

In the UK, the government has been trying to introduce regulation that would force tech companies to take firmer action against harmful content, including racist abuse, in the form of the Online Safety Bill. But it has also been criticized for moving too slowly to get the legislation in place.

Tony Burnett, the CEO of the Kick It Out campaign (which Facebook and Twitter both publicly support), said in a statement Monday that both the social media companies and the government need to step up to shut down racist abuse online. His words were echoed by Julian Knight, a member of Parliament and chair of the Digital, Culture, Media and Sport Committee.

"The government needs to get on with legislating the tech giants," Knight said in a statement. "Enough of the foot dragging, all those who suffer at the hands of racists, not just England players, deserve better protections now."

As pressure mounts on them to take action, the social networks have also been stepping up their own moderation efforts and building new tools, with varying degrees of success. The companies track and measure their own progress; Facebook also relies on an independent oversight board to assess its performance.

But critics of the social networks also point out that the way their business models are set up gives them very little incentive to discourage racism. Any and all engagement will increase ad revenue, they argue, even if that engagement is people liking and commenting on racist posts.

"Facebook made content moderation tough by making and ignoring their murky rules, and by amplifying harassment and hate to fuel its stock price," former Reddit CEO Ellen Pao said on Twitter on Monday. "Negative PR is forcing them to address racism that has been on its platform from the start. I hope they really fix it."