
Facebook forced to respond to violent sexist images

Facebook says users who share "cruel and insensitive content" will have to post it using their real names, in response to a campaign.

Facebook says users who share "cruel and insensitive content" will have to post it using their real names, after a social media campaign against violent sexist images persuaded 15 brands to pull their advertising from the company.

In a lengthy blog post responding to the UK-based Everyday Sexism project's #FBrape campaign, the social network also promised to update its policies and improve training for those it employs to moderate content that's been flagged by users.

"A few months ago we began testing a new requirement that the creator of any content containing cruel and insensitive humour include his or her authentic identity for the content to remain on Facebook," writes Marne Levine, Facebook's vice president of global public policy.

"As a result, if an individual decides to publicly share cruel and insensitive content, users can hold the author accountable and directly object to the content. We will continue to develop this policy based on the results so far, which indicate that it is helping create a better environment for Facebook users."

This addresses the issue of groups with such charming names as 'Slapping hookers in the face with a shoe', 'This is why Indian girls are raped' and 'I kill bitches like you'. These groups can be run anonymously and post exactly this kind of content.

While I don't think it's appropriate to share many of the images featured in the Everyday Sexism campaign, if you want to see examples of the kind of demeaning rubbish these morons think is funny, I encourage you to look at its Twitter image feed, or this blog post by Sunny Hundal.

It's currently unclear exactly what will change. Facebook may require all groups to be run by a named user whose identity has been authenticated in some way, or it may impose that requirement only when it receives serious complaints about a group's content. An obvious concern is that this requirement could impinge on the free speech of groups with legitimate reasons for staying anonymous -- employees discussing their company's illegal practices, for example.

The people behind the campaign were quick to praise Facebook's response.

"It is because Facebook has committed to having policies to address these issues that we felt it was necessary to take these actions and press for that commitment to fully recognize how the real world safety gap experienced by women globally is dynamically related to our online lives," explains the author and activist Soraya Chemaly, who helped start the campaign.

"Facebook has already been a leader on the Internet in addressing hate speech on its service," says the Women, Action & the Media group in a statement. "We believe that this is the foundation for an effective working collaboration designed to confront gender-based hate speech effectively. Our mutual intent is to create safe spaces, both on- and off-line. We see this as a vital and essential component to the valuable work that Facebook is doing to address cyber-bullying, harassment and real harm."

After widespread coverage in the traditional and social media in the past week, brands such as Nissan and Nationwide removed their ads from Facebook, while Dove said it was "aggressively working with Facebook to resolve this issue." American Express said it was "in close contact with Facebook" to understand how its ads appeared next to violent sexist images.

Have you complained to Facebook about offensive images? Did the company act in a reasonable way? What can it do to make it a safer place? Let me know in the comments, or on our own Facebook page.