Twitter's fight against harassment kicks into a higher gear

The social network releases more details on how it will punish those engaging in abusive and hateful behavior.

Terry Collins Staff Reporter, CNET News
Terry writes about social networking giants and legal issues in Silicon Valley for CNET News. He joined CNET News from the Associated Press, where he spent six years covering major breaking news in the San Francisco Bay Area. Before the AP, Terry worked at the Star Tribune in Minneapolis and the Kansas City Star. Terry's a native of Chicago.
Holding her sign high in the air, Jessie Woletz had a simple message for Twitter: "#StopTheHate." 

"It's really hard to trust Twitter right now," said Woletz, standing with a handful of protesters Friday night outside Twitter's headquarters in San Francisco, pleading for the social network to curtail harassment. "They say they're going to be doing something about it. I hope it will be better than what it is."

Jessie Woletz demonstrates her displeasure with Twitter during a protest outside the company's San Francisco headquarters Friday night. 

Terry Collins/CNET

On Monday, Twitter said it was moving in that direction as it began enforcing an updated policy aimed at reducing the amount of abusive and hateful content on the platform. Among the changes are prohibitions against users promoting violence and hate in their usernames or bios; the prospect of permanent suspension for accounts threatening violence, serious physical harm or death; and a ban on accounts featuring hate symbols and images. 

As of mid-afternoon, at least 20 notable accounts had been either banned or suspended as a result of the rules, according to the Southern Poverty Law Center, the Anti-Defamation League and the UK-based Resisting Hate, all watchdog groups. The banned include Jayda Fransen, the deputy leader of fringe group Britain First, as well as her group's official account; white nationalist Jared Taylor and his American Renaissance group; and the American Nazi Party.

Twitter declined to share details about specific accounts on Monday, acknowledging that its efforts are a work in progress and that it will likely tweak how it applies its policies. "We'll evaluate and iterate on these changes in the coming days and weeks," the company said in a blog post.

The updates weren't unexpected and came as scheduled on Twitter's safety calendar, which specifies when changes to halt abuse take effect. At users' urging, the social network vowed last year to curb chronic harassment and hate speech aimed at women and minorities, even as it struggles to achieve its stated mission of providing a platform for free expression.

Twitter CEO Jack Dorsey's promises of changes to the company's harassment policies and more transparency on how it will protect its 330 million users from abusive behavior have been met with mixed emotions.  

"They are taking their responsibilities quite seriously. Twitter realizes they have a series of problems on their hands, and they are facing a lot of pressure," Stephen Balkam, CEO of the Family Online Safety Institute and a member of Twitter's Trust and Safety Council, said Saturday.  "I think this has been a year of playing catch-up for the company, and they've come under scrutiny like never before." 

Supporters of the banned accounts quickly took to Twitter to express their anger at the suspensions. Shortly after the banned accounts ceased activity, #TwitterPurge became a global trending hashtag.

Richard Spencer, a prominent white nationalist who elicited Nazi-style salutes after celebrating Donald Trump's election as president last year, tweeted that he didn't see any "systematic method" to the bans. 

Twitter's removal of accounts extends beyond the "alt-right," the term used to describe a fringe group of white supremacists and neo-Nazis. The social network suspended at least two accounts linked to the New Black Panther Party, which the Anti-Defamation League has said uses "inflammatory bigotry" and "calls for violence." Members of the original Black Panthers have denounced the group. 

Then there's Jared Taylor, who runs American Renaissance. He said he just about fell out of his chair when he heard his personal account and his organization's account were suspended. Taylor, who describes himself as a "race realist," said he has never promoted violence.

"Not even my worst enemies have ever accused me of condoning or promoting or even hinting at violence," Taylor said in a phone interview. "So long as you don't break the law or defame someone, I thought free speech was an exchange of lively ideas, but apparently Twitter doesn't think so."

Even watchdog groups said they were puzzled by some of the applications of the policy. For example, David Duke, a former Ku Klux Klansman, was still able to tweet after the new policy went into effect.  

"I can't figure it out right now, don't know why Duke is up and American Renaissance is down," said Heidi Beirich, director of the SLPC Intelligence Project. "I don't know where the cutoff line is."

Meanwhile, as the hate lingers, Twitter has made a number of changes throughout the year. In July, for instance, it said it had disciplined 10 times more accounts than it did in 2016. In October, Dorsey said in a tweetstorm that more changes were on the way, responding to a #WomenBoycottTwitter protest urging folks to not tweet for a day to force Twitter to improve how it vets content. That came on the heels of #MeToo, the hashtag campaign inspired by the allegations made against Hollywood mogul Harvey Weinstein, which sparked an international movement encouraging women from all walks of life to speak up about sexual assaults and harassment.

Last month, Twitter temporarily suspended its verification process after drawing outrage for giving its official blue-and-white check mark to accounts of noted white supremacists.

Everything in context

Twitter said Monday it will have a range of options for enforcement, with a focus on context and behavior. For example, the company said that although some tweets may seem abusive on the surface, they may not be "when viewed in the context of a larger conversation."

Also, Twitter said that responses will depend on the severity of a tweet and the user account's previous record. "For example, we may ask someone to remove the offending tweet before they can tweet again," the company said on its hateful conduct policy page. "For other cases, we may suspend an account."

However, there are some exceptions. Under Twitter's rules, organizations whose accounts promote violence may be subject to discipline and even suspension. Yet the company says those same rules don't apply to "military or government entities and we will consider exceptions for groups that are currently engaging in (or have engaged in) peaceful resolution."

The new enforcement policy applies directly to "violent extremist groups," and the company said "groups with representatives elected to public office through democratic elections" are excluded. This may mean that those threatening tweets from President Donald Trump to North Korea aren't covered.

A Twitter spokesperson on Monday declined to provide further comment about the rule. 

Given the gray areas the company sees in tweeting, it promises to acknowledge if it goes too far in some cases.

"In our efforts to be more aggressive here, we may make some mistakes and are working on a robust appeals process," the company said in its Monday blog post.

As for the 20 accounts either suspended or banned, Resisting Hate co-founder Roanna Carelton-Taylor wishes there were more, but figures Twitter is doing it in waves. "Haters are getting better at playing the game and learning how to communicate their venom in less reportable ways," she said.

Balkam, a member of Twitter's safety council, an advisory group of more than 60 organizations and experts working to help prevent abuse on the social network, said the policies are a lot clearer than a year ago. 

Now, it's a matter of executing them, he said.  

"The real issue is going to come over the next six to nine months as to how they use these policies in terms of human review, and also how their machine learning, their algorithms and their artificial intelligence are going to pick this stuff up and determine how accurate it is," he added.

After an hour of protesting on Friday, Woletz, a 34-year-old San Francisco resident who lives not far from Twitter's headquarters, said she'll be back if she thinks the company isn't doing enough to eradicate the hate. 

Woletz cited a series of graphic anti-Muslim tweets and videos from Britain First's Fransen that were retweeted by Trump on Dec. 1 and that Twitter didn't remove. The company said the tweets somehow didn't violate its policies, despite being widely condemned by critics.

"I want Twitter to stop the doublespeak," Woletz said. "They need to do a lot more than they've done."

First published Dec. 18, 6 a.m. PT.
Update, 1:45 p.m. PT: Adds details about accounts being suspended or banned as a result of Twitter's new rules.
Update, 2:33 p.m. PT: Adds comments from Jared Taylor of American Renaissance, the SPLC and others about Twitter's application of the new policies. 
