Twitter plans to take more steps to combat user harassment on its network.
The social network on Wednesday unveiled another round of updates, including more proactive identification of abusive accounts, more user control over which tweets they see, and more transparency about how the platform is curbing abusive behavior.
The latest updates are based on feedback provided by users since late December on how to improve Twitter. That feedback was solicited by CEO Jack Dorsey, who tweeted a request for users' suggestions. Abuse was one of four major themes and has now become a top priority.
"We've already seen an impact from these updates," Ed Ho, Twitter's vice president of engineering, said in a blog post. The new updates will be appearing in the next few days and weeks, Twitter said.
These updates come as Twitter ramps up its long-awaited efforts to curb abusive behavior. With harassment issues dating back several years, Twitter announced three changes last month it believes will help curtail abusive tweets, potentially sensitive content and accounts infiltrating the platform.
Also last month, Twitter quietly began limiting the reach of accounts it believes engage in abusive behavior.
On Wednesday, Twitter said it is now identifying accounts it suspects of engaging in abusive behavior, based on algorithms and human monitoring -- even if the account hasn't been reported by a user.
"Since these tools are new, we will sometimes make mistakes, but know that we are actively working to improve and iterate on them every day," Ho said.
Twitter is still "playing catch-up," said Stephen Balkam, CEO of the Family Online Safety Institute, a member of Twitter's Trust and Safety Council, a group of more than 60 organizations and experts working to prevent abuse.
"You can have all of the tools you want, but if you don't back it up with coding and actual humans reviewing and taking stuff down, then all of the great pronouncements will amount to very little," he said.
The social network also said that, based on user feedback, it is expanding its mute feature beyond muting keywords, phrases and entire conversations in notifications. Users can now mute content from their home timeline and choose how long it stays muted -- from one day to indefinitely.
"This is what we've been asking for, people having the capabilities to block or mute accounts they don't want to see," said Jamia Wilson, executive director of Women, Action & the Media (WAM), another member of Twitter's safety council. "We have to do a better job of holding abusers accountable for their actions."
Twitter also said it's going to do a better job of notifying users who report abuse, whether on their own account or another user's. Instead of an email, Twitter will alert users in the notifications tab when it has received a report and whether any action was taken.
Balkam said Twitter has to keep searching for that "fine line" between protecting the safety of its 319 million monthly users and protecting free speech.
"I describe it as walking a tight rope that's on fire, so you have to move pretty quickly," he said.