Twitter is once again revising its rules to rein in what it deems abusive behavior, after updating its policies earlier this year to tackle revenge porn postings.
Though the San Francisco-based social media site said in a blog post that it will always "embrace and encourage" different opinions, it will not "tolerate behavior intended to harass, intimidate, or use fear to silence another user's voice."
The change comes at a time when radical terrorist groups, such as the Islamic State in Iraq and Syria (ISIS), have gained a strong presence on Twitter and use the site to spread their messages and communicate with followers. And despite ongoing efforts to combat online abuse and harassment (an issue Twitter itself acknowledges), tech companies like Twitter and Facebook are still seen as not doing enough.
Twitter's Abusive Behavior Policy already bans messages that threaten or promote violence and terrorism. But its rules now include new language for what's considered abusive behavior (new text is highlighted by CNET):
You may not promote violence against or directly attack or threaten other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or disease. We also do not allow accounts whose primary purpose is inciting harm towards others on the basis of these categories.
In addition to letting users block or mute abusive accounts, Twitter can also ask offending users to delete tweets found to violate the company's rules. If a user does not comply, the company can lock them out of their account altogether.