Twitter has announced changes to its policies, along with revised enforcement options, designed to combat abusive users on its social network. The changes include an expanded definition in Twitter's violent threats policy and more options for the company's support team to lock out accounts.
Twitter's violent threats policy now includes "threats of violence against others or promot[ing] violence against others" in addition to "direct, specific threats of violence against others". The company says:
Our previous policy was unduly narrow and limited our ability to act on certain kinds of threatening behavior. The updated language better describes the range of prohibited content and our intention to act when users step over the line into abuse.
Twitter's support team now has the option to lock accounts that generate abusive comments for specific periods of time, a measure the company feels will help when many users direct derogatory comments at a single person or group. Those users will be contacted and asked to delete the offending posts before their accounts can be reactivated.
Finally, Twitter is testing a new feature that will allow it to spot abusive posts. The company said:
This feature takes into account a wide range of signals and context that frequently correlates with abuse including the age of the account itself, and the similarity of a Tweet to other content that our safety team has in the past independently determined to be abusive. It will not affect your ability to see content that you've explicitly sought out, such as Tweets from accounts you follow, but instead is designed to help us limit the potential harm of abusive content. This feature does not take into account whether the content posted or followed by a user is controversial or unpopular.
These changes come one day after Twitter gave its users the option to receive direct messages from any other user, even those who are not following them.