Twitter creates its rules to keep people safe on Twitter, and they continuously evolve to reflect the realities of the world in which it operates. Its primary focus is on addressing the risks of offline harm, and research shows that dehumanizing language increases that risk. Late last year, Twitter took a new approach, seeking feedback from the Arabic-, English-, Spanish-, and Japanese-speaking public on an update to its hateful conduct policy around dehumanization. As a result, after months of conversations and feedback from the public, external experts, and its own teams, Twitter is expanding its rules against hateful conduct to include language that dehumanizes others on the basis of religion.
Starting today, Twitter will require Tweets that dehumanize others on the basis of religion to be removed from the platform when they’re reported.
If reported, Tweets that break this rule but were sent before today will need to be deleted; however, they will not directly result in any account suspensions, because they were posted before the rule was in place.
Why start with religious groups?
Last year, Twitter asked for feedback to ensure it considered a wide range of perspectives and heard directly from the different communities and cultures who use Twitter around the globe. Arabic speakers were invited to submit their responses via Twitter MENA’s Arabic blog, and the same invitation was extended to English, Spanish, and Japanese speakers. In two weeks, Twitter received more than 8,000 responses from people in more than 30 countries.
Some of the most consistent feedback it received included:
• Clearer language – Across languages, people believed the proposed change could be improved by providing more details, examples of violations, and explanations for when and how context is considered. Twitter incorporated this feedback when refining the rule, and also made sure to provide additional detail and clarity across all of its rules.
• Narrow down what’s considered – Respondents said that “identifiable groups” was too broad and that they should be allowed to engage with political groups, hate groups, and other non-marginalized groups using this type of language. Many people wanted to “call out hate groups in any way, any time, without fear.” In other instances, people wanted to be able to refer to fans, friends, and followers in endearing terms, such as “kittens” and “monsters.”
• Consistent enforcement – Many people raised concerns about Twitter’s ability to enforce its rules fairly and consistently, so Twitter developed a longer, more in-depth training process with its teams to make sure reviewers were better informed when evaluating reports. For this update, it was especially important to spend time reviewing examples of what could potentially break the rule, given the change in scope described above.
Through this feedback, and through discussions with outside experts, Twitter also confirmed that there are additional factors it needs to better understand, and be able to address, before it expands this rule to cover language directed at other protected groups, including:
• How does it protect conversations people have within marginalized groups, including those using reclaimed terminology?
• How does it ensure that its range of enforcement actions take context fully into account, reflect the severity of violations, and are necessary and proportionate?
• How can – or should – Twitter factor whether a given protected group has been historically marginalized and/or is currently being targeted into its evaluation of the severity of harm?
Twitter will continue to build a platform for the global community it serves and to ensure the public helps shape its rules, its product, and how it works. As it looks to expand the scope of this change, the company will share what it learns and how it is addressing it within its rules. It will also continue to provide regular updates on all of the other work it’s doing to make Twitter a safer place for everyone.