The democratizing power of the internet has been a tremendous boon for individuals, activists, and small businesses all over the world. But bad actors have long tried to use it for their own ends. White supremacists used electronic bulletin boards in the 1980s, and the first pro-al-Qaeda website was established in the mid-1990s. While the challenge of terrorism online isn’t new, it has grown increasingly urgent as digital platforms become central to our lives. At Facebook, we recognize the importance of keeping people safe, and we use technology and our counterterrorism team to do it.
Defining Terrorism
We define a terrorist organization as: “Any non-governmental organization that engages in premeditated acts of violence against persons or property to intimidate a civilian population, government, or international organization in order to achieve a political, religious, or ideological aim.” (Updated on April 25, 2018, to clarify that our definition applies to terrorist organizations.)
Our definition is agnostic to the ideology or political goals of a group, which means it includes everything from religious extremists and violent separatists to white supremacists and militant environmental groups. It’s about whether they use violence to pursue those goals.
Our counterterrorism policy does not apply to governments. This reflects a general academic and legal consensus that nation-states may legitimately use violence under certain circumstances. Certain content around state-sponsored violence, though, would be removed by our other policies, such as our graphic violence policy.
Enforcement
Facebook policy prohibits terrorists from using our service, but it isn’t enough to just have a policy. We need to enforce it. Our newest detection technology focuses on ISIS, al-Qaeda, and their affiliates — the groups that currently pose the broadest global threat. (We described some of our algorithmic tools last June and offered an update on our progress in November.)
We’ve made significant strides finding and removing their propaganda quickly and at scale. Detection technology has been critical to our progress, as has our counterterrorism team of 200 people, up from 150 last June. That team will continue to grow.
New Enforcement Data
While our metrics remain in development, today we want to provide updated data about our enforcement against ISIS, al-Qaeda, and their affiliates in the first quarter of 2018.
We’re removing more content. In Q1 we took action on 1.9 million pieces of ISIS and al-Qaeda content, roughly twice as much as in the previous quarter. (“Taking action” means that we removed the vast majority of this content and added a warning to a small portion that was shared for informational or counter-speech purposes. This number likely understates the total volume, because when we remove a profile, Page, or Group for violating our policies, all of the corresponding content becomes inaccessible, but we don’t go back to classify and label every individual piece of content that supported terrorism.)
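To make the “taking action” breakdown concrete, here is a minimal sketch, in Python, of how actioned pieces could be tallied by action type. The record structure and labels are hypothetical illustrations, not Facebook’s actual systems; note that content made inaccessible when a whole profile, Page, or Group is removed would not appear in a per-piece tally like this, which is one reason the reported number understates the total.

```python
# Minimal illustration (hypothetical field names, not Facebook's systems):
# tally actioned pieces by the action applied, as in the Q1 "taking action" figure.
from collections import Counter

# Each actioned piece is recorded with the action applied to it:
# "removed", or "warning" for content shared for informational / counter-speech purposes.
actioned_pieces = [
    {"content_id": "c1", "action": "removed"},
    {"content_id": "c2", "action": "removed"},
    {"content_id": "c3", "action": "warning"},
]

tally = Counter(piece["action"] for piece in actioned_pieces)
total_actioned = sum(tally.values())

# Content that became inaccessible because its whole profile, Page, or Group
# was removed is not individually classified, so it never enters this tally.
print(f"actioned: {total_actioned} (removed: {tally['removed']}, warning: {tally['warning']})")
```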
We find the vast majority of this content ourselves. In Q1 2018, 99% of the ISIS and al-Qaeda content we took action on was not user reported. In most cases we found this material thanks to advances in our technology, but the figure also includes detection by our internal reviewers. In a small percentage of cases, people report a profile, Page, or Group that we do not remove in its entirety, because as a whole it does not violate our policies, but we do remove the specific content within it that breaches our standards. The Q1 2018 figure aligns with the figure we released in November, but we have evolved how we calculate it, most importantly by counting re-shared content in the calculation.
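As a rough illustration of how a proactive-detection figure like the 99% above could be computed once re-shares are counted as separate pieces, here is a small Python sketch. The data structure and field names are assumptions made for the example, not Facebook’s actual data model.

```python
# Illustrative sketch only: computing the share of actioned content found
# without a user report. ActionedContent and its fields are hypothetical.
from dataclasses import dataclass

@dataclass
class ActionedContent:
    content_id: str
    is_reshare: bool      # re-shares are counted as separate pieces
    user_reported: bool   # True if a user report preceded the action

def proactive_rate(actioned: list[ActionedContent]) -> float:
    """Share of actioned pieces (including re-shares) found without a user report."""
    if not actioned:
        return 0.0
    proactive = sum(1 for item in actioned if not item.user_reported)
    return proactive / len(actioned)

# Example: 3 of 4 actioned pieces were found before any user report -> 75%
sample = [
    ActionedContent("a", is_reshare=False, user_reported=False),
    ActionedContent("b", is_reshare=True,  user_reported=False),
    ActionedContent("c", is_reshare=True,  user_reported=False),
    ActionedContent("d", is_reshare=False, user_reported=True),
]
print(f"{proactive_rate(sample):.0%}")  # 75%
```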
We take down newly uploaded content quickly. Content uploaded to Facebook tends to get less attention the longer it’s on the site — and terrorist material is no different. As we have improved our enforcement, we’ve prioritized the work to identify newly uploaded material. In Q1 2018, the median time on platform for newly uploaded content surfaced with our standard tools (including both user reports and content we find ourselves) was less than one minute.
We remove not just new content but old material too. We have built specialized techniques to surface and remove older content. Of the terrorism-related content we removed in Q1 2018, more than 600,000 pieces were identified through these mechanisms. We intend to do this for more content in the future. In Q1 2018 our historically focused technology found content that had been on Facebook for a median time of 970 days. (From a measurement perspective, we do not think a single median combining the contemporary tools and the tools designed to find old content is particularly useful for understanding the situation today, but for curious readers it is about 52 hours.)
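For readers who want to see how a median “time on platform” works in practice, the sketch below computes it for two cohorts: content surfaced by the standard, contemporary tools and content surfaced by the historically focused tools. The records, timestamps, and cohort labels are invented for illustration and do not reflect Facebook’s actual pipeline.

```python
# Illustrative sketch only: median time-on-platform for removed content,
# split by which kind of tooling surfaced it (hypothetical data).
from datetime import datetime, timedelta
from statistics import median

def median_time_on_platform(records):
    """records: iterable of (uploaded_at, actioned_at) datetime pairs."""
    durations = [actioned - uploaded for uploaded, actioned in records]
    return median(durations)

now = datetime(2018, 3, 31)

standard_tools = [
    (now - timedelta(seconds=40), now),   # caught within a minute of upload
    (now - timedelta(seconds=70), now),
]
historical_tools = [
    (now - timedelta(days=970), now),     # old material surfaced years later
    (now - timedelta(days=800), now),
]

print("standard tools:", median_time_on_platform(standard_tools))
print("historical tools:", median_time_on_platform(historical_tools))
```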
We’re under no illusion that the job is done or that the progress we have made is enough. Terrorist groups are always trying to circumvent our systems, so we must constantly improve. Researchers and our own teams of reviewers regularly find material that our technology misses. But we learn from every misstep, experiment with new detection methods, and work to expand the range of terrorist groups we target.
In 1984, the Irish Republican Army (IRA) failed in an assassination attempt against British Prime Minister Margaret Thatcher. In claiming responsibility, the IRA warned that, “Today we were unlucky, but remember that we only have to be lucky once—you will have to be lucky always.” The quote serves as a reminder that counterterrorism success is fleeting and that metrics never fully capture the contours of the battle against terrorism.
This principle motivates counterterrorism professionals all over the world, and it applies at Facebook as well: one failure is too many. That’s why we work every day to get better.
Written by Monika Bickert, Vice President of Global Policy Management, and Brian Fishman, Global Head of Counterterrorism Policy, Facebook