February 2022 Update

The TCAP newsletter provides updates on the month’s development of the platform, key events such as our monthly office hours, and the TCAP statistics!

Our highlights:

  • This month (01.02-28.02) the TCAP identified 1,800 URLs containing terrorist content and sent 936 alerts to 30 tech companies. 73% of this content is now offline.

  • We published a blogpost explaining the TCAP Crisis Protocol Policy on our website. The Policy guides our actions when an emergency incident occurs, by ensuring we have provisions in place to alert the appropriate authorities and mitigate the threat posed by online violent extremist content.

  • We announced the TCAP’s hashing and hash-sharing capability. In a blogpost, we explained that the TCAP is now hashing all URLs that contain terrorist content and will be sharing this with the GIFCT’s hash-sharing consortium to further achieve our mission to support smaller tech companies with removing terrorist content.


Development Updates

Hashing Capability

We announced that we are now hashing all URLs containing terrorist content that are submitted to the TCAP. These unique hashes will be shared with the GIFCT’s hash-sharing consortium, which forms a shared industry database of “perceptual hashes” of terrorist images and videos produced by entities or groups designated by the United Nations. This will further our mission to support smaller tech companies with removing terrorist content. You can find the announcement here.

A hash is a unique digital fingerprint created for each piece of terrorist content. The TCAP will hash verified terrorist content to help the tech sector, particularly smaller tech companies, with automated decision-making when moderating terrorist content. Additionally, hashing and archiving all TCAP content means it can be used for research, further contributing to academic study and policy-making decisions.
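To illustrate the "digital fingerprint" idea: the newsletter does not specify which hashing algorithm the TCAP uses, and the GIFCT database is built on perceptual hashes (which also match visually similar media). The sketch below instead uses SHA-256, a standard cryptographic hash, purely to show how any piece of content maps to a fixed-length fingerprint that can be compared without sharing the content itself.

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Return a fixed-length hex fingerprint for a piece of content.

    Illustrative only: SHA-256 matches exact copies byte-for-byte,
    whereas perceptual hashes tolerate small edits to an image/video.
    """
    return hashlib.sha256(content).hexdigest()

video = b"example media bytes"

# The same content always produces the same fingerprint...
assert fingerprint(video) == fingerprint(b"example media bytes")

# ...while even a one-byte change yields a completely different one.
assert fingerprint(video) != fingerprint(b"example media bytes!")

# The fingerprint has a fixed length (64 hex characters for SHA-256),
# regardless of how large the original file is.
print(len(fingerprint(video)))
```

Because only the fingerprint is shared, consortium members can recognise known content without ever exchanging or viewing the underlying files.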

Hashing visual.png

The image above visualises the conversion of a file from its original format to a numerical hash value using a hashing function. The output of the hashing function can then be shared with tech platforms to pre-emptively ban verified content without viewing user data. Hashing newly created content posted on smaller tech platforms provides a bespoke service that contributes to automated content moderation and ensures the same content cannot be reposted on other platforms.
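The matching step described above can be sketched as a simple lookup: a platform hashes each new upload and checks the result against the shared database before the content goes live. The database contents and function names below are hypothetical, and a real deployment would use perceptual hashing with similarity thresholds rather than the exact-match set shown here.

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Hex fingerprint of a piece of content (illustrative; SHA-256)."""
    return hashlib.sha256(content).hexdigest()

# Hypothetical shared database of fingerprints of verified terrorist content.
# In practice this would be the consortium's hash list, not raw files.
shared_hash_database = {
    fingerprint(b"previously verified file"),
}

def should_block(upload: bytes) -> bool:
    """Check a new upload against the shared database.

    Only the fingerprint is compared, so the moderating platform never
    needs to inspect the user's data directly.
    """
    return fingerprint(upload) in shared_hash_database

print(should_block(b"previously verified file"))  # an exact re-upload is caught
print(should_block(b"an ordinary holiday photo"))  # unknown content passes
```

This is why the same verified item cannot simply be reposted elsewhere: any consortium member holding the fingerprint can recognise it on upload, even though the file itself was never shared between platforms.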

At Tech Against Terrorism we recognise that shared hashing databases concentrate the authority to determine what content is permissible across the entire internet. Transparency and human verification are therefore vital counterbalances to the danger of these “content cartels”, or arrangements between tech platforms to remove content without adequate oversight. More on transparency-by-design later in this newsletter.


Policy Updates

Crisis Protocol

We are delighted to announce the publication of our Crisis Protocol which outlines the steps Tech Against Terrorism takes in a crisis event. The policy consists of three sections:

  • Pre-incident: what we do when we encounter a potential threat-to-life
  • During incident: what we do in an active crisis event
  • Post-incident: how we respond after a crisis event has occurred

The Crisis Protocol guides our actions when an emergency incident occurs, by ensuring we have provisions in place to alert the appropriate authorities and mitigate the threat posed by online violent extremist content. In the event of a potential threat-to-life, the Crisis Protocol outlines the steps TCAP staff take to evaluate the credibility and imminence of the threat, and what proportionate actions should be taken. You can see our full threat-to-life protocol below:

Threat to life protocol


The TCAP Statistics

The TCAP statistics detail the number of automated terrorist content alerts we have sent. To reiterate, the TCAP alerts tech companies when we find terrorist content on their platforms. By terrorist content, we mean official content produced by one of the terrorist groups included in the first version of the TCAP, based on our Inclusion Policy. In practice, the TCAP identifies terrorist content, after which we verify whether that content falls within our scope before alerting tech platforms.

For weekly TCAP statistics, please see our Twitter, @TCAPAlerts.

The following graph shows our metrics for the month of February, as well as our total statistics.

Feb stats.png


Automated Terrorist Content Alerts

The TCAP identifies, collects, verifies, archives, processes, and alerts terrorist content.


What’s Next:

  • This month, we are launching our Transparency Report, which will detail our metrics, policies, and development work during the first 12 months of the TCAP! With this report, we ensure the TCAP is built with transparency-by-design and abides by our own transparency guidelines. Stay tuned for the release!

  • Alongside the Transparency Report, we will be publishing a blogpost analysing the statistics for the first year of the TCAP. This will use TCAP data to compare Islamist terrorist and far-right terrorist use of the internet, explaining discrepancies in the volume of material collected by the TCAP and in content removal rates.

  • We are expanding our Inclusion Policy in April, announcing which terrorist groups we will start including in the TCAP based on the legal designation of terrorist entities by democratic governments and supranational institutions. You can view our current Inclusion Policy here.

  • We have begun developing a promotional video about the TCAP! The video will explain in depth the role the TCAP plays in disrupting online terrorist content, and how this supports tech platforms.

  • We have officially relaunched the Tech Against Terrorism podcast this month. You can listen to the first three episodes now on our dedicated website or wherever you get your podcasts! Stay tuned for the rest of season 2, including a special episode on the TCAP!