CRISIS PROTOCOL POLICY
Data collection for the Terrorist Content Analytics Platform (TCAP) requires wide-ranging open-source intelligence (OSINT) gathering across a variety of tech platforms. This data collection targets the areas where terrorists and violent extremists spread propaganda, communicate, and recruit. During our investigations, we may find data indicating an ongoing or future attack. We have therefore developed a Crisis Protocol Policy covering three key phases of emergency incident management: pre-incident, during-incident, and post-incident. The policy aims to be flexible, so that we can handle critical incidents as effectively as possible. It guides our actions when an emergency incident occurs by ensuring we have provisions in place to alert the appropriate authorities and mitigate the threat posed by online violent extremist content.
Our Crisis Protocol Policy is based on similar policies created by the UK Police and Home Office. We aim to keep our Crisis Protocol Policy updated based on the development of the TCAP and aim to enhance the function of the TCAP as part of our crisis response workflow.
In the event of a potential threat-to-life, the Crisis Protocol Policy outlines the steps TCAP staff take to evaluate the credibility and imminence of the threat-to-life and to determine what proportionate action should be taken.
A threat-to-life can be considered as:
- a real and immediate threat of loss of life
- a threat to cause serious harm
- a threat of injury to another

A threat-to-life also includes:
- serious sexual assault
Our assessment is based on considering the intent and capability of a potential attacker and collating intelligence to share with the appropriate law enforcement agencies. Each threat-to-life is graded as low, medium, or high, and is monitored for status change. We consider our ethical responsibility to report a threat-to-life as overriding the entities listed within the TCAP Inclusion Policy: while the Inclusion Policy may be used to support our report of a threat-to-life, association with a listed entity is not necessary for us to report a credible threat-to-life to the authorities.
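To illustrate how such an assessment might be recorded in practice, the sketch below models the intent-and-capability grading described above. This is a minimal, hypothetical example: the field names, scoring scale, and thresholds are assumptions for illustration and are not part of the published policy.

```python
# Hypothetical sketch of a threat-to-life assessment record.
# Intent and capability are each scored 0 (none) to 2 (clear);
# the combined score maps to the low/medium/high grading, and
# a history log supports monitoring for status change.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ThreatAssessment:
    description: str
    intent: int                      # assessed intent, 0-2 (assumed scale)
    capability: int                  # assessed capability, 0-2 (assumed scale)
    history: list = field(default_factory=list)

    @property
    def level(self) -> str:
        score = self.intent + self.capability
        if score >= 3:
            return "high"
        if score == 2:
            return "medium"
        return "low"

    def update(self, intent: int, capability: int) -> None:
        # Log the previous grading before re-scoring, so status
        # changes over time can be reviewed.
        self.history.append((datetime.now(timezone.utc), self.level))
        self.intent, self.capability = intent, capability


assessment = ThreatAssessment("online post naming a target", intent=2, capability=0)
print(assessment.level)  # "medium"
```

The two-axis model mirrors the policy's framing: clear intent with no demonstrated capability still warrants a medium grading and continued monitoring rather than dismissal.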
In the event of a potential, credible threat-to-life, we will inform the UK and local authorities, any relevant intelligence agencies, and continue to monitor the event. We will also ensure we keep an accurate archive of all relevant data, should it be needed.
In the event of a threat-to-life which cannot be verified as credible or immediate, such as the doxing of a public figure, we will inform the relevant authorities and intelligence agencies. We will also continue to monitor the situation and escalate when necessary.
You can see our full threat-to-life protocol below, showing the workflow progression and key decision-making involved in our assessments.
As seen with the Christchurch attack in 2019, there is an increasing threat of terrorist and violent extremist attackers utilising tech platforms to livestream and document attacks. In the event of an attack which is being livestreamed, the priority of the TCAP is to limit the spread of the content by flagging it to content moderators across a wide range of platforms. While large tech platforms likely have the capability to immediately flag and remove duplicate versions of a livestream, it is highly likely that small tech platforms do not have this capability.
As with the pre-incident protocol, the potential threat-to-life involved in an ongoing crisis incident overrides the TCAP Inclusion Policy, given our responsibility to uphold the safety and security of the public. In the event of a livestreamed attack, we will provide the UK police and any other relevant authorities with all available data.
Currently, our alerting system sends alerts at 18:00 GMT daily; these alerts collate all URLs from the past 24 hours into one email sent to tech platforms. In the future, we will develop the TCAP to function as an immediate alerting system for all tech platforms, flagging content from an attacker whether it is an original livestream or a duplicate version. This will allow TCAP staff to override the regular alert function and send immediate alerts to tech companies, with the ability to add information about the event and the content.
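The two alerting modes described above can be sketched as follows. This is an illustrative outline only, assuming hypothetical function and field names; it is not the TCAP's actual implementation.

```python
# Hypothetical sketch of the two alerting modes: a daily digest
# collating URLs logged in the 24 hours before the 18:00 GMT send,
# and a crisis override that sends a single URL immediately with
# added context for the receiving platform.
from datetime import datetime, timedelta, timezone


def build_daily_digest(logged_urls, now=None):
    """Collect URL records logged within the 24 hours before send time."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(hours=24)
    return [u for u in logged_urls if u["logged_at"] >= cutoff]


def build_immediate_alert(url, note):
    """Crisis override: one URL, dispatched at once, with context attached."""
    return {"urls": [url], "note": note, "immediate": True}
```

In practice the immediate path would bypass the digest queue entirely, which is the key design point: during a livestreamed attack, waiting for the scheduled 18:00 GMT send would defeat the purpose of the alert.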
As a second priority, we also archive livestreams and footage of ongoing incidents. This archive serves multiple purposes. It may be used to support prosecutions of terrorist and violent extremist actors by ensuring evidence is reliable and from an original source. It may also be used to support future expansion of the TCAP Inclusion Policy if the attacker is designated as a terrorist entity by a democratic nation state or supranational organisation. Finally, the archive may be used to train artificial intelligence algorithms that assist in automated content moderation: by training algorithms to identify potentially harmful content, material can be flagged to human moderators more quickly for further review.
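A minimal sketch of what an archive record might capture is shown below. The field names are hypothetical; a real evidence-handling system would also need chain-of-custody metadata. The content hash illustrates how an archived copy can later be shown to match the original source, which supports the prosecution use case described above.

```python
# Hypothetical archive record for livestream or footage evidence.
# A SHA-256 hash of the captured content lets the copy be verified
# as unaltered; the purpose tuple records why the item was retained.
import hashlib
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass(frozen=True)
class ArchiveRecord:
    source_url: str
    captured_at: datetime
    sha256: str          # content hash, so integrity can be re-verified
    purposes: tuple      # e.g. ("prosecution", "inclusion-policy", "ml-training")


def archive(source_url: str, content: bytes, purposes: tuple) -> ArchiveRecord:
    return ArchiveRecord(
        source_url=source_url,
        captured_at=datetime.now(timezone.utc),
        sha256=hashlib.sha256(content).hexdigest(),
        purposes=purposes,
    )
```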
As part of our regular TCAP data collection, we alert tech platforms to content depicting attacks claimed by terrorist entities within the TCAP Inclusion Policy. By monitoring the designation lists of democratic nation states and supranational organisations, we can expand the Inclusion Policy to include more entities, giving us a greater ability to flag terrorist content.
Our post-incident response to an emergency incident may also involve securely transferring intelligence data (such as livestream footage or other open-source data) to the relevant authorities.
In the future, we aim to incorporate the TCAP into existing incident response workflows, ensuring that theoretical responses are backed by the technological ability to efficiently flag content for moderation.