
Technology vs Terrorism


Last week's terror attack in Germany was streamed live on Twitch, an online video-gaming platform owned by Amazon. The gunman himself filmed the 35-minute live stream, in which he made anti-Semitic comments to the camera before firing at the door of a synagogue and shooting dead two passers-by.

Twitch, which has an average of 1.3 million users on its platform at any given moment, has confirmed that only five people viewed the live broadcast. However, by the time the stream was flagged and taken down 30 minutes later, 2,200 people had viewed the content. Though Twitch has said the video did not surface in its recommendation feed, it appears it was still shared countless times by users to other platforms, including Twitter.

Twitch quickly shared a hash of the video with GIFCT, the Global Internet Forum to Counter Terrorism. A hash is essentially a digital fingerprint of the video, which allowed other GIFCT member organisations, such as Facebook and YouTube, to detect whether the same content had been uploaded to their platforms. Although it is almost impossible to remove the video from the internet entirely, collaboration of this kind facilitates its removal from all mainstream platforms.
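
To illustrate the mechanics, here is a minimal, hypothetical sketch of hash-based matching. In practice, shared fingerprints of this kind are perceptual hashes designed to survive re-encoding and cropping; the sketch below uses a plain cryptographic hash, so it would only catch exact copies, and every file name and hash value in it is a placeholder rather than anything drawn from GIFCT's actual database.

```python
import hashlib

# Hypothetical shared list of known terrorist-content fingerprints,
# standing in for the hashes exchanged between member platforms.
KNOWN_BAD_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}


def fingerprint(path: str) -> str:
    """Compute a SHA-256 digest of a file, read in chunks to handle large videos."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def is_known_terrorist_content(path: str) -> bool:
    """Return True if the upload's fingerprint matches a shared hash."""
    return fingerprint(path) in KNOWN_BAD_HASHES


# Example: screen an (illustrative) upload before it is published.
if is_known_terrorist_content("incoming_upload.mp4"):
    print("Upload blocked and reported.")
```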

Unfortunately, this is not the first time that technology has been exploited for terrorist causes.

Earlier this year, a gunman live-streamed his deadly attacks on mosques in Christchurch, New Zealand, on Facebook. It took Facebook 30 minutes to remove the video, and even then only after the content had been flagged by a user. By that time, hundreds of people had watched the attacks unfold live and the video had reached other platforms such as Twitter and YouTube.

Facebook said it removed around 1.5 million versions of the video. However, months after the attack, some versions were still available, leading to heavy criticism of Facebook for failing to deal with the issue and to change its approach to managing violent content. The difficulty for social media companies is that some of the features that have made their technologies so successful (global reach, the sharing of raw content, instantaneity and accessibility) also become their greatest vulnerabilities when exploited by terrorists and others with malicious intent. Indeed, the German terrorist's use of English during his livestream points to his desire for global reach.

In the wake of these attacks, technology companies are fast realising that 'cleaning up' after the event is not enough. Yes, it is important to remove terrorist content quickly and to suspend accounts posting terrorist material, but tech companies understand they need to look at preventative measures too. This is a tall order, however; tech companies are not the police, nor do they have the same remit or powers. The German attacker, for example, had set up his Twitch account two months before the attack and had attempted to live-stream only once before, none of which, understandably, raised suspicion. In addition, the preventative measures currently being discussed are by no means foolproof.

Some tech companies have taken to hiring greater numbers of content reviewers to try to catch malicious content before or as it is uploaded. Twitch, for example, has human and AI moderators working around the clock. This approach is imperfect for a number of reasons. First, effectively moderating all content and users requires vast resources: approximately 300 million photos are uploaded to Facebook every day, and Facebook employs around 30,000 people to work on safety and security. Start-up and scale-up social media companies do not have these resources, leaving them exposed to the actions of malicious users.

Secondly, part of the appeal of social media platforms is that the content is raw and not heavily moderated, and the platforms allow users a high degree of freedom of expression and creativity. If users start to feel that a platform is watching them, restricting them or filtering the content they see, they will quickly migrate to less restrictive alternatives. As social media platforms rely heavily on user traffic for their success, a changed approach to content regulation could be the making or breaking of them.

Another preventative approach being adopted by some of the larger social media companies is collaboration. Tech companies such as Facebook, Microsoft, Twitter and YouTube collaborate under initiatives such as Tech Against Terrorism and the GIFCT.

The partners use methods such as knowledge sharing, AI solutions and counter-narrative messaging to prevent terrorist exploitation. Again, the difficulty is striking the right balance between protecting against terrorist exploitation and upholding fundamental freedoms. These groups would not want to emulate the establishment, something likely to drive users away. Moreover, collaborations of this kind really only benefit the 'big players' in the tech world, leaving vulnerable start-up and scale-up social media companies to fend for themselves.

In the aftermath of the attacks in New Zealand and Germany, delayed video uploads were suggested as a way to limit the livestreaming of terrorist actions.

It was argued that a short delay would allow systems and content reviewers to catch violent content before it becomes available to the public. Although well-intentioned, this suggestion ignores the fact that users come to platforms such as Twitch specifically because of the livestreaming possibilities.

A delay in uploads would defeat the purpose of Twitch, a platform where users livestream video games to other users as they play. By delaying uploads, the platform would be 'killing the goose that laid the golden egg', and users would be quick to move on to other, less restrictive platforms.

In addition, the suggestion underestimates the enormity of the content moderation task. Facebook alone has 2.4 billion monthly active users. To put this into context, if those users made up one country, it would be the most populous country in the world, not far short of the combined populations of China and India. This is a huge number of active profiles to monitor, and the complexity does not stop at scale: those users are spread across continents, use thousands of different languages, and sit under hundreds of different governing systems and a multitude of different laws. It would therefore be naïve to think that delayed uploads could deliver complete content control.

This brings us to our final preventative approach which tech companies, due to the nature of their work, are keen to tap into: artificial intelligence. Although seemingly the natural solution to terrorist exploitation of tech, AI is not currently as developed as some may hope.

Facebook has recently looked to collaborate with police forces to collect body-camera footage captured during firearms training exercises. The intention is to use this footage to train algorithms to identify videos of shootings so they can be detected and removed rapidly.

However, the recent attack involving Twitch perfectly illustrates the difficulty of relying on AI to flag real-life violence: Twitch is predominantly used by video gamers to livestream themselves playing games, many of which simulate gun violence. This makes the training task all the more complex: the AI must not only detect a shooting video, it must also distinguish a simulated, in-game shooting from a real one. AI is not advanced enough to do this reliably… yet.
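
For a sense of what that training task looks like in code, here is a minimal sketch, not Facebook's actual system, of fine-tuning an off-the-shelf image model to label video frames as either real-world or simulated (video-game) violence. It assumes the PyTorch and torchvision libraries, and the label convention, dataset and random stand-in frames are all hypothetical placeholders.

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from a generic pre-trained backbone and replace the final layer
# with a two-class head: 0 = simulated/game footage, 1 = real-world footage.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)


def training_step(frames: torch.Tensor, labels: torch.Tensor) -> float:
    """One gradient step on a batch of video frames (shape: [batch, 3, 224, 224])."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(frames), labels)
    loss.backward()
    optimizer.step()
    return loss.item()


# Illustrative call with random tensors standing in for labelled frames
# extracted from (hypothetical) body-camera and gameplay recordings.
dummy_frames = torch.randn(8, 3, 224, 224)
dummy_labels = torch.randint(0, 2, (8,))
print(training_step(dummy_frames, dummy_labels))
```

Even in this toy form, the hard part is obvious: the model only learns the distinction if the labelled examples of real and simulated violence are plentiful and varied, which is precisely the data the body-camera collaboration is meant to provide.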

The problem with the potential solutions discussed is that they would fundamentally alter the very nature of social media platforms, and the appeal of these platforms to the communities they serve.

Greater moderation, censorship of content, the sharing of social media technologies with intelligence agencies, intrusions of privacy, establishment-based or institutionalised platforms, or simply a reduced openness would all push users away from these platforms towards less restrictive ones. Users have shown that they will switch platforms quickly when a platform goes against the very nature of the community they want to be part of.

There is a delicate balance to be struck between restricting terrorist propaganda on social media platforms and maintaining the openness of those platforms. In essence, this dilemma in the virtual world should not surprise us, since it reflects the dilemmas that arise in the real world when countries or communities consider responses to terrorism. For example, increased stop and search, media censorship and heightened surveillance of certain communities all change the very nature of those communities.

The difference with this modern, virtual-world dilemma is the size and complexity of the community, and the fact that responsibility for policing it appears to be passing largely to private companies rather than state actors.

Amanda is one of forburyTECH’s legal experts, specialising in employment law.