" />
Social Media

Murky methods: social media’s initial anti-terrorism efforts

Murky methods: social media’s initial anti-terrorism efforts

As governmental pressure mounts on social media companies to rid their platforms of terrorist content, initial AI methods are producing results, though quite how they work remains a mystery.

YouTube recently pointed proudly to an 83% success rate in their new approach to removing “extremist” content, a method developed by training an AI engine which flags content to be reviewed by a team of human monitors.
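YouTube has not published the details of this system, but the description, a model that scores content and passes anything suspicious to human monitors, suggests a familiar triage pattern. Below is a minimal, purely hypothetical sketch of that pattern in Python; the threshold, the scores, and the review queue are all invented for illustration and are not YouTube's actual method.

```python
# Hypothetical illustration only: route model-flagged videos to a human review queue.
# The scores and the 0.8 threshold are invented; YouTube's real system is undisclosed.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Video:
    video_id: str
    score: float  # assumed classifier estimate that the content is "extremist"


@dataclass
class ReviewQueue:
    pending: List[Video] = field(default_factory=list)

    def add(self, video: Video) -> None:
        self.pending.append(video)


FLAG_THRESHOLD = 0.8  # assumed cut-off above which a human must review


def triage(videos: List[Video], queue: ReviewQueue) -> None:
    """Send any video scoring above the threshold to human reviewers."""
    for video in videos:
        if video.score >= FLAG_THRESHOLD:
            queue.add(video)


queue = ReviewQueue()
triage([Video("abc123", 0.91), Video("def456", 0.42)], queue)
print([v.video_id for v in queue.pending])  # -> ['abc123']
```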

The results sound strong, but between the opacity of the AI approach and YouTube’s desire to guard proprietary knowledge, the working definition of “extremist” stays locked inside the company.

The statistics are timely: only recently the EU threatened tech companies with legal repercussions should their platforms fail to remove hate speech quickly.

YouTube has been part of a broad coalition of tech companies which together constitute the Tech Against Terrorism project. The collective endeavour is aimed at a self-regulation approach to terrorist content: in essence, the kids are being left home alone for the first time, no babysitter, and are intent on proving they’re responsible enough for the arrangement to become the norm.

The alternative that social media platforms are trying to avoid is governments stepping in with heavy-handed or overly-generalised legislation which could unduly hinder innovation in more legitimate directions.

However, self-regulation also means the public doesn’t get to find out the rules by which these moderation systems operate.

The companies involved have all released their own success stories recently. Twitter offered up some big numbers: it has removed nearly a million accounts since its efforts began in the summer of 2015, and in the first half of 2017 it saw a 20% drop in the number of accounts removed compared with previous six-month periods. Twitter attributed the drop in removals to the efficacy of its approach, pointing also to an 80% drop in the number of accounts reported by governments.

That being said, Twitter also boasted that 75% of those accounts were removed before a single post had been made. High efficacy is undoubtedly preferable to low efficacy, but with the details of the methods still unclear it is tricky to ascertain how many of the blocked accounts or posts were in fact legitimate expressions of free speech.

Of the big tech social media platforms Facebook has been the most forthcoming with the details behind its methods. The social media giant has developed “text-based signals” from previously flagged content to train an AI engine in image matching and language understanding. Facebook is also in the process of hiring 3,000 new staff to monitor the posts flagged by the AI engine.
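Facebook’s description, training on previously flagged content to learn text-based signals, maps onto a standard supervised text-classification setup. The sketch below illustrates that general idea using scikit-learn’s TF-IDF features and logistic regression; the training posts, labels, and threshold are invented, the image-matching side is not covered, and nothing here should be read as Facebook’s actual pipeline.

```python
# Generic illustration: train a text classifier on previously flagged posts,
# then flag new posts for human review. Not Facebook's actual method; the
# example data and the 0.5 threshold are invented for demonstration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Previously reviewed posts: 1 = flagged by human moderators, 0 = benign.
training_posts = [
    "join us and take up arms against the unbelievers",
    "weekend recipe: slow-cooked vegetable stew",
    "martyrdom propaganda video, share widely",
    "our local football club won the league last night",
]
labels = [1, 0, 1, 0]

# TF-IDF turns each post into a weighted bag-of-words vector ("text-based signals").
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(training_posts, labels)


def needs_human_review(post: str, threshold: float = 0.5) -> bool:
    """Flag a post for the human review team if the model's score exceeds the threshold."""
    prob_flagged = model.predict_proba([post])[0][1]
    return prob_flagged >= threshold


for post in ["take up arms against the unbelievers",
             "vegetable stew recipe for the weekend"]:
    # Exact scores depend on the toy data; the first post should score higher.
    print(post, "->", needs_human_review(post))
```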

So, glass half-full? Perhaps, though let’s agree it isn’t yet full. It is at least encouraging that the promise by social media companies to deal with terrorist content on their platforms isn’t an empty one. And governmental legislation on the matter is, for now, the road not taken, so it’s difficult to say whether the inevitable clumsiness of any legislation would be worth the transparency that came with it.

The methods behind social media platforms’ anti-terrorism efforts may unravel if reports start emerging that legitimate content is being removed, at which point both the glass and the promise might start looking half-empty.


Ben Allen is a traveler, a writer and a Brit. He worked in the London startup world for a while but really prefers commenting on it to working in it. He has huge faith in the tech industry and enjoys talking and writing about the social issues inherent in its development.
