
AI can now moderate social media and protect brands from trolls and hate speech

Moderating social media for big brands is hard. The volume of content posted every day is difficult to keep up with, so it's only natural that some comments slip through the cracks.

And that can be bad. For all the good social media has done for us, letting us keep in touch with friends around the world or share our experiences with millions of people, there are always a few people who like to ruin it for the rest of us. Germany has already started threatening social media companies with fines for failing to deal with the mounting problem.

I don't know what it is about the internet that seems to compel people to spew absolute vitriol at others. It could be the feeling of safety that comes from hiding behind a screen; it could just as easily be boredom. The fact of the matter is that it happens, and increasingly, those tasked with cleaning up the mess are online moderators.

But they can't be at the computer all the time. What Smart Moderation realised is that a 24/7 human social media moderator is costly and time-consuming, and honestly, not many brands can afford such a luxury.

Çiler Ay Tek
Co-founder & CEO at Smart Moderation

Founded in 2014 by Çiler Ay Tek and Mete Aktaş, Smart Moderation uses artificial intelligence to protect brands' social media pages. Facebook, Instagram, and YouTube are its core networks.

There are already plenty of profanity filters in use on message boards all over the internet, but Smart Moderation's AI is more than a simple profanity filter. Some profanity filters are so overzealous that they end up censoring the 'ass' in 'assessment', which is hard on the eyes and makes it difficult to have a proper conversation.

Smart Moderation analyses text the same way a human would. For example, it can tell the difference between 'F*** you!' and 'That's f***ing awesome!' A simple profanity filter would censor both. Smart Moderation's AI would remove or hide only the first, while the second would appear exactly as intended.
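To see why the naive approach censors the 'ass' in 'assessment', it helps to compare substring matching with word-boundary matching. This is a minimal sketch of that contrast, not Smart Moderation's actual (proprietary) technique; the blocklist and function names are hypothetical.

```python
import re

# Hypothetical blocklist for illustration only.
BLOCKLIST = {"ass"}

def naive_filter(text: str) -> str:
    """Substring matching: censors any occurrence of a blocked word,
    even inside innocent words like 'assessment'."""
    for word in BLOCKLIST:
        text = text.replace(word, "*" * len(word))
    return text

def word_boundary_filter(text: str) -> str:
    """Whole-word matching: censors the blocked word only when it
    stands alone, leaving words that merely contain it untouched."""
    pattern = re.compile(
        r"\b(" + "|".join(map(re.escape, BLOCKLIST)) + r")\b",
        re.IGNORECASE,
    )
    return pattern.sub(lambda m: "*" * len(m.group(0)), text)
```

Even the word-boundary version is still a dumb filter: it has no sense of context, which is where a trained model like Smart Moderation's comes in.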

Its main objective is to help remove hate speech, one of the biggest issues facing social media users worldwide today. It uses Facebook's community standards as a baseline, but this can be customised on a per-client basis. Should they want to, clients can even teach the AI themselves by marking comments as 'Inappropriate' or 'OK'.

As time goes on, the AI learns more about the habits of your online community and becomes more effective. Working 24/7 and in real time, it will pick up any comments that violate the terms set by the user.
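Learning from comments a client marks as 'Inappropriate' or 'OK' is, at its core, supervised text classification. The following is a toy sketch of that idea using a hand-rolled naive Bayes classifier with add-one smoothing; the class and method names are hypothetical, and Smart Moderation's real model is almost certainly far more sophisticated.

```python
import math
from collections import defaultdict

class CommentModerator:
    """Toy classifier that learns from comments labelled
    'Inappropriate' or 'OK'. Illustration only."""

    LABELS = ("OK", "Inappropriate")

    def __init__(self):
        self.word_counts = {label: defaultdict(int) for label in self.LABELS}
        self.label_counts = {label: 0 for label in self.LABELS}

    def teach(self, comment: str, label: str) -> None:
        """Record one labelled example, mimicking a client marking a comment."""
        self.label_counts[label] += 1
        for word in comment.lower().split():
            self.word_counts[label][word] += 1

    def classify(self, comment: str) -> str:
        """Pick the label with the highest smoothed log-probability."""
        total = sum(self.label_counts.values())
        vocab = len({w for counts in self.word_counts.values() for w in counts})
        scores = {}
        for label in self.LABELS:
            # Prior, with add-one smoothing.
            score = math.log((self.label_counts[label] + 1) / (total + 2))
            n = sum(self.word_counts[label].values())
            for word in comment.lower().split():
                score += math.log(
                    (self.word_counts[label][word] + 1) / (n + vocab + 1)
                )
            scores[label] = score
        return max(scores, key=scores.get)
```

The more labelled examples the `teach` step receives, the better `classify` reflects that particular community's norms, which matches the article's point that the system grows more effective over time.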

Small companies with up to 5,000 followers can use the service for free. Brands with larger audiences can try it free before signing up for the premium plan.

With Smart Moderation, the team hopes to make the internet a safer place for users. You shouldn't have to worry about being abused for posting innocent comments on Facebook, and you don't deserve to be abused for having a differing opinion.

What you do deserve, though, is to be able to browse your favourite pages, and interact with fellow users in peace.

Nicolas Waddell

Nicolas has spent time in Asia, Canada and Colombia watching people and wondering just what the heck they'd do without their phones; but only because he wonders the same of himself.
