
As big tech and govts differ over hate speech, should AI neutralize it?

July 11, 2019


With big tech platforms and governments each setting their own policies on hate speech, should society task AI with neutralizing it in the digital realm?

One of the big questions that arises in the hate speech debate is, “Where does the sanctity of free speech come into play?”

The answer differs depending on which country you live in, which platforms you use, and your own personal views on the matter.

These variables make the whole question of hate speech regulation controversial in and of itself, because any answer ultimately touches on censorship.

Social media platforms are already using AI in their own way for monitoring hate speech, but the questions still remain:

  1. Should hate speech be regulated in the first place?
  2. Can we even agree on a single definition of what hate speech actually is?
  3. If so, should social media platforms continue to rely on AI to regulate hate speech?

The Sociable asked three experts about how AI might be able to make sense of all the policies, laws, and various definitions of hate speech, and whether or not AI is the best solution.

When Social Media Platforms Become Public Forums

Social media platforms have long ceased to be just places where you make friends. They now resemble public forums, like town halls, where citizens can air their views for all concerned to hear, and where others can support or oppose those views as part of a civil society.

In the process, however, these platforms have devolved into political tools that can wield influence over a populace. That is immense power in the hands of a private entity, because a platform can decide whom to silence and whom to amplify, depending on whether or not it agrees with their ideology.

The line between big tech companies and politics has become thoroughly blurred, and the platforms themselves are not free of bias: Facebook employees “routinely suppressed conservative news,” according to a 2016 Gizmodo report.

More recently, Facebook agreed to hand over the identification data of French users suspected of hate speech on its platform to French courts. The French government can, naturally, be expected to influence how users express themselves on the platform.

Is AI a more neutral solution to the hate speech debate, or does AI just serve those that train it?

Why AI Alone Is Bad For Hate Speech Regulation

AI is getting pretty smart. Thanks to fast-paced research in natural language processing, today's AI can read, write, and interpret text at close to human level on some tasks.

Recently, OpenAI unveiled a language model that learnt to write pages of prose when primed with just one leading sentence, adapting to writing styles, inventing fictitious characters, and even translating between languages.
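The model described is widely understood to be OpenAI's GPT-2. For readers curious what “priming with a leading sentence” looks like in practice, here is a minimal sketch using the publicly released GPT-2 weights through the Hugging Face transformers library; the library, model name, and prompt are illustrative assumptions, not OpenAI's original research setup.

```python
# A minimal sketch of priming a GPT-2-style language model with a single
# leading sentence and letting it continue the prose. Uses the publicly
# released "gpt2" weights via the Hugging Face `transformers` library purely
# for illustration; this is not OpenAI's original research setup.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = ("In a shocking finding, scientists discovered a herd of unicorns "
          "living in a remote, previously unexplored valley.")

# Sample a continuation of up to ~100 tokens from the single leading sentence.
result = generator(prompt, max_length=100, num_return_sequences=1, do_sample=True)
print(result[0]["generated_text"])
```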

However, machines still fail to comprehend some of the most basic human interactions.

Ganes Kesari, Co-Founder and Head of Analytics at Gramener, tells The Sociable that, in spite of such progress, it would be dangerous to conclude that AI is ready to tackle difficult and pressing problems like hate speech.

Ganes Kesari

“AI can understand language and interpret key messages. However, it is far away from understanding sarcasm or detecting bias. The problem gets tougher by a huge magnitude when you throw in shifting contexts,” he says.

Kesari says that we are holding AI to a standard we humans haven't been able to meet ourselves. After all, today's AI models largely mimic humans the way a toddler does, and we don't expect wisdom from toddlers. Nor do we expect two humans to hold the same viewpoint on a given piece of content, or to ‘dispassionately’ decide what is fair.

Thus, while AI is smart, it doesn’t seem to be clever. Will it become clever in the future? Kesari says we can design it to be.

“In the near future, AI-assisted humans are more practical. In the far future, we must design for human-assisted AI, including those that detect bias or hate speech,” he says.

Why AI Is Good For Hate Speech Regulation

Ray Walsh

The amount of data that needs sifting on social media platforms is enormous, and this makes AI a tempting solution.

Ray Walsh, digital privacy expert at ProPrivacy.com, told The Sociable that, logistically speaking, AI is the only way to monitor content at that scale, citing the recent collaboration between Facebook and the French government.

“AI can quickly and efficiently monitor huge amounts of data for potential infringements allowing human employees to make checks and take down content, or in the case of the newly-formed agreement with the French government, to allow the authorities to follow up on specific cases involving criminal hate speech,” said Walsh.

With millions of people communicating every second, it is near impossible for any one person, or even a thousand-plus strong team of moderators, to monitor and respond to every report of hate speech.
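As a rough illustration of the workflow Walsh describes, the sketch below shows an AI model scoring every post and routing only uncertain or clearly violating cases to human moderators; the thresholds, function names, and scoring rule are hypothetical, not any platform's actual system.

```python
# Hypothetical sketch of an AI-assisted moderation workflow: a model scores
# every post, clear-cut cases are handled automatically, and borderline cases
# are queued for human review. The thresholds, names, and scoring function are
# illustrative assumptions, not any platform's real configuration.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Decision:
    post: str
    score: float   # model's estimated probability that the post is hate speech
    action: str    # "allow", "human_review", or "remove"

def moderate(posts: List[str], score_fn: Callable[[str], float],
             review_threshold: float = 0.4, remove_threshold: float = 0.95) -> List[Decision]:
    decisions = []
    for post in posts:
        score = score_fn(post)
        if score >= remove_threshold:
            action = "remove"          # high confidence: take down automatically
        elif score >= review_threshold:
            action = "human_review"    # uncertain: escalate to a human moderator
        else:
            action = "allow"           # low risk: leave the post up
        decisions.append(Decision(post, score, action))
    return decisions

if __name__ == "__main__":
    # Stand-in scoring function; a real system would call a trained classifier.
    fake_score = lambda text: 0.99 if "<slur>" in text else 0.10
    for d in moderate(["have a nice day", "post containing <slur>"], fake_score):
        print(d)
```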

Read More: Dissecting Facebook’s ‘coordinated inauthentic behavior’ removal policy

Nenad Cuk

That is why Nenad Cuk, CEO of CroatiaTech, a future technology development company that works on AI, machine learning, and software development, believes AI is the best hope.

Talking to The Sociable, he said, “AI can handle thousands of inputs, and once trained correctly, it would be able to make a decision in a matter of nanoseconds.”


Facebook alone has in excess of 1.7 billion users worldwide. With that many users producing content on a daily basis, the idea that human employees could monitor the ever-growing repository of data is unrealistic.

How Social Media Platforms Use AI For Hate Speech Regulation


As hate speech continues to target individuals and groups online, social media platforms have already begun to rely on AI to regulate it.

YouTube

In 2017, YouTube invested in powerful new machine learning technology to scale the efforts of its human moderators in taking down videos and comments that violated its policies.

Read More: Who discerns what is fake news and why don’t we decide for ourselves?

YouTube has also removed videos first flagged by its automated systems, many before anyone saw them. According to its data, automated flagging helped it remove 75% of violating content before it received a single view.

However, the platform's ‘over-reliance’ on AI has been questioned.

Facebook

Facebook has been using AI to clean up its platform for a while now. Mike Schroepfer, the company's CTO, has said that AI is the only way to prevent bad actors from taking advantage of the service.

Facebook engineers are also using a technique called ‘self-supervised learning’ to help the systems behind the site adapt faster to spotting new forms of hate speech.
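For context, ‘self-supervised learning’ refers to models that create their own training signal from raw text, for example by predicting masked-out words, rather than relying on manually labelled examples. The sketch below illustrates the idea with an off-the-shelf masked language model; the model and library are assumptions chosen for demonstration, not Facebook's internal tooling.

```python
# "Self-supervised learning" means a model generates its own training signal
# from raw, unlabelled text, for example by predicting words that have been
# masked out. A minimal illustration using a pretrained masked language model
# via the Hugging Face `transformers` library; the model name is an assumption
# for demonstration and is unrelated to Facebook's internal systems.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# The model learned to fill in blanks like this during pretraining,
# without any human-written labels.
for prediction in fill_mask("Hate speech should be [MASK] from the platform."):
    print(prediction["token_str"], round(prediction["score"], 3))
```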

Twitter

In the last few years, Twitter has upgraded its AI capabilities. In 2017, it started using an AI algorithm to predict which tweets users would most like to see, reorganizing its feed accordingly. Now, Twitter uses AI algorithms to identify hate speech and content from extremist groups.

According to the Financial Times, the platform took down 300,000 terrorist-related accounts with the help of AI-powered bots in the first half of 2017.

A combination of human and AI, then. Will this combination deliver the much-needed blend of intelligence that is free of human bias yet retains the human touch required to tell a genuine violation from an opinion?

As Walsh says, “One can presume that the definitions used to seek out hate speech are strong and all-encompassing, but there is definitely the possibility that biases within the neural inputs used to create the algorithm may be protecting certain demographics more effectively than others.”

AI Needs Training, But Inherits Bias

AI is only ever as clever as the data used to train it, and it has been shown repeatedly that biases in the human inputs used to develop AI algorithms can produce prejudices within the AI itself.

Cuk thinks that, provided it is well trained, AI may be able to discern the tone and message of a post, and whether it amounts to hate speech or mere criticism.

“Any AI is as good as the backend system that feeds it information and guides its judgement. There is not one that is flawless currently, but I think we are almost there. In the next 3-4 years, I believe we will have a model that is 90% effective, leaving 10% to be done by human monitors,” says Cuk.
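Cuk's point about the “backend system that feeds it information” can be made concrete with a toy example: a text classifier learns only what its training data and labels show it, bias included. The sketch below is a deliberately simplistic illustration; the dataset, labels, and library choices are assumptions, not any production system.

```python
# Minimal sketch of why a model is only as good as the data that feeds it:
# a simple text classifier learns whatever patterns, and whatever biases,
# are present in its training labels. The tiny dataset below is purely
# illustrative and the probabilities it produces mean nothing in practice.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labelled data: 1 = hate speech, 0 = acceptable. If human labellers
# systematically mislabel posts about one group, the model inherits that bias.
texts = [
    "I hate people from group A, they should disappear",
    "members of group B are subhuman and deserve violence",
    "I strongly disagree with this policy",
    "that film was terrible, what a waste of time",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# The classifier's judgement on new posts reflects only what it was shown above.
for post in ["group A ruined everything", "I dislike this restaurant"]:
    prob = model.predict_proba([post])[0][1]
    print(f"{post!r} -> estimated hate speech probability {prob:.2f}")
```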


Diversity could help build a less biased AI, says Walsh. It is important for these algorithms to be carefully developed by a diverse development team, with the interests of as many at-risk demographics as possible represented.

This, according to Walsh, can help ensure that the AI is fair and equally able to deal with hate speech and prejudice directed against any member of society.

“Developing AI of this kind should not be a one-time job, but rather an ongoing process that seeks to monitor and analyze the current methodology to ensure that it is working as well as possible and that it is being improved where necessary,” says Walsh.

How Big Tech Companies ‘Define’ Hate Speech

With so many definitions of hate speech, we are courting trouble if we think we can have a single definition.

Platforms and organizations are trying to narrow it down to at least include any content that specifically targets a group, or incites violence towards it.

In other words, whatever can ‘harm’ others. Echoes of John Stuart Mill's harm principle can be seen in most of the hate speech definitions used by social media platforms and governments.

YouTube

According to Google-owned YouTube’s policy:

“We remove content promoting violence or hatred against individuals or groups based on any of the following attributes, including Age, Caste, Disability, Ethnicity, Gender Identity, Nationality, Race, Immigration Status, Religion, Sex/Gender, Sexual Orientation, Victims of a major violent event and their kin, and Veteran Status”

Recently, the video-sharing website added to its hate speech policy by specifically prohibiting videos that promote or glorify Nazi ideology, as well as content denying that well-documented violent events, like the Holocaust or the shooting at Sandy Hook Elementary, took place.

Facebook

Facebook states in its policy that it removes content in order to prevent specific harm. Facebook says it will:

“allow content that might otherwise violate our standards if we feel that it is newsworthy, significant, or important to the public interest. We do this only after weighing the public interest value of the content against the risk of real-world harm”

How is Facebook not considered a publisher?

Instagram

Facebook-owned Instagram removes content that:

“contains credible threats or hate speech, content that targets private individuals to degrade or shame them, personal information meant to blackmail or harass someone, and repeated unwanted messages.”

They also say, “It’s never OK to encourage violence or attack anyone based on their race, ethnicity, national origin, sex, gender, gender identity, sexual orientation, religious affiliation, disabilities, or diseases.”

Twitter

Calling its policy ‘Hateful Conduct’, Twitter says:

“You may not promote violence against or directly attack or threaten other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease. We also do not allow accounts whose primary purpose is inciting harm towards others on the basis of these categories.”

Governments on Hate Speech

US 

Freedom of speech in the US is a fundamental right, which means hate speech is largely not regulated by the government. The First Amendment, ratified on December 15, 1791, provides (in relevant part) that “Congress shall make no law … abridging the freedom of speech, or of the press”. The Fourteenth Amendment, ratified on July 9, 1868, has been interpreted by the Supreme Court as extending this prohibition to laws enacted by the states.

UK

In England and Wales, expressions of hatred toward someone on account of that person's color, race, disability, nationality (including citizenship), ethnic or national origin, religion, gender identity, or sexual orientation are forbidden.

Also, any threatening or abusive communication intended to harass, alarm, or distress someone is forbidden. The penalties for hate speech include fines, imprisonment, or both.

France

The hate speech laws in France are matters of both civil and criminal law. They forbid communication intended to incite discrimination against, hatred of, or harm to anyone belonging to an ethnicity, a nation, a race, a religion, a sex, a sexual orientation, or a gender identity, or because he or she has a disability.

EU

European laws are not uniformly applied across the EU. The European Court of Human Rights (ECtHR) does not offer an accepted definition of ‘hate speech’ but does offer parameters by which prosecutors can decide whether speech is entitled to the protection of freedom of expression.

Disclosure: This article includes a client of an Espacio portfolio company. 
