Social Media

Facebook says it doesn’t benefit from hate, but its algorithms tell a different story: op-ed

‘Zuckerberg is hiding the fact that he knows that hate, lies & divisiveness are good for business’ — Dr. Hany Farid

At a time when huge companies are suspending their advertising campaigns on Facebook, the social media giant claims it doesn’t benefit from hate, yet its algorithms and business model tell a different story.

Nick Clegg

“I want to be unambiguous: Facebook does not profit from hate” — Nick Clegg

Last week, Facebook VP of Global Affairs and Communications Nick Clegg declared that Facebook does not benefit or profit from hate in a story that ran on AdAge and on the Facebook blog.

“I want to be unambiguous: Facebook does not profit from hate,” wrote Clegg.

“Billions of people use Facebook and Instagram because they have good experiences — they don’t want to see hateful content, our advertisers don’t want to see it, and we don’t want to see it. There is no incentive for us to do anything but remove it,” he added.

However, Facebook can still benefit from hate even when it removes hate speech because its algorithms fuel divisiveness, and hate is good for Facebook’s business model, according to expert witness testimony during a House Committee on Energy and Commerce hearing last month.

Dr. Hany Farid

“Mark Zuckerberg is hiding the fact that he knows that hate, lies, and divisiveness are good for business” — Dr. Hany Farid

UC Berkeley professor and expert in digital forensics Dr. Hany Farid testified that Facebook has a toxic business model that puts profit over the good of society and that its algorithms have been trained to encourage divisiveness and the amplification of misinformation.

“Mark Zuckerberg is hiding the fact that he knows that hate, lies, and divisiveness are good for business,” Farid testified.

“They didn’t set out to fuel misinformation and hate and divisiveness, but that’s what the algorithms learned.

“The core poison here is the business model” — Dr. Hany Farid

“Algorithms have learned that the hateful, the divisive, the conspiratorial, the outrageous, and the novel keep us on the platforms longer, and since that is the driving factor for profit, that’s what the algorithms do.”

“The core poison here is the business model. The business model is that when you keep people on the platform, you profit more, and that is fundamentally at odds with our societal and democratic goals,” he added.

Tristan Harris

“Facebook and the other companies will often claim that they’re holding up a neutral mirror to society — but they’re not! They’re holding up a fun house mirror” — Tristan Harris

Another point that Facebook’s Clegg penned in his company’s defense was that “Platforms like Facebook hold up a mirror to society.”

However, that mirror is a distorted one, according to Center for Humane Technology President and ex-Google ethicist Tristan Harris.

“Facebook and the other companies will often claim that they’re holding up a neutral mirror to society […] but they’re not!” said Harris in a presentation last month.

“They’re holding up a fun house mirror, a distorted mirror that tends to amplify the things that worked for manipulating human vulnerabilities and preying on the deep, soft underbelly of our hatred, our fear, our anxiety, our emotions instead of actually trying to help us,” he added.

“Platforms like Facebook hold up a mirror to society” — Nick Clegg

Facebook’s algorithms were designed to keep users on the platform for as long as possible in order to rake in more ad revenue, and they manipulate human vulnerabilities to keep people’s eyes glued to the page.

We don’t spend our days looking for car crashes, but when we pass by one, we can’t help but slow down and look.

The same thing happens when we are bombarded by outrageous social media posts — we can’t help but look at the flashy information that tickles our senses like a virtual car crash.

“Tech companies manipulate our sense of identity, self-worth, relationships, beliefs, actions, attention, memory, physiology and even habit-formation processes, without proper responsibility” — Tristan Harris

“Tech companies manipulate our sense of identity, self-worth, relationships, beliefs, actions, attention, memory, physiology and even habit-formation processes, without proper responsibility,” Harris testified before Congress in January.

“Technology has directly led to the many failures and problems that we are all seeing: fake news, addiction, polarization, social isolation, declining teen mental health, conspiracy thinking, erosion of trust, breakdown of truth,” he added.

Facebook’s Clegg concluded his argument, “We may never be able to prevent hate from appearing on Facebook entirely, but we are getting better at stopping it all the time.” I have to agree with him that hate can never be entirely prevented; people will always be people.

But Clegg’s focus on preventing and removing hate speech does nothing to address the fundamental issue — that Facebook’s algorithms and entire business model are drivers of hate and divisiveness.

Clegg is looking at a symptom, not a cause.

Facebook knows that its algorithms “exploit the human brain’s attraction to divisiveness,” according to the Wall Street Journal, and yet “Facebook executives shut down efforts to make the site less divisive.”

Brandi Collins-Dexter

“When executives at Facebook were alerted that their algorithms were dividing people in dangerous ways, they rushed to kill any efforts to create a healthy dialogue on the platform” — Brandi Collins-Dexter

As Color Of Change Senior Campaign Director Brandi Collins-Dexter testified last month in the same hearing as Farid:

“When executives at Facebook were alerted that their algorithms were dividing people in dangerous ways, they rushed to kill any efforts to create a healthy dialogue on the platform.”

Strong evidence points to Facebook directly benefiting from hate, regardless of whether or not “hate speech” itself is removed.

Tim Hinchliffe

The Sociable editor Tim Hinchliffe covers tech and society, with perspectives on public and private policies proposed by governments, unelected globalists, think tanks, big tech companies, defense departments, and intelligence agencies. Previously, Tim was a reporter for the Ghanaian Chronicle in West Africa and an editor at Colombia Reports in South America. These days, he is only responsible for articles he writes and publishes in his own name. tim@sociable.co
