
ESG can evaluate platforms for disinformation, hate speech & abuse material policies: WEF

June 12, 2024


Unelected globalists are associating disinformation with human rights abuses to empower themselves & silence dissent: perspective

In a new report, the World Economic Forum (WEF) says that ESG metrics can prove valuable for evaluating platforms on their handling of disinformation, hate speech, and abuse material.

Published on June 6, 2024, the WEF white paper, “Making a Difference: How to Measure Digital Safety Effectively to Reduce Risks Online,” says that “In an increasingly interconnected world, it is essential to measure digital safety in order to understand risks, allocate resources and demonstrate compliance with regulations.”

If measuring digital safety is considered to be essential, what then are the actual online harms that would necessitate measuring digital safety?

The latest white paper only gives three examples: disinformation, hate speech, and abuse material — as if they were all equal under the banner of online harm.

“ESG metrics present another valuable perspective for evaluating online safety”

How to Measure Digital Safety Effectively to Reduce Risks Online, WEF, June 2024

One method for evaluating online safety described in the latest WEF white paper is to leverage ESG scoring, which is essentially a social credit system for companies, designed to make them fall in line with unelected globalist ideologies, even when these ESG policies are detrimental to their bottom line.

“Within ESG investing, companies are assessed based on their environmental impact, social responsibility and corporate governance practices,” the report reads.

“Similarly, online platforms could be evaluated based on their efforts to promote a safe and inclusive online environment, and the transparency of content moderation policies.

Online platforms can also be evaluated based on their processes, tools and rules designed to promote the ‘safe use’ of their services in a manner that mitigates harm to vulnerable non-user groups.”

And who will be evaluating online platforms in this Orwellian dystopia? Why, the unelected globalists themselves, of course!

No need to put it to a vote. The people can’t be trusted to decide their own fate for themselves.

Best to leave these decisions and all the power to bureaucrats who have our best interests at heart for the greater, collectivist good.

“An increase in the speed of content removals may reflect proactive moderation efforts, but it could also hint at overzealous censorship that stifles free expression”

How to Measure Digital Safety Effectively to Reduce Risks Online, WEF, June 2024

The WEF considers disinformation, hate speech, and abuse material as all being online harms that need to be measured and rectified.

But why do they lump everything together under this vague, blanket term of digital safety?

It is so that unelected globalist NGOs like the WEF can have more power and influence over government regulators concerning what type of information people are allowed to access through their service providers.

According to the report, “Digital safety metrics reinforce accountability, empowering NGOs and regulators to oversee service providers effectively.

They also serve as benchmarks for compliance monitoring, enhancing user trust in platforms, provided they are balanced with privacy considerations and take into account differentiation among services.”

For the unelected globalist bureaucrats, measuring digital safety is about empowering themselves and forcing people into compliance with unelected globalist ideologies (with the help of regulators), all while balancing privacy considerations that are antithetical to everything they’re trying to achieve with the great reset and the fourth industrial revolution.

WEF founder Klaus Schwab has stated on numerous occasions that the so-called fourth industrial revolution will lead to the fusion of our physical, biological, and digital identities.

Schwab openly talks about a future where we will decode people’s brain activity to know how they’re feeling and what they are thinking and that people’s digital avatars will live on after death and their brains will be replicated using artificial intelligence.

How’s that for balancing privacy considerations in the digital world?

“Digital safety decisions must be rooted in international human rights frameworks”

Typology of Online Harms, WEF, August 2023

While the latest WEF white paper only lists disinformation, hate speech, and abuse material, it builds upon an August 2023 insight report entitled “Toolkit for Digital Safety Design Interventions and Innovations: Typology of Online Harms,” which expands the scope of what constitutes online harm into various categories:

  • Threats to personal and community safety
  • Harm to health and well-being
  • Hate and discrimination
  • Violation of dignity
  • Invasion of privacy
  • Deception and manipulation

Many of the harms listed in last year’s report involve heinous acts against people of all ages and identities, but there, too, in that list of online harms, the WEF highlights misinformation and disinformation without giving a single, solitary example of either one.

With misinformation and disinformation, the typology report states that “Both can be used to manipulate public opinion, interfere with democratic processes such as elections or cause harm to individuals, particularly when it involves misleading health information.”

In the same report, the unelected globalists admit that it’s almost impossible “to define or categorize common types of harm.”

The authors say that “there are regional differences in how specific harms are defined in different jurisdictions and that there is no international consensus on how to define or categorize common types of harm.

Considering the contextual nature of online harm, the typology does not aim to offer precise definitions that are universally applicable in all contexts.”

By not offering precise definitions, they are deliberately making “online harm” a vague concept that can be left wide open to just about any interpretation, which makes quashing dissent and obfuscating the truth even easier because these “online harms,” in their eyes, must be seen as human rights abuses.

“By framing online harms through a human rights lens, this typology emphasizes the impacts on individual users and aims to provide a broad categorization of harms to support global policy development”

Typology of Online Harms, WEF, August 2023

Once again, the authors are deliberately putting misinformation, disinformation, and so-called hate speech in the same category as abuse, harassment, doxing, and criminal acts of violence under this “broad categorization of harms.”

That way, they can treat anyone they deem as a threat for speaking truth to power in the same manner as they would for people who commit the most egregious crimes known to humanity.

The title of the latest white paper suggests that it’s all about measuring digital safety, but that framing can be misleading.

It’s like what lawmakers do when they introduce bills like the Inflation Reduction Act, which had nothing to do with reducing inflation and everything to do with advancing the green agenda, decarbonization, and net-zero policies.

Similarly, the WEF’s latest white paper may have little or nothing to do with reducing risks online, as the title suggests.

But it does have a lot to do with making sure that misinformation, disinformation, and hate speech are associated with human rights abuses and other acts of real criminality.

In doing so, the ESG proponents can swoop in and consolidate more power for their public-private partnerships — the fusion of corporation and state.


