
WEF blog calls for an ‘International Cybercrime Coordination Authority’ to impose collective penalties on uncooperative nations

April 28, 2025


Perspective: How long until online misinformation and disinformation are considered cybercrimes?

The World Economic Forum (WEF) has published a blog post calling for the creation of a global authority on cybercrime that would wield extradition and enforcement powers over uncooperative nations.

Last week, the WEF published a blog post entitled, “4 steps towards creating an international agency against cybercrime,” in which the authors argue for the creation of an “International Cybercrime Coordination Authority (ICCA)” to act as an intelligence-sharing body between like-minded nations that would also have the power to “standardize cybercrime extradition laws” and to “impose collective penalties on nations” that refuse to cooperate.

“It is time to formalize these efforts through the creation of an International Cybercrime Coordination Authority (ICCA), a standing alliance of nations committed to coordinated enforcement, intelligence-sharing, legal harmonization and joint disruption of cybercriminal infrastructure”

WEF, “4 steps towards creating an international agency against cybercrime,” April 2025

Written by Anna Sarnek of Amazon Web Services and Ross Haleliuk of Venture in Security, the article was published under the WEF’s Centre for Cybersecurity, and in it the authors argue that the establishment of an ICCA would go far beyond the intelligence-sharing capabilities of groups like the Five Eyes, the UN, and NATO and into the realm of collective punishment and extradition powers over nations.

“While intelligence-sharing networks like Five Eyes and global institutions like the United Nations (UN) have historically played a role in international warfare, they are not sufficient to address the scale, complexity and speed of modern digital threats,” they write.

To the unelected globalists, every problem is a global one, and every problem requires a global solution bathed in bureaucracy.

“The ICCA would push to standardize cybercrime extradition laws, simplify digital evidence-sharing procedures and impose collective penalties (financial or diplomatic) on nations that refuse to cooperate or actively harbor offenders”

WEF, “4 steps towards creating an international agency against cybercrime,” April 2025

Holding up Russia as the archetypal boogeyman providing a safe haven for cybercriminals, the WEF authors want the ICCA, once established, to “impose collective penalties” on the country for being uncooperative and actively harboring offenders.

“In order to stop safe havens like Russia, we need to standardize extradition laws for cybercriminals and strengthen Interpol-led cybercrime enforcement,” they write.

However, before setting up an international body that would operate like a hybrid of an international criminal court, a global police force, and Five Eyes rolled into one, a definition of what cybercrime actually is will be needed.

“Building on top of the work by Partnership against Cybercrime (PAC), a globally recognized legal definition of cybercrime could include attacks on hospitals, emergency services, airports and public utilities, ransomware, digital extortion, financial fraud, phishing, and identity theft at scale, as well as operation of criminal infrastructure such as botnets and dark web marketplaces,” the authors write.

What the authors leave out is that the Partnership against Cybercrime Working Group, and the WEF in general for that matter, also considers online “disinformation” to be a threat to democratic governments, as detailed in an Insight Report from November 2020.

“In addition to financial crimes, criminals use internet based infrastructure to uphold terrorism and drug trafficking, and spread disinformation to destabilize governments and democracies”

WEF, Partnership Against Cybercrime Insight Report, November 2020

Commenting on this point was the investigative journalist, author, and contributing editor at Unlimited Hangout who wrote “Ending Anonymity: Why the WEF’s Partnership Against Cybercrime Threatens the Future of Privacy” for The Last American Vagabond in July 2021:

“Notably, the WEF Partnership against Cybercrime employs a very broad definition of what constitutes a ‘cybercriminal’ as they apply this label readily to those who post or host content deemed to be ‘disinformation’ that represents a threat to ‘democratic’ governments. The WEF’s interest in criminalizing and censoring online content has been made evident by its recent creation of a new Global Coalition for Digital Safety to facilitate the increased regulation of online speech by both the public and private sectors.”

“Shoring up trust will be a key goal of cybersecurity efforts over the next decade. The online spread of mis- and disinformation are now core cybersecurity concerns”
“Cybersecurity will become less about protecting the confidentiality and availability of information and more about protecting its integrity and provenance”

WEF, Cybersecurity Futures 2030: New Foundations, December 2023

If that wasn’t enough, the WEF declared that online misinformation and disinformation were “core cybersecurity concerns” in a report published on December 5, 2023 entitled “Cybersecurity Futures 2030: New Foundations.”

According to the report, “Stable governments that follow through on long-term technology and cybersecurity strategies can become trusted ‘brands,’ gaining advantages in attracting talent, seizing leadership opportunities in multilateral standards-setting processes and countering disinformation campaigns.”

Fast forward a few years, and the desire to stamp out all narratives that don’t align with unelected globalists at the UN and WEF has only intensified.

For the second year in a row, the WEF has declared that the greatest global risk is misinformation and disinformation.

According to the WEF Global Risks 2025 report:

Polarization “continues to fan the flames of misinformation and disinformation, which, for the second year running, is the top-ranked short- to medium-term concern across all risk categories.”

“Efforts to combat this risk are coming up against a formidable opponent in Generative AI-created false or misleading content that can be produced and distributed at scale,” which was the same assessment given in the 2024 report.

“The Global Initiative for Information Integrity on Climate Change responds to the commitment in the Global Digital Compact, adopted by United Nations Member States at the Summit of the Future in September 2024, which encourages UN entities, in collaboration with Governments and relevant stakeholders, to assess the impact of mis- and disinformation on the achievement of the Sustainable Development Goals”

G20 Leaders Summit, November 2024

Last year, the G20 launched the “Global Initiative for Information Integrity on Climate Change” as an attempt “to address disinformation campaigns that are delaying and derailing climate action.”

In the name of “information integrity” any narrative that could impede upon the UN’s Sustainable Development Goals (SDGs) is to be stamped out.

According to the G20 Leaders Summit 2024, “The Initiative responds to the commitment in the Global Digital Compact, adopted by United Nations Member States at the Summit of the Future in September 2024, which encourages UN entities, in collaboration with Governments and relevant stakeholders, to assess the impact of mis- and disinformation on the achievement of the Sustainable Development Goals.”

At the time, UN Secretary-General António Guterres remarked that member states must work together to crush climate disinformation.

“We must fight the coordinated disinformation campaigns impeding global progress on climate change, ranging from outright denial to greenwashing to harassment of climate scientists. Through this Initiative, we will work with researchers and partners to strengthen action against climate disinformation”

António Guterres, G20 Leaders Summit, November 2024

In 2023, the UN established a “voluntary UN Code of Conduct for Information Integrity on Digital Platforms” replete with policies aimed at silencing dissenting voices on digital platforms under the guise of mitigating “mis- and disinformation,” which is conveniently lumped in with hate speech.

To give you an idea of the sheer size and scope of the UN’s ambition to eradicate anything it deems “mis- and disinformation,” here are a few policy recommendations taken from the “Towards a United Nations Code of Conduct” section of the policy brief, which calls on not just member states but also private stakeholders (NGOs, businesses, academia, etc.), digital platforms, advertisers, and news media to do the UN’s bidding:

  • All stakeholders should refrain from using, supporting or amplifying disinformation and hate speech for any purpose.
  • All stakeholders should allocate resources to address and report on the origins, spread and impact of mis- and disinformation and hate speech, while respecting human rights norms and standards and further invest in fact-checking capabilities across countries and contexts.
  • All stakeholders should promote training and capacity-building to develop understanding of how mis- and disinformation and hate speech manifest and to strengthen prevention and mitigation strategies.
  • All stakeholders should take urgent and immediate measures to ensure the safe, secure, responsible, ethical and human rights-compliant use of artificial intelligence and address the implications of recent advances in this field for the spread of mis- and disinformation and hate speech.
  • Member States should ensure public access to accurate, transparent, and credibly sourced government information, particularly information that serves the public interest, including all aspects of the Sustainable Development Goals.
  • Member States should invest in and support independent research on the prevalence and impact of mis- and disinformation and hate speech across countries and languages, particularly in underserved contexts and in languages other than English, allowing civil society and academia to operate freely and safely.
  • Digital platforms and advertisers should ensure that advertisements are not placed next to online mis- or disinformation or hate speech, and that advertising containing disinformation is not promoted.
  • Digital platforms should ensure meaningful transparency regarding algorithms, data, content moderation and advertising.
  • Digital platforms should publish and publicize accessible policies on mis- and disinformation and hate speech, and report on the prevalence of coordinated disinformation on their services and the efficacy of policies to counter such operations.
  • Digital platforms should ensure the full participation of civil society in efforts to address mis- and disinformation and hate speech.
  • News media should ensure that all paid advertising and advertorial content is clearly marked as such and is free of mis- and disinformation and hate speech.

In its own words, the UN is primarily concerned with what it deems to be “misinformation” because the unelected globalist body is worried about information that may affect “UN mandate delivery and substantive priorities,” especially when it comes to criticism of its Sustainable Development Goals.

To bring it all back home, the WEF is promoting the creation of an International Cybercrime Coordination Authority, but how long will it be until online misinformation and disinformation are considered cybercrimes?

Don’t agree with climate change narratives? You’re a murderer for killing the planet. You’re committing ecocide.

Don’t agree with illegal migration? You’re a bigot and what you say is hate speech.

Don’t like what your representatives are doing and want to speak up? You’re undermining the authority of democratic governments.

Don’t want to be a part of the UN’s Agenda 2030 or the WEF’s great reset? You’re eroding trust in institutions.

Don’t want war? You’re supporting dictators, thugs, and terrorists.

With so-called hate speech constantly being lumped together with mis- and disinformation under the banner of “digital safety,” how long before the two are indistinguishable?


Image Source: AI-generated by Grok

