Government and Policy

The EU wants to put a ‘tax on disinformation’: Fractured Reality report

If your content is deemed to be disinformation by the ministry of truth, your speech will cost you in programmable, digital euros: perspective

The European Union wants to put a tax on disinformation, so that the internet isn’t polluted with falsehood “super-spreaders,” according to the EU “Fractured Reality” report.

Published on April 16, the EU Joint Research Center report, “Fractured Reality: How democracy can win the global struggle over the information space” decries the harmful effects that so-called misinformation and disinformation have on narratives concerning climate, health, elections, and trust in institutions.

At the same time, the EU report never gives a single, solitary example of disinformation — the very boogeymen the authors want to tax.

“A tax on certain types of content (e.g. disinformation) could hold platforms accountable for a flow rate of harms analogous to taxing polluting effluent of industrial firms”

European Union, Fractured Reality, April 2026

Absent any examples of misinformation or disinformation, the report shows how worried big institutions are about losing control over very specific narratives, particularly:

  • Mass vaccination (which they call public health)
  • Climate action (people are starting to question the solutions)
  • Election integrity (you’re not supposed to question methods or results)
  • Trust in institutions (an act of self-preservation)

“Misinformation contributed to vaccine hesitancy in over 2.3 million people in Canada during the vaccine rollout in 2021, estimating that immediate vaccination would have led to 28% fewer hospitalisations, and 2,800 fewer deaths, and that the costs of misinformation-induced hospitalisation were estimated at CAD$300 million”

European Union, Fractured Reality, April 2026

When it comes to “public health,” the authors’ primary concern is vaccination. Every bullet point is about vaccines, with one exception: “exposure to information that downplayed the severity of the pandemic.”

For “climate action,” they say misinformation and disinformation now include any criticisms about “costs, feasibility, and fairness” — all of which are genuine concerns about policies like net-zero, yet they’re treated as misinformation.

For “Democracy and elections,” they don’t want you mentioning anything about voter fraud, whether real or merely perceived. If you believe in the opposition party, you’re most likely a victim of “misleading political claims.”

Then there’s “erosion of trust and social norms,” which is ironic because the authors never stopped to think about why they lost trust in the first place.

“Misinformation also exploits existing vulnerabilities—such as polarization and institutional distrust—and channels them into concrete outcomes, from changes in health behavior or reduced support for climate action to erosion of electoral trust or activating extremists to act on their beliefs”

European Union, Fractured Reality, April 2026

Unelected globalists are desperate to control narratives while virtue-signaling that they’re safeguarding “our democracy.”

Another tax they wish to apply is one on advertising.

According to the report, “To better internalize social harms, legislation can refine the advertising tax to account for the concentration of harmful or problematic content in a platform’s output—akin to taxing the toxicity of industrial effluent rather than its sheer volume. This modification aligns with well-established environmental principles and can be operationalized without engaging in speech regulation, although great care must be taken to circumscribe what is considered ‘harmful.’”

The report goes on to say that anyone who says anything online should sign an affirmation that what they are saying is true, which is obviously great for nuanced topics.

“Applied to digital speech, speakers could voluntarily signal content quality on attention markets (e.g., being free of disinformation) by staking resources (reputation, or monetary assets) to attest to their claims.”

So, if your content is free of disinformation, your speech is free of charge.

If your content is deemed to be disinformation by some type of ministry of truth, your speech will cost you in programmable, digital euros.

As the authors explain:

“This should create an equilibrium, where truthful speakers signal credibility (by pledging a surety) while false claims are penalized (speakers lose their surety), thus ensuring that higher quality information has an advantage—in stark contrast to the current business models. The system would need to be calibrated to ensure that more financially well-off actors cannot easily absorb costs from putting out many wrong claims.”
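Mechanically, what the report describes resembles a bond-and-slash scheme: lock a surety when you publish, get it back if an adjudicator deems the claim true, lose it otherwise. The report specifies no implementation, so the following is a minimal sketch under that reading — all names (`Speaker`, `post_claim`, `judged_true`) and stake amounts are hypothetical:

```python
# Hypothetical sketch of the report's "surety" staking idea.
# The report does not specify an implementation; names and
# parameters here are invented for illustration.

def settle_claim(stake: int, judged_true: bool) -> int:
    """Stake is returned if the claim is judged true,
    forfeited ("slashed") if judged false."""
    return stake if judged_true else 0

class Speaker:
    def __init__(self, balance: int):
        self.balance = balance

    def post_claim(self, stake: int, judged_true: bool) -> None:
        # The speaker locks a surety when publishing...
        self.balance -= stake
        # ...and recovers it only if the claim survives adjudication.
        self.balance += settle_claim(stake, judged_true)

truthful = Speaker(balance=100)
spreader = Speaker(balance=100)

# Ten claims each: the truthful speaker's sureties are all returned,
# while the "super-spreader" forfeits every one.
for _ in range(10):
    truthful.post_claim(stake=5, judged_true=True)
    spreader.post_claim(stake=5, judged_true=False)

print(truthful.balance)  # 100 — truthful speech ends up costing nothing
print(spreader.balance)  # 50 — false claims drain the surety
```

Note that the whole scheme hinges on the `judged_true` flag, which has to come from somewhere.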

So, who gets to decide which content is “free of disinformation?”

During the State of the European Union address on September 9, European Commission (EC) president Ursula von der Leyen urged the establishment of a “European Democracy Shield” while announcing the creation of a “European Center for Democratic Resilience” and a “Media Resilience Program” — all aimed at monitoring, detecting, and mitigating so-called disinformation.

The authors of the EU Fractured Reality report see Wikipedia as a shining example of hope in “collective human intelligence” because, in their words, “it has proven resilient to disinformation campaigns.”

“Collective human intelligence can be harnessed to achieve great things through digital commons—a resource that is in under-supply if left to market forces alone. Wikipedia is a particularly illustrative example because it has proven resilient to disinformation campaigns, even concerning content surrounding highly polarized issues and contested events such as the Russia-Ukraine war”

European Union, Fractured Reality, April 2026

The Fractured Reality authors also wish to see a return to the platform censorship we saw during the lockdowns, when governments colluded with big tech to flag content that didn’t align with the official COVID control narratives.

They want to bring back biased fact-checkers who claim to be independent but have their own agendas, as evidenced by the so-called Twitter Files.

“At the platform level, such interventions can target producers of misinformation, for example, by demonetizing accounts that disseminate falsehoods, downranking content flagged as false by independent fact-checkers, or suspending repeat offenders and super-spreaders of misinformation”

European Union, Fractured Reality, April 2026

And what do the authors think about people who value the bedrock of all democracy — free speech?

They call them “certain actors” as if they were some type of “frivolous fringe” to borrow an expression from World Economic Forum founder Klaus Schwab.

When it comes to those opposed to narrative control and censoring free speech under the guise of combatting disinformation that is harmful to “our democracy,” the report states:

“Although certain actors often frame these measures as threats to freedom of expression, the public and experts view system-level measures favorably, opening a space for more regulatory intervention.”

The EU Joint Research Center is doubling down on censorship efforts in an attempt to control narratives.

They lament a lack of public trust in their institutions, for which they blame misinformation and disinformation.

But like the lifeforce-sucking vampires that they are, they fear looking in the mirror. They are incapable of self-reflection.

After everything they’ve said and done — the lockdowns, vaccine passports, internet passports, the shuttering of small businesses, the greatest transfer of wealth to the richest people and corporations in human history, the censorship, the consolidation of power, the centralization of authority, the manufactured mass migration and energy crises, the endless wars, and the construction of a digital control grid — they still can’t figure out why people don’t trust them.

Better give them more control over all aspects of our daily lives.

If we don’t, the bad actors who question unelected globalist policies just might win!


Image Source: AI generated with ChatGPT

Tim Hinchliffe

The Sociable editor Tim Hinchliffe covers tech and society, with perspectives on public and private policies proposed by governments, unelected globalists, think tanks, big tech companies, defense departments, and intelligence agencies. Previously, Tim was a reporter for the Ghanaian Chronicle in West Africa and an editor at Colombia Reports in South America. These days, he is only responsible for articles he writes and publishes in his own name. tim@sociable.co
