
DARPA wants to model how ‘disinformation’ flows from fringe to mainstream platforms

By gaining a deeper understanding of information pathways, DARPA-funded research could become a powerful tool for the Disinformation Governance Board, aka the ‘Ministry of Truth’: perspective

DARPA is looking to automate its understanding of how information flows from fringe to mainstream platforms with a new AI research opportunity.

On May 4, 2022, the Defense Advanced Research Projects Agency (DARPA) issued an AI research opportunity for Modeling Influence Pathways (MIP), which “seeks to connect information flows into pathways used to disseminate and amplify mis-, dis-, and manipulated information.”

“MIPs seeks to connect information flows into pathways used to disseminate and amplify mis-, dis-, and manipulated information” — DARPA, MIP

Rather than attempting to come up with analytics for detecting misinformation, disinformation, or manipulated information, MIP is focused on modeling the pathways that the information flows through.

These information pathways include:

  • Social media
  • Memes
  • Blogs
  • Videos
  • Corporate media
  • Human subjects
  • And more

According to DARPA, “The discovery of pathways and patterns will move our understanding of information operations from today’s current focus on individual users, messages, memes, or campaigns to a higher level, structural and temporal understanding of how these operations unfold.”

In modeling the flow of information, the Pentagon’s research funding arm could potentially give the Department of Homeland Security (DHS) a powerful tool for its incoming “Disinformation Governance Board,” which critics are already calling the “Ministry of Truth.”

“MIPs will require data that may be sourced from human subjects” — DARPA, MIP

“Identifying patterns that emerge among [influence] pathways, such as the propagation from certain niche platforms to more mainstream platforms, is essential to understanding influence operations” — DARPA, MIP

The DHS already has a DARPA-like agency under its Science & Technology Directorate (S&T) called the Homeland Security Advanced Research Projects Agency (HSARPA), which has also coordinated with DARPA.

For example, in August 2020, DHS’ S&T transferred its explosive threat detection technology over to DARPA to “help keep our warfighters and our nation safe from weapons of mass destruction (WMD) threats.”

Another DARPA/HSARPA connection is former HSARPA Director and DARPA program manager Paul Benda, who “was the de facto head of R&D for the DHS, the senior executive in charge of the design and installation of all security systems on the Pentagon Reservation, and a program manager at DARPA,” according to his bio.

“Some data sources that will enable MIPs’ objectives may contain data that identifies one or more living individuals” — DARPA, MIP

If the incoming DHS Disinformation Governance Board ever got its hands on DARPA’s MIP tools, it would be able to collect data not just from websites, memes, or social media platforms, but from living human beings.

According to the DARPA announcement, “MIPs will require data that may be sourced from human subjects.

“Human subject means a living individual about whom an investigator (whether professional or student) conducting research obtains data through intervention or interaction with an individual, or obtains data that is identifiable private information.

“Even if there is no intent to collect human subject data, some data sources that will enable MIPs’ objectives may contain data that identifies one or more living individuals.”

With MIP, the Pentagon is adding yet another tool to its information warfare kit that has been decades in the making.

“MIPs complements maturing capabilities for identification of misinformation, disinformation, and manipulated information with better mapping and understanding of the pathways used to disseminate and amplify that information” — DARPA, MIP

When combined with other DARPA research programs, MIP follows a pattern of the Department of Defense looking to AI to automate the detection and moderation of online influence campaigns.

For MIP, DARPA says the research “will complement various DARPA efforts,” in particular, “by providing a higher (ecosystem) level of analysis” than programs such as INCAS and SocialSim.

The main point of contact for MIP is Dr. Brian Kettler, who is also the program manager for both INCAS and SocialSim.

Dr. Kettler came to DARPA from Lockheed Martin in March 2019, where he was a Lockheed Martin Fellow and chief scientist of the Informatics Lab in the Advanced Technology Labs.

“INCAS will exploit primarily publicly-available data sources including multilingual, multi-platform social media (e.g. blogs, tweets, messaging), online news sources, and online reference data sources” — DARPA, INCAS

With Dr. Kettler at the helm, the INCAS research program is aimed at detecting, categorizing, and tracking online geopolitical influence campaigns, including those that fly under the radar of most analysts.

To achieve its goals, “INCAS will exploit primarily publicly-available data sources including multilingual, multi-platform social media (e.g. blogs, tweets, messaging), online news sources, and online reference data sources.”

Dr. Kettler’s other program, SocialSim, was launched to “focus specifically on information spread and evolution.”

The idea behind SocialSim was that “a simulation of the spread and evolution of online information, if accurate and at-scale, could enable a deeper and more quantitative understanding of adversaries’ use of the global information environment than is currently possible using existing approaches.”
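The kind of at-scale spread simulation SocialSim describes can be illustrated with a toy model. Below is a minimal sketch using an independent-cascade model, a standard approach in network diffusion research; the follower graph, function name, and parameters are hypothetical illustrations, not anything published by SocialSim:

```python
import random

def simulate_spread(followers, seeds, share_prob=0.2, rng=None):
    """Toy independent-cascade model: each user who receives a piece of
    content passes it to each of their followers with probability `share_prob`."""
    rng = rng or random.Random(0)  # seeded for reproducible runs
    reached = set(seeds)
    frontier = list(seeds)
    while frontier:
        next_frontier = []
        for user in frontier:
            for follower in followers.get(user, []):
                if follower not in reached and rng.random() < share_prob:
                    reached.add(follower)
                    next_frontier.append(follower)
        frontier = next_frontier
    return reached

# Hypothetical follower graph: "a" is followed by "b" and "c", etc.
followers = {"a": ["b", "c"], "b": ["d"], "c": ["d", "e"]}
print(simulate_spread(followers, seeds=["a"], share_prob=1.0))
```

Running many such simulations over a realistic platform graph is what would let an analyst make the “quantitative” predictions about spread that the program describes.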

“DARPA seeks to develop tools to help identify misinformation or deception campaigns and counter them with truthful information, reducing adversaries’ ability to manipulate events” — DARPA, Social Media in Strategic Communication (SMISC)

Going back over a decade, DARPA launched the Social Media in Strategic Communication (SMISC) program in 2011 “to help identify misinformation or deception campaigns and counter them with truthful information” on social media.

More recently, DARPA announced the Civil Sanctuary project, which looks to create multilingual AI moderators that “will exceed current content moderation capabilities by expanding the moderation paradigm from detection/deletion to proactive, cooperative engagement.”

Through its research programs, DARPA continues to develop powerful surveillance tools for modeling, measuring, and analyzing the spread of information.

“We’re flagging problematic posts for Facebook that spread disinformation” — White House Press Secretary Jen Psaki, July 2021

While the First Amendment bars Congress from making any law abridging the freedom of speech, the current administration has been coordinating with social media companies to flag problematic posts that spread alleged disinformation.

DARPA’s MIP is not meant to determine whether a piece of content is disinformation; that responsibility is beyond its scope. Instead, DARPA is looking to trace the flow of disinformation after it has already been identified.

In the hands of tyrannical governments, DARPA’s disinformation modeling tools would aid any authoritarian censorship effort.

“The discovery of pathways and patterns will move our understanding of information operations from today’s current focus on individual users, messages, memes, or campaigns to a higher level, structural and temporal understanding of how these operations unfold” — DARPA, MIP

For MIP, DARPA is looking for submissions of innovative basic or applied research concepts in the technical domains of:

  • Social media analysis
  • Classification
  • Pattern discovery
  • Computational social science modeling

The following features are beyond the scope of MIP:

  • Mapping social networks of individual users
  • Attributing actions to specific actors or entities
  • Analytics for detecting misinformation, disinformation, manipulated information, or influence campaigns
  • Advancing text/multimedia or social network analysis techniques
  • Advancing bot detection technologies

MIP will explore AI technologies for:

  1. Connecting various identified influence messaging flows across platforms
  2. Learning, mapping, and modeling which pathways are used by what types of information
  3. Discovering patterns that characterize these pathways

“Pathways,” according to the MIP announcement, “might be characterized as a series of edges between nodes in a graph where the nodes represent online or offline platforms or communities.

“Platforms include social media sites, other websites, and broadcast media channels. Communities include online explicit or implicit communities such as discussion groups or ‘offline,’ geographically-rooted communities consuming information from platforms. Nodes could also potentially represent individual persons (e.g., witting or unwitting key influencers).”
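The graph structure the announcement describes can be sketched in code. The following is a minimal illustration, assuming a simple directed graph in which nodes are platforms or communities and edges are observed message flows; the class, method names, and node labels are hypothetical, not drawn from the MIP announcement:

```python
from collections import defaultdict

class InfluencePathwayGraph:
    """Directed graph: nodes are platforms/communities, edges are observed flows."""

    def __init__(self):
        self.edges = defaultdict(list)  # node -> list of downstream nodes

    def add_flow(self, source, target):
        """Record that content seen on `source` later appeared on `target`."""
        self.edges[source].append(target)

    def pathways(self, start, max_hops=4):
        """Enumerate simple paths from `start`, e.g. fringe forum -> mainstream outlet."""
        paths = []

        def walk(node, path):
            if len(path) > max_hops:
                return
            for nxt in self.edges[node]:
                if nxt in path:
                    continue  # skip cycles
                paths.append(path + [nxt])
                walk(nxt, path + [nxt])

        walk(start, [start])
        return paths

# Hypothetical example: a meme moves from a niche forum to broadcast media
g = InfluencePathwayGraph()
g.add_flow("niche_forum", "social_media")
g.add_flow("social_media", "blog")
g.add_flow("blog", "broadcast_media")
print(g.pathways("niche_forum"))
```

In this framing, the “patterns” MIP seeks would be recurring path shapes, such as niche-platform-to-mainstream propagation, recovered across many such graphs.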

In gaining a deeper understanding of how, why, and where information flows, DARPA-funded research could be weaponized to spread disinformation, aid geopolitical influence campaigns, and censor dissent.

Tim Hinchliffe

The Sociable editor Tim Hinchliffe covers tech and society, with perspectives on public and private policies proposed by governments, unelected globalists, think tanks, big tech companies, defense departments, and intelligence agencies. Previously, Tim was a reporter for the Ghanaian Chronicle in West Africa and an editor at Colombia Reports in South America. These days, he is only responsible for articles he writes and publishes in his own name. tim@sociable.co
