Government and Policy

US spy community wants to identify threats posed by tech behind ChatGPT, large language models

The US Intelligence Advanced Research Projects Activity (IARPA) issues a request for information (RFI) to identify potential threats and vulnerabilities that large language models (LLMs) may pose.

“IARPA is seeking information on established characterizations of vulnerabilities and threats that could impact the safe use of large language models (LLMs) by intelligence analysts”

While not an official research program yet, IARPA’s “Characterizing Large Language Model Biases, Threats and Vulnerabilities” RFI aims “to elicit frameworks to categorize and characterize vulnerabilities and threats associated with LLM technologies, specifically in the context of their potential use in intelligence analysis.”

Many vulnerabilities and potential threats are already known.

For example, you can ask ChatGPT to summarize or make inferences on just about any given topic, and it can generate an explanation that sounds convincing.

However, those explanations can also be completely false.

As OpenAI describes it, “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers.”
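One widely discussed (and imperfect) way to flag such plausible-sounding but unreliable answers is self-consistency checking: sample the same question several times and measure how often the answers agree. The sketch below is purely illustrative — the `agreement_score` helper and the sample answers are assumptions, not anything IARPA or OpenAI has endorsed.

```python
from collections import Counter

def agreement_score(answers):
    """Return the most common answer and the fraction of samples that match it.

    Low agreement across independently sampled answers is one crude signal
    that a model may be confabulating rather than recalling a stable fact.
    """
    if not answers:
        raise ValueError("need at least one sampled answer")
    counts = Counter(a.strip().lower() for a in answers)
    top_answer, top_count = counts.most_common(1)[0]
    return top_answer, top_count / len(answers)

# Three of four hypothetical samples agree, so agreement is 0.75.
answer, score = agreement_score(["Paris", "Paris", "Lyon", "paris"])
```

In practice a low score would only prompt further review by an analyst; agreement alone cannot prove an answer is correct.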

But the risks posed by LLMs go well beyond nonsensical explanations, and the research funding arm for US spy agencies is looking to identify threats and vulnerabilities that might not have been fully covered in the OWASP Foundation’s recently published “Top 10 for LLM.”

“Has your organization identified specific LLM threats and vulnerabilities that are not well characterized by prior taxonomies (c.f., “OWASP Top 10 for LLM”)? If so, please provide specific descriptions of each such threat and/or vulnerability and its impacts”

Last week, UC Berkeley professor Stuart Russell warned the Senate Judiciary Committee about a few of the risks in the OWASP top 10 list, including Sensitive Information Disclosure, Overreliance, and Model Theft.

For example, Russell mentioned that you could be giving up sensitive information just through the types of questions you ask, and that the chatbot could then spit back sensitive or proprietary information belonging to a competitor.

“If you’re in a company […] and you want the system to help you with some internal operation, you’re going to be divulging company proprietary information to the chatbot to get it to give you the answers you want,” Russell testified.

“If that information is then available to your competitors by simply asking ChatGPT what’s going on over in that company, this would be terrible,” he added.

If we take what Russell said about divulging company information and apply that to divulging US intelligence information, then we can start to get a better understanding of why IARPA is putting out its current RFI.
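One common mitigation for the disclosure risk Russell describes is to screen prompts before they leave the organization. The sketch below is a minimal, hypothetical filter — the two patterns (an SSN-shaped string and a made-up codename format) are illustrative assumptions; a real deployment would rely on a vetted data-loss-prevention rule set, not two regexes.

```python
import re

# Illustrative patterns only -- stand-ins for a real DLP rule set.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),     # US SSN-shaped strings
    re.compile(r"\bPROJECT-[A-Z0-9]{4,}\b"),  # hypothetical codename format
]

def redact_prompt(prompt: str) -> str:
    """Mask sensitive-looking substrings before a prompt is sent to an LLM."""
    for pattern in SENSITIVE_PATTERNS:
        prompt = pattern.sub("[REDACTED]", prompt)
    return prompt
```

Even with such a filter, the pattern of questions an analyst asks can itself leak intent, which is part of what makes this threat hard to characterize.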

But there could also be potential threats and vulnerabilities that are not yet known.

As former US Secretary of Defense Donald Rumsfeld famously quipped, “There are known knowns. These are things we know that we know. There are known unknowns. That is to say, there are things that we know we don’t know. But there are also unknown unknowns. There are things we don’t know we don’t know.”

So, for the current RFI, IARPA is asking organizations to answer the following questions:

  • Has your organization identified specific LLM threats and vulnerabilities that are not well characterized by prior taxonomies (c.f., “OWASP Top 10 for LLM”)? If so, please provide specific descriptions of each such threat and/or vulnerability and its impacts.
  • Does your organization have a framework for classifying and understanding the range of LLM threats and/or vulnerabilities? If so, please describe this framework, and briefly articulate for each threat and/or vulnerability and its risks.
  • Does your organization have any novel methods to detect or mitigate threats to users posed by LLM vulnerabilities?
  • Does your organization have novel methods to quantify confidence in LLM outputs?
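On the last question — quantifying confidence in LLM outputs — one simple baseline sometimes used in practice is to average the per-token log-probabilities that some model APIs can return alongside a completion. The sketch below is a generic illustration with made-up numbers, not a method proposed in the RFI.

```python
import math

def mean_token_probability(token_logprobs):
    """Geometric-mean token probability for a generated answer.

    Some model APIs can return per-token log-probabilities; averaging them
    yields one simple, imperfect proxy for how confident the model was in
    its own output (values close to 1.0 indicate higher confidence).
    """
    if not token_logprobs:
        raise ValueError("need at least one token log-probability")
    return math.exp(sum(token_logprobs) / len(token_logprobs))

# Hypothetical log-probabilities for a short, fairly confident answer.
confidence = mean_token_probability([-0.05, -0.10, -0.30])
```

A well-known caveat is that models can be confidently wrong, so token-level probability is at best one input to a broader confidence framework — which is presumably why IARPA is asking for novel methods.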

The primary point of contact for the RFI is Dr. Timothy McKinnon, who also manages two other IARPA research programs: HIATUS and BETTER.

  • HIATUS [Human Interpretable Attribution of Text Using Underlying Structure]: seeks to develop novel human-useable AI systems for attributing authorship and protecting author privacy through identification and leveraging of explainable linguistic fingerprints.
  • BETTER [Better Extraction from Text Towards Enhanced Retrieval]: aims to develop a capability to provide personalized information extraction from text to an individual analyst across multiple languages and topics. 

Last year, IARPA announced it was putting together its Rapid Explanation, Analysis and Sourcing ONline (REASON) program “to develop novel systems that automatically generate comments enabling intelligence analysts to substantially improve the evidence and reasoning in their analytic reports.”

Additionally, “REASON is not designed to replace analysts, write complete reports, or to increase their workload. The technology will work within the analyst’s current workflow. 

“It will function in the same manner as an automated grammar checker but with a focus on evidence and reasoning.”

So, in December, IARPA wanted to leverage generative AI to help analysts write intelligence reports, and now in August, the US spy agencies’ research funding arm is looking to see what risks large language models may pose.



Tim Hinchliffe

The Sociable editor Tim Hinchliffe covers tech and society, with perspectives on public and private policies proposed by governments, unelected globalists, think tanks, big tech companies, defense departments, and intelligence agencies. Previously, Tim was a reporter for the Ghanaian Chronicle in West Africa and an editor at Colombia Reports in South America. These days, he is only responsible for articles he writes and publishes in his own name. tim@sociable.co
