The US Intelligence Advanced Research Projects Activity (IARPA) issues a request for information (RFI) to identify potential threats and vulnerabilities that large language models (LLMs) may pose.
While not an official research program yet, IARPA’s “Characterizing Large Language Model Biases, Threats and Vulnerabilities” RFI aims “to elicit frameworks to categorize and characterize vulnerabilities and threats associated with LLM technologies, specifically in the context of their potential use in intelligence analysis.”
Many vulnerabilities and potential threats are already known.
For example, you can ask ChatGPT to summarize or draw inferences on just about any topic, and it will generate an explanation that sounds convincing.
However, those explanations can also be completely false.
As OpenAI describes it, “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers.”
But the risks posed by LLMs go well beyond nonsensical explanations, and the research funding arm for US spy agencies is looking to identify threats and vulnerabilities that might not have been fully covered in the OWASP Foundation’s recently published “Top 10 for LLM.”
Last week, UC Berkeley professor Dr. Stuart Russell warned the Senate Judiciary Committee about a few of the risks in the OWASP top 10 list, including Sensitive Information Disclosure, Overreliance, and Model Theft.
For example, Russell noted that you could be giving up sensitive information simply through the types of questions you ask, and that the chatbot could in turn surface sensitive or proprietary information belonging to a competitor.
“If you’re in a company […] and you want the system to help you with some internal operation, you’re going to be divulging company proprietary information to the chatbot to get it to give you the answers you want,” Russell testified.
“If that information is then available to your competitors by simply asking ChatGPT what’s going on over in that company, this would be terrible,” he added.
If we take what Russell said about divulging company information and apply that to divulging US intelligence information, then we can start to get a better understanding of why IARPA is putting out its current RFI.
But there could also be potential threats and vulnerabilities that are not yet known.
As former US Secretary of Defense Donald Rumsfeld famously quipped, “There are known knowns. These are things we know that we know. There are known unknowns. That is to say, there are things that we know we don’t know. But there are also unknown unknowns. There are things we don’t know we don’t know.”
So, for the current RFI, IARPA is asking responding organizations to address a set of questions about LLM threats, vulnerabilities, and potential mitigations.
The primary point of contact for the RFI is Dr. Timothy McKinnon, who also manages two other IARPA research programs: HIATUS and BETTER.
Last year, IARPA announced it was putting together its Rapid Explanation, Analysis and Sourcing ONline (REASON) program “to develop novel systems that automatically generate comments enabling intelligence analysts to substantially improve the evidence and reasoning in their analytic reports.”
Additionally, “REASON is not designed to replace analysts, write complete reports, or to increase their workload. The technology will work within the analyst’s current workflow. It will function in the same manner as an automated grammar checker but with a focus on evidence and reasoning.”
So, in December, IARPA wanted to leverage generative AI to help analysts write intelligence reports, and now in August, the US spy agencies’ research funding arm is looking to see what risks large language models may pose.