IARPA wants AI to ‘identify crucial overlooked info, automatically generate comments’ for intel reporting

A ChatGPT-like chatbot for US spy agencies?

IARPA is putting together its REASON program that aims to help analysts identify crucial overlooked pieces of information while automatically generating comments to aid their intelligence reporting.

Through its upcoming Rapid Explanation, Analysis and Sourcing ONline (REASON) program, the US Intelligence Advanced Research Projects Activity (IARPA) is looking to develop an AI system that can “help intelligence analysts solve national security puzzles by identifying crucial overlooked pieces of information and showing ways they fit together.”

“REASON will assist and enhance analysts’ work by pointing them to key pieces of evidence beyond what they have already considered and by helping them determine which alternative explanations have the strongest support” — IARPA REASON program

According to the program description, “REASON will develop technology that analysts can use to discover additional relevant evidence (including contrary evidence) and to identify strengths and weaknesses in reasoning.” 

Additionally, “REASON aims to develop novel systems that automatically generate comments enabling intelligence analysts to substantially improve the evidence and reasoning in their analytic reports.”

Performance will be rated on how effectively teams develop technology that helps analysts “discover valuable evidence, identify strengths and weaknesses in reasoning, and produce higher quality reports.”

“REASON aims to develop novel systems that automatically generate comments enabling intelligence analysts to substantially improve the evidence and reasoning in their analytic reports” — IARPA REASON program

While the REASON program description makes no reference to OpenAI’s ChatGPT chatbot, there are some similarities with what IARPA is trying to achieve.

For example, you can ask ChatGPT to summarize or make inferences on just about any given topic, and it can draw on its training data to give you an explanation that sounds convincing.

However, those explanations can also be completely false.

As OpenAI describes it, “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers.”

Even so, one of the goals of IARPA’s REASON program is to point analysts “to key pieces of evidence beyond what they have already considered,” and a chatbot similar to ChatGPT would be able to achieve that particular goal.

REASON “will function in the same manner as an automated grammar checker but with a focus on evidence and reasoning” — IARPA REASON program

In the hands of the intelligence community, and with a lot of additional training and tweaking, this type of AI could point analysts in directions that they might not have considered, and from there they would be able to assess the validity of the AI’s “reasoning” manually.

In the end, IARPA is looking for a tool to aid investigations, not to replace the analyst or to write automatically generated reports on its own.

As IARPA puts it, “REASON is not designed to replace analysts, write complete reports, or to increase their workload. The technology will work within the analyst’s current workflow. 

“It will function in the same manner as an automated grammar checker but with a focus on evidence and reasoning.”

Dr. Steven Rieber

Heading up the REASON program will be Dr. Steven Rieber, who joined IARPA as a program manager in 2014.

Dr. Rieber focuses on areas of scientific research that include forecasting and rational judgment and decision-making.

He has led several IARPA programs including:

  • ACE (Aggregative Contingent Estimation)
  • CREATE (Crowdsourcing Evidence, Argumentation, Thinking and Evaluation)
  • FOCUS (Forecasting Counterfactuals in Uncontrolled Settings)
  • MOSAIC (Multimodal Objective Sensing to Assess Individuals with Context)
  • HFC (Hybrid Forecasting Competition)

Prior to joining IARPA, Dr. Rieber worked at the Office of the Director of National Intelligence (ODNI) Office of Analytic Integrity and Standards, where he served as an analytic methodologist, introducing new methods and training to the IC’s analytic workforce.

A Proposers’ Day meeting will be held on January 11, 2023 to introduce the REASON program to potential proposers and to provide information on technical requirements and program objectives.

To attend, participants must register prior to January 6, 2023.

Tim Hinchliffe

The Sociable editor Tim Hinchliffe covers tech and society, with perspectives on public and private policies proposed by governments, unelected globalists, think tanks, big tech companies, defense departments, and intelligence agencies. Previously, Tim was a reporter for the Ghanaian Chronicle in West Africa and an editor at Colombia Reports in South America. These days, he is only responsible for articles he writes and publishes in his own name. tim@sociable.co
