‘We paid little attention to vulnerabilities in machine learning platforms’: DARPA

“We’ve rushed ahead, paying little attention to vulnerabilities inherent in machine learning platforms – particularly in terms of altering, corrupting or deceiving these systems,” explains a DARPA program manager.

Dr. Hava Siegelmann

“We must ensure machine learning is safe and incapable of being deceived”

Dr. Hava Siegelmann, a program manager in the Defense Advanced Research Projects Agency's (DARPA) Information Innovation Office (I2O), introduced the Guaranteeing AI Robustness against Deception (GARD) program earlier this month. GARD aims to address vulnerabilities in machine learning (ML) platforms and to develop a new generation of defenses against adversarial deception attacks on ML models.

“There is a critical need for ML defense as the technology is increasingly incorporated into some of our most critical infrastructure,” said Siegelmann.

“The GARD program seeks to prevent the chaos that could ensue in the near future when attack methodologies, now in their infancy, have matured to a more destructive level. We must ensure ML is safe and incapable of being deceived,” she added.

GARD will focus on three main objectives:

  1. The development of theoretical foundations for defensible ML and a lexicon of new defense mechanisms based on them
  2. The creation and testing of defensible systems in a diverse range of settings
  3. The construction of a new testbed for characterizing ML defensibility relative to threat scenarios

Through these interdependent program elements, GARD aims to create deception-resistant ML technologies with stringent criteria for evaluating their robustness.

“The kind of broad scenario-based defense we’re looking to generate can be seen, for example, in the immune system, which identifies attacks, wins and remembers the attack to create a more effective response during future engagements,” added Siegelmann.

Read More: ‘AI defense development may gain insight from biological immune systems’: DARPA

The GARD program will initially concentrate on state-of-the-art image-based ML, then progress to video, audio and more complex systems – including multi-sensor and multi-modality variations. It will also seek to address ML capable of making predictions and decisions, and of adapting during its lifetime.
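To make the threat concrete: none of the code below comes from DARPA's announcement. It is a minimal sketch of the kind of image-classifier deception GARD is meant to defend against, using the well-known fast gradient sign method (FGSM) against a toy NumPy logistic-regression model; the weights, input and epsilon value are illustrative placeholders, not a real model or dataset.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy "image" classifier: logistic regression over 64 flattened pixels.
    # The weights are random placeholders standing in for a trained model.
    w = rng.normal(size=64)
    b = 0.0

    def predict(x):
        # Sigmoid probability that the input belongs to class 1.
        return 1.0 / (1.0 + np.exp(-(x @ w + b)))

    def input_gradient(x, y):
        # Gradient of the binary cross-entropy loss with respect to the pixels.
        return (predict(x) - y) * w

    # An illustrative input labeled as class 1 (not real data).
    x_clean = rng.uniform(0.0, 1.0, size=64)
    y_true = 1.0

    # FGSM: nudge every pixel by epsilon in the direction that increases the
    # loss, then clip back to the valid pixel range.
    epsilon = 0.1
    x_adv = np.clip(x_clean + epsilon * np.sign(input_gradient(x_clean, y_true)), 0.0, 1.0)

    print(f"clean prediction:       {predict(x_clean):.3f}")
    print(f"adversarial prediction: {predict(x_adv):.3f}")  # pushed toward the wrong class

The same idea scales to deep networks: because the perturbation follows the loss gradient, a change too small for a person to notice can flip a model's prediction, which is exactly the class of attack GARD's defenses are meant to withstand.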

In a similar vein, DARPA is simultaneously funding research into making machines more trustworthy through the Competency-Aware Machine Learning (CAML) Program, which aims “to develop competence-based trusted machine learning systems whereby an autonomous system can self-assess its task competency and strategy, and express both in a human-understandable form, for a given task under given conditions.”
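That CAML language is abstract, so the sketch below offers one hedged reading of what "self-assessing competency and expressing it in a human-understandable form" could look like in practice: a classifier that reports its confidence and declines to decide when that confidence falls below an operating threshold. This is purely an illustration, not the program's actual approach; the labels, logits and threshold are invented for the example.

    import numpy as np

    def softmax(logits):
        # Convert raw scores into probabilities.
        z = logits - np.max(logits)
        p = np.exp(z)
        return p / p.sum()

    def report_competence(logits, labels, threshold=0.75):
        # Turn model outputs into a human-readable competence statement.
        probs = softmax(logits)
        k = int(np.argmax(probs))
        confidence = float(probs[k])
        if confidence >= threshold:
            return f"I identify this as '{labels[k]}' with {confidence:.0%} confidence."
        return (f"I am not competent to decide here: my best guess is '{labels[k]}' "
                f"at {confidence:.0%}, below my {threshold:.0%} operating threshold.")

    # Illustrative outputs from a hypothetical three-class detector.
    labels = ["vehicle", "building", "unknown object"]
    print(report_competence(np.array([2.1, 1.9, 0.2]), labels))   # low-margin case: abstains
    print(report_competence(np.array([4.0, 0.5, -1.0]), labels))  # high-margin case: decides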

Read More: A program to keep Prometheus out of machine learning systems

Just as Prometheus liberated humankind by defying the gods and bringing it the flame of knowledge, DARPA wants to make sure that machine learning is trustworthy and doesn't free itself and spread like an uncontrollable wildfire.

DARPA wants to make AI a collaborative partner for national defense, but at this early stage the language suggests that DARPA also wants to make sure that machine learning doesn't keep us in the dark about how it functions, why it behaves the way it does, and what it will do next.

Last year, DARPA announced that it was building an Artificial Intelligence Exploration (AIE) program to turn machines into “collaborative partners” for US national defense.

Read More: DARPA wants to make AI ‘collaborative partner’ for national defense

Programs like GARD and CAML aim to further that agenda.

Tim Hinchliffe

The Sociable editor Tim Hinchliffe covers tech and society, with perspectives on public and private policies proposed by governments, unelected globalists, think tanks, big tech companies, defense departments, and intelligence agencies. Previously, Tim was a reporter for the Ghanaian Chronicle in West Africa and an editor at Colombia Reports in South America. These days, he is only responsible for articles he writes and publishes in his own name. tim@sociable.co
