
‘AI defense development may gain insight from biological immune systems’: DARPA

“Novel adversarial AI defense development may gain insight and inspiration from biological systems, such as the immune system and its interactions with bacteria and viruses,” according to a DARPA proposal.

Read More: Nature is intelligent: Pentagon looks to insects for AI biomimicry design

The research funding arm of the Pentagon is worried that “adversaries” could turn AI against its programmers by “poisoning” it into doing the bidding of those it was never meant to serve.

With “poisoning attacks,” attackers deliberately influence the training data to manipulate the results of a predictive model, according to IEEE.
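
As a toy illustration of the idea (a hypothetical sketch for this article, not drawn from DARPA’s materials), the Python snippet below uses scikit-learn: an attacker who can tamper with the training pipeline silently flips a fraction of the training labels before the model is fit, and the resulting classifier performs worse than one trained on clean data.

```python
# Hypothetical label-flipping poisoning sketch (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Model trained on clean data, for comparison.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The "attacker" silently flips 30% of the training labels before training.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
flip = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[flip] = 1 - poisoned[flip]

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned)

print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```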

Read More: DARPA launches AI chemistry program to develop new molecules for military use

To borrow an analogy from nature, it would be like a virus manipulating a host’s immune system into attacking itself, or bacteria altering the host’s environment so they can grow stronger and spread.

“The lack of a comprehensive theoretical understanding of ML vulnerabilities leaves significant exploitable blind spots”

The Defense Advanced Research Projects Agency (DARPA) will hold a Proposers Day on February 6, 2019, for a program aimed at developing a new generation of defenses against adversarial deception attacks on machine learning (ML) models, such as poisoning attacks and inference attacks — in which “malicious users infer sensitive information from complex databases at a high level,” endangering the integrity of an entire database, according to Techopedia.
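
A crude way to picture an inference attack (again a hypothetical sketch, not code from the GARD program) is membership inference: an overfit model tends to be more confident on records it was trained on, so an attacker can use prediction confidence to guess whether a given record was part of the training data.

```python
# Hypothetical membership-inference signal (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=1)
X_in, X_out, y_in, y_out = train_test_split(X, y, test_size=0.5, random_state=1)

# Deliberately overfit model: trees grown to purity memorize the training set.
model = RandomForestClassifier(random_state=1).fit(X_in, y_in)

conf_in = model.predict_proba(X_in).max(axis=1)    # confidence on training members
conf_out = model.predict_proba(X_out).max(axis=1)  # confidence on non-members

print("mean confidence on members:    ", conf_in.mean())
print("mean confidence on non-members:", conf_out.mean())
```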

DARPA’s Guaranteeing AI Robustness against Deception (GARD) program will seek game-changing research ideas to develop theory, defenses, and testbeds leading to robust, deception-resistant ML models and algorithms.

GARD seeks to push the state of the art in ML defenses beyond classification, by defending via detection, location, and prediction, and beyond the standard modality of digital images, by developing defenses against physical-world attacks in a variety of pertinent modalities, such as video and audio.

GARD has three objectives:

  1. Develop theoretical foundations for defensible ML. These foundations will include metrics for measuring ML vulnerability and identifying ML properties that enhance system robustness.
  2. Create, and empirically test, principled defense algorithms in diverse settings.
  3. Construct a scenario-based evaluation framework to characterize defenses under multiple objectives and threat models, such as the physical world and multimodal settings.

“The field now appears increasingly pessimistic, sensing that developing effective ML defenses may prove significantly more difficult than designing new attacks”

According to DARPA, “the growing sophistication and ubiquity of ML components in advanced systems dramatically increases capabilities, but as a byproduct, increases opportunities for new, potentially unidentified vulnerabilities.

“The acceleration in ML attack capabilities has promoted an arms race: as defenses are developed to address new attack strategies and vulnerabilities, improved attack methodologies capable of bypassing the defense algorithms are created.

Read More: DARPA wants to make AI ‘collaborative partner’ for national defense

“The field now appears increasingly pessimistic, sensing that developing effective ML defenses may prove significantly more difficult than designing new attacks, leaving advanced systems vulnerable and exposed.

“Further, the lack of a comprehensive theoretical understanding of ML vulnerabilities in the ‘Adversarial Examples’ field leaves significant exploitable blind spots in advanced systems and limits efforts to develop effective defenses.”
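
For readers unfamiliar with the “adversarial examples” DARPA refers to, the sketch below shows the basic mechanism on a simple linear model: a small, deliberately chosen perturbation to each input is enough to drive down the model’s accuracy. The fast-gradient-sign-style perturbation rule and the budget `eps` are assumptions made for illustration, not anything specified by DARPA or GARD.

```python
# Hypothetical evasion ("adversarial example") sketch on a linear model (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=20, random_state=2)
model = LogisticRegression(max_iter=1000).fit(X, y)

w = model.coef_[0]   # gradient of the logit with respect to the input
eps = 0.5            # perturbation budget (assumed for illustration)

# Push each input in the direction that increases the loss for its true label.
p = model.predict_proba(X)[:, 1]
X_adv = X + eps * np.sign(np.outer(p - y, w))

print("accuracy on clean inputs:    ", model.score(X, y))
print("accuracy on perturbed inputs:", model.score(X_adv, y))
```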

Tim Hinchliffe

The Sociable editor Tim Hinchliffe covers tech and society, with perspectives on public and private policies proposed by governments, unelected globalists, think tanks, big tech companies, defense departments, and intelligence agencies. Previously, Tim was a reporter for the Ghanaian Chronicle in West Africa and an editor at Colombia Reports in South America. These days, he is only responsible for articles he writes and publishes in his own name. tim@sociable.co
