US Army is putting all its AI eggs in one basket: AI in every system

Creating an AI ecosystem in the military requires trust, which AI has yet to earn

The fact that AI can behave unexpectedly isn’t preventing the US Army from putting all its AI eggs in one basket, with the technology spanning every battlefield system.

Making AI trustworthy is the goal of every developer working on the technology, yet so far AI has not proven itself fully deserving of our trust.

On Monday, however, Army AI Task Force (AAITF) Director Brig. Gen. Matthew Easley said that AI “needs to span every battlefield system that we have, from our maneuver systems to our fire control systems to our sustainment systems to our soldier systems to our human resource systems and our enterprise systems.”

Read More: ‘AI needs to span every battle system we have’: US Army AI Task Force director

If the Army is that dedicated to making AI prevalent in every battlefield system, it must believe that it will be able to trust and control the AI, something developers are still struggling to achieve.

For example, just last month, OpenAI announced that it had created an AI that broke the simulated laws of physics to win at hide-and-seek.

Taking what was available in its simulated environment, the AI began to exhibit “unexpected and surprising behaviors,” “ultimately using tools in the environment to break our simulated physics,” according to the team.

Now imagine if an AI were to exhibit “unexpected and surprising behaviors” within a military setting. What could possibly go wrong?

The AAITF director said, “We see AI as an enabling technology for all Army modernization priorities — from future vertical lift to long range precision fires to soldier lethality,” which raises the question: has the Army already solved the trust issue with AI and simply not told us, or is that something it’s still working on?

We do have evidence that the military has been working on trustworthiness, through projects carried out by the Defense Advanced Research Projects Agency (DARPA).

Launched in February, DARPA’s Competency-Aware Machine Learning (CAML) program aims “to develop competence-based trusted machine learning systems whereby an autonomous system can self-assess its task competency and strategy, and express both in a human-understandable form, for a given task under given conditions.”

DARPA acknowledged that “the machines’ lack of awareness of their own competence and their inability to communicate it to their human partners reduces trust and undermines team effectiveness.”
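To make the idea concrete, here is a minimal sketch of what competence self-assessment might look like in practice. This is purely illustrative and is not DARPA’s CAML code; the model, dataset, and the 0.90 cutoff are all assumptions chosen for the example. The system reports a prediction only when its own confidence estimate clears a threshold, and otherwise defers to a human partner in plain language:

```python
# Hypothetical illustration of a "competency-aware" predictor:
# the model states its confidence and defers to a human when
# that confidence falls below an arbitrary threshold.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    *load_digits(return_X_y=True), random_state=0
)
model = LogisticRegression(max_iter=5000).fit(X_train, y_train)

COMPETENCE_THRESHOLD = 0.90  # arbitrary cutoff for this sketch

for probs in model.predict_proba(X_test[:5]):
    confidence = probs.max()
    if confidence >= COMPETENCE_THRESHOLD:
        print(f"Prediction: {probs.argmax()} (confidence {confidence:.2f})")
    else:
        print(f"Deferring to human: confidence {confidence:.2f} is below threshold")
```

A real competence-aware system would go well beyond a probability cutoff, but the core contract is the same: the machine communicates what it can and cannot do, in a form its human teammates can act on.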

In other words, the military is aware that AI can act unpredictably, and it wants to make sure that Prometheus isn’t let loose in machine learning systems.

Read More: Keeping Prometheus out of machine learning systems

Just as Prometheus liberated humankind by defying the gods to bring humanity the flame of knowledge, DARPA wants to make sure that machine learning is trustworthy and doesn’t free itself and spread like an uncontrollable wildfire.

Last year, DARPA announced that it was building an Artificial Intelligence Exploration (AIE) program to turn machines into “collaborative partners” for US national defense.

When DARPA launched the Guaranteeing AI Robustness against Deception (GARD) project, Program Manager Dr. Hava Siegelmann admitted, “We’ve rushed ahead, paying little attention to vulnerabilities inherent in machine learning platforms – particularly in terms of altering, corrupting or deceiving these systems.”

“We must ensure machine learning is safe and incapable of being deceived,” she added.

The Army has gone all-in on AI, essentially putting all of its eggs in one basket in its mission to develop an “AI ecosystem for use within the Army,” which will encompass just about every aspect of battlefield systems.

With so much power being centralized and consolidated in AI, surely the Army has figured out a way to make it trustworthy, hasn’t it?

Tim Hinchliffe

The Sociable editor Tim Hinchliffe covers tech and society, with perspectives on public and private policies proposed by governments, unelected globalists, think tanks, big tech companies, defense departments, and intelligence agencies. Previously, Tim was a reporter for the Ghanaian Chronicle in West Africa and an editor at Colombia Reports in South America. These days, he is only responsible for articles he writes and publishes in his own name. tim@sociable.co
