
Pentagon formally adopts 5 ethical AI principles

The Pentagon has announced the formal adoption of five ethical AI principles that will guide its artificial intelligence strategy for national defense.

“Ethics remain at the forefront of everything the department does with AI technology” — DoD CIO Dana Deasy

Department of Defense (DoD) Chief Information Officer Dana Deasy, along with DoD Joint Artificial Intelligence Center (JAIC) Director Air Force Lt. Gen. John N.T. Shanahan, announced the formal adoption of the Pentagon’s AI ethics principles during a live event on Monday.

The AI ethical principles the DoD adopted follow almost verbatim the recommendations the Defense Innovation Board made last November.

  1. Responsible: DoD personnel will exercise appropriate levels of judgment and care, while remaining responsible for the development, deployment, and use of AI capabilities.
  2. Equitable: The Department will take deliberate steps to minimize unintended bias in AI capabilities.
  3. Traceable: The Department’s AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including with transparent and auditable methodologies, data sources, and design procedure and documentation.
  4. Reliable: The Department’s AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire life-cycles.
  5. Governable: The Department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.

“Our teams will use these principles to guide the testing, fielding and scaling of AI-enabled capabilities across the DoD”

According to the Pentagon, “These principles will apply to both combat and non-combat functions and assist the US military in upholding legal, ethical and policy commitments in the field of AI.”

“Ethics remain at the forefront of everything the department does with AI technology, and our teams will use these principles to guide the testing, fielding and scaling of AI-enabled capabilities across the DOD,” said Deasy.

In July 2018, the DoD formed the Joint Artificial Intelligence Center in response to ongoing advances in artificial intelligence that “will change society and, ultimately, the character of war.”

The JAIC was established to enhance the ability of DoD components to execute new AI initiatives, experiment, and learn within a common framework.

Five Pillars of the DoD AI Strategy:

  • Deliver AI-enabled capabilities that address key missions
  • Scale AI’s impact across DoD through a common foundation that enables decentralized development and experimentation
  • Cultivate a leading AI workforce
  • Engage with commercial, academic, and international allies and partners
  • Lead in military ethics and AI safety

Last year, Deasy spoke before lawmakers about the role of the JAIC and what it means to be successful in AI.

“You need two things in AI to be successful,” he said at the time, adding, “you need a maniacal focus on the here and now of operationalizing getting things up-and-running, […] but you also need an intense focus on where the future is going, where the science is going, and you need a place to take that science.”

Last year, the Organization for Economic Co-operation and Development (OECD) unveiled the first intergovernmental standard for AI policies, which 36 of the organization’s member countries signed, along with six non-member countries: Argentina, Brazil, Colombia, Costa Rica, Peru, and Romania.

The document, the Recommendation of the Council on Artificial Intelligence, aims to help shape a global policy ecosystem that leverages the benefits of AI while keeping its ethical risks in check.

The Recommendation identifies five complementary values-based principles for the responsible stewardship of trustworthy AI:

  1. AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being.
  2. AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards – for example, enabling human intervention where necessary – to ensure a fair and just society.
  3. There should be transparency and responsible disclosure around AI systems to ensure that people understand when they are engaging with them and can challenge outcomes.
  4. AI systems must function in a robust, secure and safe way throughout their lifetimes, and potential risks should be continually assessed and managed.
  5. Organizations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the above principles.

Tim Hinchliffe

The Sociable editor Tim Hinchliffe covers tech and society, with perspectives on public and private policies proposed by governments, unelected globalists, think tanks, big tech companies, defense departments, and intelligence agencies. Previously, Tim was a reporter for the Ghanaian Chronicle in West Africa and an editor at Colombia Reports in South America. These days, he is only responsible for articles he writes and publishes in his own name. tim@sociable.co
