White House report blends ethical AI practice with military applications

October 18, 2016

The White House releases its report on the future of Artificial Intelligence with calls for ethical AI practices and, ironically, enhanced weapons systems.

The White House is dead-set on bringing Artificial Intelligence applications to almost every facet of government and industry, according to a recent report by the Obama administration.

The report, “Preparing for the Future of Artificial Intelligence,” outlines the US Government’s recommendations on using AI ethically for public good while simultaneously calling for the development of a “single, government-wide policy” on autonomous and semi-autonomous weapons.

Ethical AI applications blend with the Military Industrial Complex

What is striking about this report is the contrast between its proposed ethical practices in AI, starting with children in the education system, and its call for the United States to be the global leader that monitors the rest of the world's progress, right up to the military industrial complex.

Read More: CIA ‘Siren Servers’ can predict social uprisings 3-5 days in advance

According to the report, “The U.S. Government has several roles to play [both foreign and domestic]. It can convene conversations about important issues and help to set the agenda for public debate. It can monitor the safety and fairness of applications as they develop, and adapt regulatory frameworks to encourage innovation while protecting the public.”

While investments on AI “have already begun reaping major benefits for the public in fields as diverse as health care, transportation, the environment, criminal justice, and economic inclusion,” those investments have also been used for military purposes.

The report claims that incorporating autonomy into US weapons technologies has led to safer, more humane military operations.

“The United States has incorporated autonomy in certain weapon systems for decades, allowing for greater precision in the use of weapons and safer, more humane military operations. Nonetheless, moving away from direct human control of weapon systems involves some risks and can raise legal and ethical questions.”

Should this be a big red flag? How can a weapon that is designed to kill be made safer? By killing fewer people? By killing only specific people? If the latest airstrikes in Syria are any evidence, so-called “collateral damage,” or, as it should properly be called, the loss of human life and dignity, is the real tragedy of war.

Read More: #PrayForSyria trending worldwide after ‘US-led airstrike kills 60 civilians’

The fact that civilians are the real victims in any given war leads one to believe that maybe the US Government has it backwards. Maybe we need AI to eliminate the enormous egos and the racial, ethnic, and religious intolerance that have infiltrated the government and ignited fear and discontent among its citizens.

So, while the report proposes working with foreign governments to monitor milestones in AI, the US government is most likely monitoring to make sure that the US maintains its autonomy, at least when it comes to weapons.

But then again, AI might decide that we are a hopeless species and would want to wipe us all out anyway.

Ethical, open AI for public good vs AI for private and government control

“At present, the majority of basic research in AI is conducted by academics and by commercial labs that regularly announce their findings and publish them in the research literature. If competition drives commercial labs towards increased secrecy, monitoring of progress may become more difficult, and public concern may increase,” the report warns.

This is very similar to what is going on between the Partnership on AI and Elon Musk’s OpenAI. Both groups claim to serve the public good, but their methods and responsibilities to stakeholders differ.

OpenAI claims to be open source and available to the public, while the Partnership on AI is a consolidation of power among Google/DeepMind, Facebook, Microsoft, IBM, and Amazon, aimed at becoming the self-proclaimed moral authority on the applications of AI.

Read More: Partnership on AI vs OpenAI: Consolidation of Power vs Open Source

The Partnership on AI claims not to be involved in policy making, yet it has invited policymakers to its board of directors. It will be interesting to see which side the US government takes: whether it will recognize the Partnership as the ultimate authority and base its policymaking on the tech-giant conglomeration’s findings, or turn to other groups like OpenAI.

At any rate, the report continues, “As the technology of AI continues to develop, practitioners must ensure that AI-enabled systems are governable; that they are open, transparent, and understandable; that they can work effectively with people; and that their operation will remain consistent with human values and aspirations.”

Again, the government wants open transparency, but then wants to take all that openness and regulate and govern it once it figures out what the heck it is doing with it.

US Govt Recommendations and Calls to Action across industries

The White House recommended 23 calls to action on the part of the government to address AI research, development, and regulation across multiple industries and sectors of government.

Some of the areas where the US government wants to develop AI include:

  • Private and Public Institutions
  • Government Agencies
  • Transportation and the Auto Industry, including automated vehicles
  • Air Traffic Control
  • Monitoring the state of AI in foreign countries
  • Impacts of AI on the workforce
  • AI and automation on the US job market
  • Ethics of AI to be taught in schools and universities
  • Safety professionals and engineering
  • Cybersecurity
  • Economy
  • Weapons systems

As you can see, that’s a pretty wide spectrum for the US government to implement ethical AI practices across, and there are still many other areas of research and regulation to explore.

While most recommendations seem like a step in an ethical direction, there is a power vacuum when it comes to which government or corporate organizations will become the ultimate moral, technological, and regulatory authorities on ethical Artificial Intelligence practices.
