
National AI Research Institutes launches with $200M in grants for societal benefit

Establishing trust with our alchemical, algorithmic associates

The National Artificial Intelligence Research Institutes program launches today with $200 million in grants, to be awarded over the coming years for research that benefits society.


Building trustworthy AI sits at the top of the funding priorities for the new program, which is led by the National Science Foundation (NSF) and supported by a host of other government agencies and their many acronyms.

While the Defense Advanced Research Projects Agency (DARPA) has been at work on a similar program for national defense, the new NSF program will focus on innovation in six key areas that would affect society at large:

  1. Trustworthy AI
  2. Foundations of Machine Learning
  3. AI-Driven Innovation in Agriculture and the Food System
  4. AI-Augmented Learning
  5. AI for Accelerating Molecular Synthesis and Manufacturing
  6. AI for Discovery in Physics

Together, these six areas could have a profound impact on our lives in the future, from the management of our food supply to curing and preventing diseases to making giant leaps in our understanding of our place in the cosmos.

But first, we have to build trustworthy relationships with our alchemical, algorithmic associates.

Establishing Trust With Our Alchemical, Algorithmic Associates

“Sustained R&D investments are needed to advance trust in AI systems to ensure they meet society’s needs and adequately address requirements for robustness, fairness, explainability, and security,” according to the National Artificial Intelligence Research and Development Strategic Plan: 2019 Update.


NSF Director France Córdova stated, “Advances in AI are progressing rapidly and demonstrating the potential to transform our lives.

“This landmark investment will further AI research and workforce development, allowing us to accelerate the development of transformational technologies and catalyze markets of the future.”

With the support of the NSF, the USDA's National Institute of Food and Agriculture (NIFA), the DHS Science and Technology Directorate (S&T), the DOT's Federal Highway Administration (FHWA), and the VA, the program's reach is, shall we say, quite broad, and it is likely to trickle into just about every facet of society.

Read More: White House AI strategy focuses on jobs, innovation, civil liberties

We often read about “trust” in news related to AI, but we could just as easily be talking about control. Our brightest minds have been hard at work creating incredible advances in technology at such speed that not even the developers can predict what results their algorithms are capable of producing.

The last thing anybody wants is Prometheus let loose in machine learning systems, especially when the primordial flame of knowledge exhibits “unexpected behavior,” as was the case last month when OpenAI announced that its AI had figured out how to win at hide and seek by breaking the simulated laws of physics.

Read More: AI breaks simulated laws of physics to win at hide and seek

At least we know we can't trust or control everything we program, and that is very good news for people who worry about a Terminator-style Skynet scenario. We know it could happen, so we're not going to let it. At least we think we won't… for now.

For now, we are trying to make the AI show its work so we can figure out what it's doing and why. There have been instances, however, when these algorithms made up their own language and started communicating in ways that no human could understand.

“Truly trustworthy AI requires explainable AI, especially as AI systems grow in scale and complexity; this requires a comprehensive understanding of the AI system by the human user and the human designer,” the 2019 Update adds.

We haven’t reached a Terminator scenario yet, but the technology exists. Luckily, our researchers understand that they can't predict or control what the AI does, so there are countless research programs looking to solve this, with the discussion of ethics being a particularly hot topic these days.

Tim Hinchliffe

The Sociable editor Tim Hinchliffe covers tech and society, with perspectives on public and private policies proposed by governments, unelected globalists, think tanks, big tech companies, defense departments, and intelligence agencies. Previously, Tim was a reporter for the Ghanaian Chronicle in West Africa and an editor at Colombia Reports in South America. These days, he is only responsible for articles he writes and publishes in his own name. tim@sociable.co
