National AI Research Institutes launches with $200M in grants for societal benefit

Establishing trust with our alchemical, algorithmic associates

The National Artificial Intelligence Research Institutes program launches today, with $200 million in grants to be awarded over the coming years for research that benefits society.

“Sustained R&D investments are needed to advance trust in AI systems to ensure they meet society’s needs”

Building trustworthy AI is at the top of the funding priorities for the new program led by the National Science Foundation (NSF) and supported by a whole bunch of government agencies and their many acronyms.

While the Defense Advanced Research Projects Agency (DARPA) has been at work on a similar program for national defense, the new NSF program will focus on innovation in six key areas that would affect society at large:

  1. Trustworthy AI
  2. Foundations of Machine Learning
  3. AI-Driven Innovation in Agriculture and the Food System
  4. AI-Augmented Learning
  5. AI for Accelerating Molecular Synthesis and Manufacturing
  6. AI for Discovery in Physics

Together, these six areas could have a profound impact on our lives in the future — from the management of our food supply to curing and preventing diseases to making giant leaps in our understanding of our place in the cosmos.

But first, we have to be able to build trustworthy relationships with our alchemical, algorithmic associates.

Establishing Trust With Our Alchemical, Algorithmic Associates

“Sustained R&D investments are needed to advance trust in AI systems to ensure they meet society’s needs and adequately address requirements for robustness, fairness, explainability, and security,” according to the National Artificial Intelligence Research and Development Strategic Plan: 2019 Update.

NSF Director France Cordova stated, “Advances in AI are progressing rapidly and demonstrating the potential to transform our lives.

“This landmark investment will further AI research and workforce development, allowing us to accelerate the development of transformational technologies and catalyze markets of the future.”

With the support of the NSF, USDA, NIFA, DHS, S&T, DOT, FHWA, and the VA, the program’s reach is, shall we say, quite broad, and it is likely to trickle into just about every facet of society.

Read More: White House AI strategy focuses on jobs, innovation, civil liberties

We often read about “trust” in news related to AI, but we could just as easily be talking about control. Our brightest minds have been hard at work creating incredible advances in technology at speeds so fast that not even the developers can predict what results their algorithms are capable of producing.

The last thing anybody wants is Prometheus being let loose in machine learning systems, especially when the primordial flame of knowledge exhibits “unexpected behavior” as was the case last month, when OpenAI announced that its AI had figured out how to win at hide and seek by breaking the simulated laws of physics.

Read More: AI breaks simulated laws of physics to win at hide and seek

At least we know we can’t trust or control everything we program, and that is very good news for people who worry about a Terminator-style Skynet scenario. We know this could happen, so we’re not going to let it. At least we think we won’t… for now.

For now, we are trying to make the AI show its work, so we can try to figure out what it’s doing and why. There have been instances, however, when these algorithms made up their own language and started communicating in ways that no human could understand.

“Truly trustworthy AI requires explainable AI, especially as AI systems grow in scale and complexity; this requires a comprehensive understanding of the AI system by the human user and the human designer,” the 2019 Update adds.

We haven’t reached a Terminator scenario yet, but the technology exists. Luckily, our researchers understand that they can’t predict or control what the AI does, and so there are countless research programs looking to solve this, with the discussion of ethics being a particularly hot topic these days.

Tim Hinchliffe

The Sociable editor Tim Hinchliffe covers tech and society, with perspectives on public and private policies proposed by governments, unelected globalists, think tanks, big tech companies, defense departments, and intelligence agencies. Previously, Tim was a reporter for the Ghanaian Chronicle in West Africa and an editor at Colombia Reports in South America. These days, he is only responsible for articles he writes and publishes in his own name. tim@sociable.co
