National AI Research Institutes launches with $200M in grants for societal benefit

Establishing trust with our alchemical, algorithmic associates

The National Artificial Intelligence Research Institutes program launches today, funding $200 million in grants over the coming years to benefit society.

Building trustworthy AI sits at the top of the funding priorities for the new program, which is led by the National Science Foundation (NSF) and supported by a long list of other government agencies and their many acronyms.

While the Defense Advanced Research Projects Agency (DARPA) has been at work on a similar program for national defense, the new NSF program will focus on innovation in six key areas that will affect society at large:

  1. Trustworthy AI
  2. Foundations of Machine Learning
  3. AI-Driven Innovation in Agriculture and the Food System
  4. AI-Augmented Learning
  5. AI for Accelerating Molecular Synthesis and Manufacturing
  6. AI for Discovery in Physics

Together, these six areas could have a profound impact on our lives in the future, from the management of our food supply to curing and preventing diseases and making giant leaps in our understanding of our place in the cosmos.

But first, we have to be able to build trustworthy relationships with our alchemical, algorithmic associates.

Establishing Trust With Our Alchemical, Algorithmic Associates

“Sustained R&D investments are needed to advance trust in AI systems to ensure they meet society’s needs and adequately address requirements for robustness, fairness, explainability, and security,” according to the National Artificial Intelligence Research and Development Strategic Plan: 2019 Update.

NSF Director France Córdova stated, “Advances in AI are progressing rapidly and demonstrating the potential to transform our lives.

“This landmark investment will further AI research and workforce development, allowing us to accelerate the development of transformational technologies and catalyze markets of the future.”

With the support of the NSF, USDA, NIFA, DHS, S&T, DOT, FHWA, and the VA, the program’s reach is quite broad, and it is likely to trickle into just about every facet of society.

Read More: White House AI strategy focuses on jobs, innovation, civil liberties

We often read about “trust” in news related to AI, but we could just as easily be talking about control. Our brightest minds have been hard at work creating incredible advances in technology at speeds so fast that not even the developers can predict what results their algorithms are capable of producing.

The last thing anybody wants is a Prometheus let loose in machine learning systems, especially when the primordial flame of knowledge exhibits “unexpected behavior,” as was the case last month when OpenAI announced that its AI had figured out how to win at hide and seek by breaking the simulated laws of physics.

Read More: AI breaks simulated laws of physics to win at hide and seek

At least we know we can’t trust or control everything we program, and that is very good news for people who worry about a Terminator-style Skynet scenario. We know it could happen, so we’re not going to let it. At least, we think we won’t… for now.

For now, we are trying to make AI show its work so we can figure out what it’s doing and why. There have been instances, however, when algorithms made up their own language and began communicating in ways that no human could understand.

“Truly trustworthy AI requires explainable AI, especially as AI systems grow in scale and complexity; this requires a comprehensive understanding of the AI system by the human user and the human designer,” the 2019 Update adds.

We haven’t reached a Terminator scenario yet, but the technology exists. Luckily, our researchers understand that they can’t predict or control everything the AI does, so there are countless research programs looking to solve this, with the discussion of ethics being a particularly hot topic these days.

Tim Hinchliffe

The Sociable editor Tim Hinchliffe covers tech and society, with perspectives on public and private policies proposed by governments, unelected globalists, think tanks, big tech companies, defense departments, and intelligence agencies. Previously, Tim was a reporter for the Ghanaian Chronicle in West Africa and an editor at Colombia Reports in South America. These days, he is only responsible for articles he writes and publishes in his own name. tim@sociable.co
