National AI Research Institutes launches with $200M in grants for societal benefit

Establishing trust with our alchemical, algorithmic associates

The National Artificial Intelligence Research Institutes program launches today and will fund $200 million in grants over the coming years, aimed at benefiting society.

“Sustained R&D investments are needed to advance trust in AI systems to ensure they meet society’s needs”

Building trustworthy AI sits at the top of the funding priorities for the new program, which is led by the National Science Foundation (NSF) and supported by a host of other government agencies and their many acronyms.

While the Defense Advanced Research Projects Agency (DARPA) has been at work on a similar program for national defense, the new NSF program will focus on innovation in six key areas that affect society at large:

  1. Trustworthy AI
  2. Foundations of Machine Learning
  3. AI-Driven Innovation in Agriculture and the Food System
  4. AI-Augmented Learning
  5. AI for Accelerating Molecular Synthesis and Manufacturing
  6. AI for Discovery in Physics

Together, these six areas could have a profound impact on our lives in the future, from the management of our food supply to curing and preventing diseases to making giant leaps in our understanding of our place in the cosmos.

But first, we have to be able to build trustworthy relationships with our alchemical, algorithmic associates.

Establishing Trust With Our Alchemical, Algorithmic Associates

“Sustained R&D investments are needed to advance trust in AI systems to ensure they meet society’s needs and adequately address requirements for robustness, fairness, explainability, and security,” according to the National Artificial Intelligence Research and Development Strategic Plan: 2019 Update.

NSF Director France Cordova stated, “Advances in AI are progressing rapidly and demonstrating the potential to transform our lives.

“This landmark investment will further AI research and workforce development, allowing us to accelerate the development of transformational technologies and catalyze markets of the future.”

With the support of the NSF, USDA, NIFA, DHS S&T, DOT, FHWA, and the VA, the program's reach is, shall we say, quite broad, and it is likely to trickle into just about every facet of society.

Read More: White House AI strategy focuses on jobs, innovation, civil liberties

We often read about "trust" in news related to AI, but we could just as easily be talking about control. Our brightest minds have been hard at work creating incredible advances in technology at speeds so fast that not even the developers can predict what results their algorithms are capable of producing.

The last thing anybody wants is Prometheus being let loose in machine learning systems, especially when the primordial flame of knowledge exhibits “unexpected behavior” as was the case last month, when OpenAI announced that its AI had figured out how to win at hide and seek by breaking the simulated laws of physics.

Read More: AI breaks simulated laws of physics to win at hide and seek

At least we know we can't trust or control everything we program, which is very good news for people who worry about a Terminator Skynet scenario. We know this could happen, so we're not going to let it. At least we think we won't… for now.

For now, we are trying to make the AI show its work so we can try to figure out what it's doing and why. There have been instances, however, when these algorithms made up their own language and started communicating in ways that no human could understand.

“Truly trustworthy AI requires explainable AI, especially as AI systems grow in scale and complexity; this requires a comprehensive understanding of the AI system by the human user and the human designer,” the 2019 Update adds.

We haven't reached a Terminator scenario yet, but the technology exists. Luckily, our researchers understand that they can't predict or control what the AI does, so there are countless research programs looking to solve this, with the discussion of ethics being a particularly hot topic these days.

Tim Hinchliffe

The Sociable editor Tim Hinchliffe covers tech and society, with perspectives on public and private policies proposed by governments, unelected globalists, think tanks, big tech companies, defense departments, and intelligence agencies. Previously, Tim was a reporter for the Ghanaian Chronicle in West Africa and an editor at Colombia Reports in South America. These days, he is only responsible for articles he writes and publishes in his own name. tim@sociable.co
