National AI Research Institutes launches with $200M in grants for societal benefit
Establishing trust with our alchemical, algorithmic associates
The National Artificial Intelligence Research Institutes program launches today, set to fund $200 million in grants over the coming years toward projects that benefit society.
“Sustained R&D investments are needed to advance trust in AI systems to ensure they meet society’s needs”
Building trustworthy AI is at the top of the funding priorities for the new program led by the National Science Foundation (NSF) and supported by a whole bunch of government agencies and their many acronyms.
While the Defense Advanced Research Projects Agency (DARPA) has been at work on a similar program for national defense, the new NSF program will focus on innovation in six key areas that would affect society at large:
- Trustworthy AI
- Foundations of Machine Learning
- AI-Driven Innovation in Agriculture and the Food System
- AI-Augmented Learning
- AI for Accelerating Molecular Synthesis and Manufacturing
- AI for Discovery in Physics
Together, these six areas could have a profound impact on our lives in the future — from managing our food supply to curing and preventing diseases to making giant leaps in our understanding of our place in the cosmos.
But first, we have to be able to build trustworthy relationships with our alchemical, algorithmic associates.
Establishing Trust With Our Alchemical, Algorithmic Associates
“Sustained R&D investments are needed to advance trust in AI systems to ensure they meet society’s needs and adequately address requirements for robustness, fairness, explainability, and security,” according to the National Artificial Intelligence Research and Development Strategic Plan: 2019 Update.
NSF Director France Córdova stated, “Advances in AI are progressing rapidly and demonstrating the potential to transform our lives.
“This landmark investment will further AI research and workforce development, allowing us to accelerate the development of transformational technologies and catalyze markets of the future.”
With the support of the NSF, USDA, NIFA, DHS, S&T, DOT, FHWA, and the VA, the program’s reach is, shall we say, quite broad, and it is likely to trickle into just about every facet of society.
We often read about “trust” in news related to AI, but we could just as easily be talking about control. Our brightest minds have been hard at work creating incredible advances in technology at such speed that not even the developers can predict what results their algorithms are capable of producing.
The last thing anybody wants is a Prometheus let loose in machine learning systems, especially when the primordial flame of knowledge exhibits “unexpected behavior,” as was the case last month when OpenAI announced that its AI had figured out how to win at hide-and-seek by breaking the simulated laws of physics.
At least we know we can’t trust or control everything we program, and that is very good news for people who worry about a Terminator-style Skynet scenario. We know such a thing could happen, so we’re not going to let it. At least we think we won’t… for now.
For now, we are trying to make the AI show its work, so we can figure out what it’s doing and why. There have been instances, however, when these algorithms made up their own language and started communicating in ways that no human could understand.
“Truly trustworthy AI requires explainable AI, especially as AI systems grow in scale and complexity; this requires a comprehensive understanding of the AI system by the human user and the human designer,” the 2019 Update adds.
We haven’t reached a Terminator scenario yet, but the building blocks exist. Luckily, our researchers understand that they can’t predict or control what the AI does, and so countless research programs are looking to solve this, with the discussion of AI ethics a particularly hot topic these days.