
AI governance and the future of humanity with the Rockefeller Foundation’s senior VP of innovation

An interview with the Rockefeller Foundation’s Senior VP of innovation on society’s future with AI

Zia Khan, Senior VP of Innovation at the Rockefeller Foundation

As the progenitors of artificial intelligence, how we care for and nurture this paradigm-shifting technology will determine how it grows up alongside humanity.

There are many paths ahead for AI and society, and depending on which ones we follow, we may find ourselves on a road to peace and prosperity or one toward a dark dystopia, with several gray areas in between.


“We need to now create a new institution that can continue being the gardener for AI because AI is going to leave home soon, and we hope it becomes a productive member of society” — Zia Khan

Zia Khan, Senior VP of Innovation at the Rockefeller Foundation, tells The Sociable that AI will be deeply integrated into the entire human experience, and that how we choose to govern it will determine our future alongside it.

While the Bretton Woods agreements gave birth to the rule-making institutions of the World Bank and International Monetary Fund, the Rockefeller Foundation is looking “to develop a practical rule-making Bretton Woods-inspired framework to govern AI.”

In October, the Foundation brought together some of the brightest technologists, economists, philosophers, and thinkers, who came away inspired to distill their discussions into a single report of ideas and calls to action: "AI+1: Shaping Our Integrated Future."

“The conversation wasn’t always easy,” said Khan, “but at the core, it was a fantastic conversation, and the area we landed on was the need for governance for AI.”

If left unchecked, AI could end up governed by a select few with their own agendas, or the technology itself could assume ever more autonomy on the path toward artificial superintelligence. Who governs AI, how they govern, and on whose authority they do so are all serious questions facing humanity's future with this game-changing technology.

“AI is a teenager who is about to leave home […] The teenager is starting to express its personality now” — Zia Khan

I put a question to Khan: if he could personify AI as a child and humans as its parents, what stage of life would AI be in right now? He indulged.

“If I were to guess, I would say AI is a teenager who is about to leave home,” he said.

“When it was in the lab, the scientists were more or less providing for AI, feeding it and caring for it.

“The teenager is starting to express its personality now — it’s a little rebellious. We saw some applications that weren’t great. Some issues are coming like facial recognition that we know we need to deal with — but it’s about to leave home, in my view.

“I think it’s about to have this explosive proliferation into society,” the Rockefeller senior VP added.

AI may be likened to a teenager right now, but unlike a human's, its growth will be exponential, unfolding at lightning speed.

“What’s really interesting about technology is that we learn more about humans as we understand technology” — Zia Khan

Continuing with the parenting metaphor, do we want to care for our artificial offspring like carpenters — defining all the rules early on and following the plan — or do we want to be like gardeners, allowing the algorithms to flourish within a set framework while trying to nurture them and maintain boundaries?

“My view of it is that we need to now create a new institution that can continue being the gardener for AI because AI is going to leave home soon, and we hope it becomes a productive member of society, but there’s a lot of ways people can go when they leave home,” said Khan.

For the Rockefeller Foundation senior VP, a new institution should be created to govern AI, but what would that look like?

Should the future of AI governance be held to a democratic vote of the people, or should it be placed under the stewardship of philanthropists, technologists, or other organizations with deep pockets and agendas?

“We need some political mechanism to decide what are the goals that we want as a society when AI is incorporated” — Zia Khan

While Khan admits that he doesn’t have all the answers on who should be behind the institutions to govern AI, he is certain that they do need to exist.

Going back to the teenager metaphor, he says, "When someone leaves home, there's lots of things they can do. They can go to university. They can nod off. They can be an entrepreneur […] but we still expect them to follow some basic laws around goals that we see as a society.

“We need some political mechanism to decide what are the goals that we want as a society when AI is incorporated in that, and then, how do we ensure that the technology meets those goals?”

And that is one of the biggest debates going on in artificial intelligence circles right now, one highlighted in the AI+1 report: should governance be rules-based or outcome-based?

Focus too much on the rules, and you can get unexpected outcomes. A few years back, Microsoft had to kill its AI chatbot Tay after it turned into a foul-mouthed racist in less than 24 hours, and more recently OpenAI created a virtual game of hide and seek in which the AI agents unexpectedly broke the simulation's laws of physics to win.

By focusing on outcomes, the rules can bend and flex within a specific framework governed and guided by what the Rockefeller Foundation senior VP sees as a need for a new institution.

“I think that AI is overestimated in some cases and underestimated in other cases” — Zia Khan

At present, there are a lot of misconceptions about what AI can and cannot do, but as Khan points out, the more we study AI, the more we find out about ourselves.

“What’s really interesting about technology is that we learn more about humans as we understand technology,” he said.

“For example, you still don’t have a robot that can really open a door. Someone said once that when the killer robots come, all you have to do is close the door. You see all these crazy videos of robots doing flips and gymnastics — it’s a pretty simple problem, relatively speaking — but friction? They can’t handle it.”

He added that “it’s in studying robots that we learned our sense of touch is about a thousand times more sensitive than we thought before — similarly with our hearing and similarly with our smell.”

But when it comes to decision-making, right now AI is really good at the intuitive tasks we don’t think much about, like recognizing languages and images and counting things.

Human consciousness, on the other hand, keeps our minds occupied on many thoughts while juggling a plethora of emotions simultaneously in any given moment.

“As we understand AI better, we’re actually understanding human consciousness” — Zia Khan

That’s something, according to Khan, that AI can’t do right now, and being able to manage multiple thought processes is like an “executive function” that only people possess at present.

“As we understand AI better, we’re actually understanding human consciousness, and we’re understanding the role of emotion in helping with our cognition,” he said.

“These are the interesting frontiers we’re learning about the human mind and human body as AI progresses.”

The more we understand machines, the more we understand ourselves, and many companies working with AI are applying what they’ve learned and developed to directly benefit society in truly unique ways.

And there are some groups that have figured out that their AI solutions for one industry could prove beneficial in another.

For example, the Rockefeller Foundation works with a group called DataKind — “a fantastic organization” that has “an army of volunteer data scientists who want to apply their skills to social problems,” says Khan.

“They identify some social problems, and they get volunteer teams to help develop tools and applications.”

The Rockefeller senior VP cited DataKind’s work in Haiti as an example: the team used AI to optimize waste-disposal routes while maximizing pickups, an approach that could in turn help community health workers in Africa optimize their routes between communities.

“Anytime we can find something where one solution can be applied to another problem, it just really increases the efficiency of how we can solve all the challenges that we’re trying to solve,” said Khan.
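The article doesn't describe DataKind's actual method, but route problems like the Haiti example are often approached with simple greedy heuristics before heavier optimization. A minimal illustrative sketch (the function name and coordinates are hypothetical, not from DataKind's work):

```python
import math

def nearest_neighbor_route(depot, stops):
    """Greedy nearest-neighbor heuristic: starting from the depot,
    always visit the closest unvisited stop next.

    Fast and simple, though not guaranteed optimal; real routing
    systems typically refine such a tour with further optimization.
    """
    route = [depot]
    remaining = list(stops)
    current = depot
    while remaining:
        # Pick the unvisited stop nearest to the current position.
        nxt = min(remaining, key=lambda p: math.dist(current, p))
        remaining.remove(nxt)
        route.append(nxt)
        current = nxt
    return route

# Example: three pickup points ordered by proximity from the depot.
print(nearest_neighbor_route((0, 0), [(5, 5), (1, 1), (2, 2)]))
```

The same greedy skeleton transfers directly between domains — swapping waste pickups for patient visits changes the data, not the algorithm — which is the kind of reuse Khan describes.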

“All of these AI systems have a problem around bias, and that’s something we’re really starting to worry about” — Zia Khan

While algorithms can be redistributed to serve multiple purposes, problems arise when they pass along inherent biases in the code.

“All of these AI systems have a problem around bias,” says Khan, adding, “that’s something we’re really starting to worry about. In many ways, these tools can just reproduce and amplify the human biases that we have.”

The Rockefeller Foundation recently launched the $4 million Lacuna Fund aimed specifically at correcting the gaps and biases in data for AI solutions in order “to mobilize labeled datasets that solve urgent problems in low- and middle-income contexts globally.”

“The Lacuna Fund is meant to identify opportunities where we can fund labeled datasets that round out the training data available to algorithms, so that those algorithms can train themselves and remove the bias,” said Khan.
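The Lacuna Fund's real work is collecting new labeled data, but the underlying idea — that skewed training sets produce skewed models — can be illustrated with a toy rebalancing step. A sketch under that assumption (`oversample_minority` is a hypothetical helper, not Lacuna Fund code):

```python
import random
from collections import Counter

def oversample_minority(samples, labels, seed=0):
    """Duplicate examples from under-represented labels until every
    label appears as often as the most common one.

    This only reweights existing data; rounding out a dataset
    properly means gathering genuinely new labeled examples.
    """
    rng = random.Random(seed)
    counts = Counter(labels)
    target = max(counts.values())
    out_samples, out_labels = list(samples), list(labels)
    for label, count in counts.items():
        pool = [s for s, l in zip(samples, labels) if l == label]
        for _ in range(target - count):
            out_samples.append(rng.choice(pool))
            out_labels.append(label)
    return out_samples, out_labels

# Example: a dataset with twice as many "a" examples as "b".
samples, labels = oversample_minority([1, 2, 3], ["a", "a", "b"])
print(Counter(labels))  # both labels now equally represented
```

A model trained on the rebalanced set no longer sees the majority label disproportionately often, which is one crude proxy for the gap-filling the fund targets at the data-collection level.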

“COVID has laid bare a lot of the really deep and important problems” — Zia Khan

As AI permeates every industry and facet of society, bias will be a main issue to tackle. Beyond bias, though, this technology has the power to help ensure every human on earth is fed, clothed, and sheltered, depending on how it’s used and governed.

The arrival of the coronavirus pandemic has accelerated the discussion on how AI can best serve humanity and society at large.

For Khan, “Something like the COVID crisis gives us the opportunity to rethink big paradigm shifts.”

“In some way, COVID has laid bare a lot of the really deep and important problems, and I think it has heightened the urgency to think about new solutions,” he said.

“The current urgency of this crisis is demanding new thinking, and I think there are opportunities to deploy and apply AI to help in those cases.

“That’s going to help us learn about what AI can do, and hopefully we’ll keep an eye on the risks and manage those risks,” he added.

“The disruption that’s been created by COVID on so many different fronts gives us the opportunity to rethink really major paradigms” — Zia Khan

AI will be a technology that cuts across society, and the Rockefeller senior VP believes that AI governance will be directly linked to economics.

“I think there’s a linkage between how we think about regulating AI and a lot of the thinking that’s going on with people in economics,” he said.

“I think people are realizing that we need a new form of economics. The neoliberal economic paradigm of maximizing shareholder value, without accounting for costs to nature, etc., just isn’t working.

“I think we have to do some hard thinking around what is the value of data, how are we accounting for the value of data, and I think that will lead to how we think about regulating and managing AI, but also the broader economic rules, and market rules, and the role of government. I think these will be more tightly coupled going forward,” he added.

“How we think about managing AI will be coupled with how we think about economic models” — Zia Khan

For Khan, “The disruption that’s been created by COVID on so many different fronts gives us the opportunity to rethink really major paradigms, and how we think about managing AI will be coupled with how we think about economic models.”

The AI teenager is about to leave home. Will it go off and learn to do what is best for society, or will its own experiences shape it into a rebellious force of destruction?

The way forward, according to the Rockefeller Foundation’s senior VP of innovation, is to create a framework for governance that guides AI towards a prosperous future for humanity.


Tim Hinchliffe
Tim Hinchliffe is the editor of The Sociable. His passions include writing about how technology impacts society and the parallels between Artificial Intelligence and Mythology. Previously, he was a reporter for the Ghanaian Chronicle in West Africa and an editor at Colombia Reports in South America. tim@sociable.co