
‘What Will You Regulate?’: Secrecy and Consequences in AI Regulation

Should governmental AI regulation become the law of the land, or should AI be left to corporations and entrepreneurs to do with as they please?

Elon Musk says we should regulate AI; Mark Zuckerberg disagrees. Broadly speaking, to the political left “regulation” means stopping big companies from ignoring employees and consumers at the behest of their share price, while to the right it means preventing innocent innovators from fueling the economy – but do these divisions apply to AI in the same way?

Musk’s request for lawmakers to step in and regulate AI before “it’s too late” came while he was speaking at the National Governors Association, where Arizona’s Governor Ducey had a follow-up question for him. Ducey challenged Musk with the old metaphor of someone introducing to the market an odourless, colourless, explosive gas. The line of thinking goes that something lethal and undetectable is surely dangerous and would be banned, so we’re lucky methane (the gas in question) was introduced prior to regulation, because we now have so many uses for it.

Former White House chief data scientist DJ Patil echoed Ducey’s sentiment during a talk to DataGiri in Mumbai: “It’s not obvious how you regulate [AI]. We know how to regulate certain types of medical work because we have a field called bioethics. That’s why it’s so important to have ethics as part of every curriculum.” And he fired a question back at Musk: “what will you regulate?”

Read More: White House report blends ethical AI practice with military applications

And that seems to be the question around which the debate revolves. AI is so new we don’t yet know where it’s going; we just set machines little tasks to see how they do. So if we’re going to regulate it based on where it might be in the future – a future that will look very different precisely because of AI – where do we start?

But we seem stuck watching big-name word-tennis. Meanwhile, Patil’s perfectly valid question goes unanswered, which is understandable given the debate is happening via news headlines. But it must be possible to break it down and give a fair question a fair crack. So…

As far as I know, no one disputes that AI could one day pose a threat to humanity – just that Musk’s comments were perhaps alarmist or misplaced. Alarmist because AI isn’t going to arrive as quickly as he thinks; misplaced because, even if it does pose a threat, regulating something we don’t understand will be counterproductive.

Read More: AI-human hybrids are essential for humanity’s evolution and survival: Elon Musk

Regulation in general, according to free-market economics, puts limits on businesses and thus strangles growth, innovation and prosperity – three good things. So we should wield it with care and, if in doubt, keep away from it.

Let’s take these two points in turn, starting with the charge of alarmism.

How soon will AI be here? There are lots of estimates, but realistically we don’t know. If Musk is worried about AI and Zuckerberg isn’t, that might be because Musk’s team is further ahead with its research and capabilities. Part of the problem here is that the companies chasing AI are competing in the private sector, so any knowledge they gain is proprietary.

Read More: 2021 should be very disruptive year for tech: predictions from industry leaders

They’re unable to reveal to the public what they’re working on and what they’ve found because it would be handing the information to competitors, allowing them to catch up. Even once they’ve created something they’re unlikely to announce how they did it.

This might sound odd, but we can draw a potentially instructive parallel with the Human Genome Project. It was a public project, so all findings were to be publicly released within 24 hours and no one could patent any part of it. Had this not been the case, we could have had private bodies that “owned” the data for certain genes. And if private companies owned gene data, coupled with no regulation, we might have ended up with those companies working on stem cell research and growing humans artificially in labs (so-called “designer babies”).

Celera Corporation, a private company racing the public endeavour to sequence the human genome, planned to file patents on 6,000 genes before Clinton and Blair secured an agreement on the public release of the information. Decades later, there’s talk of selling that information, with Google an interested buyer.

It was partly because the research was being done publicly that governments were able to regulate stem cell research before any major catastrophe. Regulation slowed the research, and the US government has since relaxed the rules slightly. But aren’t we glad we didn’t get ahead of ourselves?

At the moment, AI is heading down the private route, with the companies involved keeping their cards very close to their chests. Which means we don’t know what direction they’re taking their research in, and we don’t know how much progress they’ve made. So if Musk says AI is scary and Zuckerberg says it isn’t, we can only guess at why.

Read More: Partnership on AI vs OpenAI: Consolidation of Power vs Open Source

In any case, you would generally expect competitive companies to be anti-regulation, so the fact that Musk is asking for it is telling.

We just don’t know what their teams are capable of, and that leads on to the second point of whether Musk’s comments were misplaced.

If we don’t know what AI companies are working on, then lawmakers have no idea where or what to regulate – they’re throwing darts blindfolded, so to speak. So Patil and Ducey are right to take a questioning stance towards Musk’s view. Driverless cars, for example, will probably be quicker to brake or take evasive action than human drivers, and they’ll be less likely to make mistakes.

But. If a driverless car is headed for a crash and calculates that in every scenario someone – be it passenger, pedestrian or another driver – will probably die, what is it programmed to do? Who does it kill? Faced with this rare but unavoidable ethical hypothetical, lawmakers could inadvertently slow or block the introduction of cars that are net-safer, by regulating against the conundrum and forcing AI companies around the problem, or away from it entirely.
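To see what “programmed to do” actually means in practice, here’s a deliberately toy sketch in Python. Everything in it – the manoeuvre names, the probabilities, the equal weighting of lives – is invented for illustration; no real autonomous-vehicle system is documented here. The point is simply that someone has to write the rule down, and regulating this scenario means legislating that rule.

from dataclasses import dataclass

@dataclass
class Manoeuvre:
    name: str
    p_passenger_fatality: float    # estimated probability of death, 0 to 1
    p_pedestrian_fatality: float
    p_other_driver_fatality: float

def expected_fatalities(m: Manoeuvre) -> float:
    # The simplest conceivable rule: minimise total expected deaths,
    # weighting every life equally. A regulator could mandate a different
    # weighting, and that choice is exactly the ethical conundrum.
    return (m.p_passenger_fatality
            + m.p_pedestrian_fatality
            + m.p_other_driver_fatality)

def choose(manoeuvres):
    # Pick whichever available manoeuvre minimises the cost function.
    return min(manoeuvres, key=expected_fatalities)

options = [
    Manoeuvre("brake hard", 0.1, 0.4, 0.0),   # expected deaths: 0.5
    Manoeuvre("swerve left", 0.5, 0.0, 0.2),  # expected deaths: 0.7
]
print(choose(options).name)  # "brake hard", under equal weighting

Swap the equal weighting for one that always protects pedestrians, say, and a different manoeuvre wins – which is exactly the kind of choice any regulation here would be freezing into law.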

Read More: Self-Driving Cars Might Not Be Just Around The Corner After All

That being said, the car scenario is relatively short-sighted. What Musk is scared of is computers smart enough to manipulate humans using what he referred to as “deep intelligence in the network” – the kind that is not only smarter than humans but has real-time access to vast amounts of data and can manipulate information.

“Let’s say you had an AI,” Musk started, during his NGA interview, “[whose] goal was to maximise the value of a portfolio of stocks. One of the ways to maximise value would be to go long on defence, short on consumer and start a war… It could hack into [an] airline’s aircraft routing server, route [a plane] over a warzone, then send an anonymous tip that enemy aircraft is flying overhead right now.” He ended there, with a shrug that almost said “tell me I’m wrong?”

Read More: AI human cyborgs are next on Elon Musk’s agenda with the launch of Neuralink

Further, he pointed out that the speed question isn’t only about how quickly general AI will arrive. Once it does arrive, it will accelerate the rate of development itself. So politicians might be working out how to legislate something that appeared on the scene a few months earlier, in which time these companies, now endowed with AI, have released two or three new products.

Now, Musk might think we’re closer to this than Zuckerberg does, but that’s actually beside the point, or points. There are four points which are, I think, the most pertinent and from which a conclusion can be drawn:

  1. AI-powered cars and medical equipment are going to happen relatively soon, even according to Zuckerberg, and they’ll have to contend with ethical decisions.
  2. All-purpose general AI bots or “deep network intelligence” may or may not happen soon, but they will happen eventually, and they’ll have to contend with ethical decisions.
  3. The potential negative consequences of no regulation are robots killing humans, or getting humans to kill each other – and not necessarily on a small scale.
  4. We (humans) have not previously dealt with anything which has this scale of disturbing consequences – it changes the equation. So we should consider the question of whether to regulate AI as distinct and separate from previous rules, guidelines and philosophies of governance and economics.

I’ll accept the free-market argument that regulation strangles innovation. But let’s make safe mistakes over dangerous ones. I’d rather innovation get strangled than people.

So back to Patil’s question, “What will you regulate?”

Ben Allen

Ben Allen is a traveller, a millennial and a Brit. He worked in the London startup world for a while but prefers commenting on it to working in it. He has huge faith in the tech industry and enjoys talking and writing about the social issues inherent in its development.
