Overview: can democracy keep up with AI development?

October 17, 2017


Implicit in Microsoft CEO Satya Nadella’s suggestion that technology companies should play a leading role in guiding society’s AI progress is the latent issue of governmental lethargy on the topic.

Nadella’s comments are the latest in a developing public discussion between business and lawmakers around how to deal with the prospect of AI’s effect on society. Having gained increasing attention, the discussion reached new prominence earlier this year when Elon Musk aired his view that AI posed a “fundamental existential risk for human civilization.”

But why is this issue in the spotlight, what does the disagreement rest on, and what is being done about it?

Why do people disagree about the potential impact of AI development?

Differences of opinion on the impact of AI, and on the need for imminent regulation, revolve around differing predictions of how quickly its effects will be felt. On one side of the fence are those who look back at previous shifts in employment, such as the industrial revolution, or at the technology introduced thus far, and judge the current trend of change to be similar.

In other words, there will be a linear increase in the introduction of new technology, which the workforce will adapt to as it always has done, and there is no need to rush into fear-inciting discussion or inhibitory regulation.

The counter-view, which favors swift action, is based on the law of accelerating returns (Ray Kurzweil's extension of Moore's law), which suggests the increase in AI's capability will be exponential rather than linear, and as such will outstrip society's ability to author its own development if we don't get ahead of it.

Part of the difficulty proponents of this view have is that exponential growth is a counter-intuitive notion. One example which goes some way toward helping to comprehend it is the football stadium analogy, which proposes that you are seated on the top level of the stadium as it is filled with water. Beginning at 1pm, one drop of water is added, and the number of drops added each minute then doubles: 2, 4, 8, 16, 32, and so on. At 1.45pm the stadium would still be 93% empty and you would be happily observing the phenomenon from your top step; four minutes later, at 1.49pm, you would either be swimming or drowning.
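The arithmetic behind the analogy checks out, and can be verified in a few lines of Python. The only assumption made here, for illustration, is that the stadium's capacity equals the total drops accumulated by 1.49pm:

```python
# Stadium analogy: 1 drop at 1.00pm, then the drops added each minute double.
# Cumulative drops after minute m: 1 + 2 + 4 + ... + 2**m = 2**(m + 1) - 1.
# Assumption (for illustration): the stadium is exactly full at minute 49.
capacity = 2**50 - 1  # total drops accumulated by 1.49pm

def filled_fraction(minute):
    """Fraction of the stadium filled `minute` minutes after 1pm."""
    return (2**(minute + 1) - 1) / capacity

print(f"1.45pm: {filled_fraction(45):.1%} full")  # about 6% full, i.e. ~93-94% empty
print(f"1.49pm: {filled_fraction(49):.1%} full")  # completely full
```

Each doubling adds roughly as much water as all previous minutes combined, which is why the final four minutes do almost all the filling.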

There are objections to the exponential view which suggest growth is tailing off; in any case, there is nothing to indicate that the shape of this technological advancement resembles previous advancements affecting the workforce, nor that growth won't take off again, fueled by new developments. As such, the evidence that there is nothing to worry about is thin, especially considering AI doesn't need to "take our jobs" to cause deep global and societal fissures.

If AI only took low-skilled jobs, we could be in equality-reversing territory even before we arrive at Musk's perfectly conceivable doomsday predictions. Forget widespread unemployment, and instead imagine what a general election campaign might look like with an unemployment rate of 15-20%, especially given the argument that technological globalization has already driven desperate and populist electoral decisions.

What are governments doing?

They’re doing something, but the question is whether it’s enough. Last year, the Obama administration set up the subcommittee on Machine Learning and Artificial Intelligence, which sounds like a strong move in the right direction. However, the charter under which it was set up describes its responsibilities as essentially to watch what’s going on and report on it; there is no mention of coordinating or introducing the policies necessary for the progressive development of AI.

Around the same time, the National Science and Technology Council also released its report on preparing for the future of AI which was produced “as a contribution toward preparing the United States for a future in which AI plays a growing role… and the questions that are raised for society and public policy by progress in AI. The report also makes recommendations for specific further actions by Federal agencies and other actors.”

Using driverless cars and drones as a case study, the report concludes that the US government should help set the agenda for public debate, monitor developments and adapt regulatory frameworks accordingly, support research and development, and use AI itself to better serve the public. That said, if driverless cars are to be taken as an example of the US government's approach, there's cause for concern, as attempts to make progress have been hamstrung by ineptitude.

The US and the UK have produced driverless car guidelines, though it remains to be seen how useful they will be. And in the spirit of potentially useful but as yet untested legislation, we also have the EU's GDPR, which grants citizens a "right to explanation," and the UK's National Surveillance Camera Strategy, which purports to protect citizens' data. All of which goes some way toward governments updating themselves in line with new technology, but does not amount to getting ahead of the curve on artificial intelligence.

What are businesses doing?

The business community is moving in a number of different directions in order to make some progress. The tech sector looks to be trying to reconcile its self-imposed social responsibility, as the only truly knowledgeable entity on AI, with its desire (expressed in the Tech Against Terrorism report) for self-regulation over legislative regulation.

OpenAI, a non-profit which seeks to democratize AI development, is the organization Musk chose to buy into as part of his social-responsibility stance on tech development. It cites as its raison d'être: "By being at the forefront of the field, we can influence the conditions under which AGI is created. As Alan Kay said, 'The best way to predict the future is to invent it.'"

Alongside these efforts exist the Partnership on AI and AI Now, an inter-company committee and a research initiative respectively, which cite the potential for well-meaning AI developments to create inadvertent negative consequences. But, as we have pointed out at The Sociable previously, these private setups seem to vindicate the idea that some form of ethical guidance is required while removing the process by which said guidance is developed from the public eye.

In a further vindication of concerns about inconsistencies in the business world's stated intentions, the major players are at once in outward agreement on the necessity of ethical guidance and in disagreement with each other on the same issue. These non-profit organizations have been set up amid conflicts between Musk and Zuckerberg, and between IBM and the government.

Conclusion

First of all it’s important to recognize that the process of developing something which has game-changing potential in a democracy is, if done perfectly, not going to be smooth. The means by which democracies achieve their ends are designed to be the product of disagreement and contradictory intentions. As such, the ability to perceive a complete mess does not equate to the identification of systemic ineptitude or nefarious methodologies. The question, as I see it, is whether the products of the perceived mess are in keeping with societal requirements of the pace of development.

So far, rising from the melee, we can observe guidelines being formed on topics like driverless cars in the US and UK, and privacy at EU level, which are certainly timely given Tesla's progress and Google's fine. These might be imperfect, but to my mind it's more important to have something in place which can be iterated on than to leave the sector devoid of regulation in pursuit of perfection.

The next question is whether timely action by government will continue to keep pace. Based on current performance, it seems to have the ability to act when necessary, even if devoid of an effective overall guiding body. So while we might have cause to doubt future ability, there isn't enough evidence to assume failure. Pressure from the public and business is what has produced governmental action thus far, so democracy is displaying an ability to keep up, if only just, and not in all applicable jurisdictions. Therefore only a maintenance of such pressure on our part can drive the results we need.
