
Is It Possible to Build AI and Not Lose Control Over It?

January 21, 2017


We’re already seeing glimpses of what artificial intelligence (AI) might look like in our high-tech world, but can we control it?

Computers are processing information with unprecedented speed and accuracy, and many believe our favorite devices are already outsmarting us. While some see AI as a tool for building a better world, others see it as a potential threat to humanity. In a TED Talk published last September, neuroscientist and philosopher Sam Harris argues that we have failed to grasp the dangers that creating AI poses to humanity.

The slow but sure development of improved AI

Artificial intelligence is a term denoting intelligence exhibited by machines, or machine behavior that mimics human intelligence. We already see early forms of AI in computers that process ever-larger amounts of information in ever-shorter spans of time. But in the near or distant future, humans may develop machines that are smarter than we are, and those machines may continue to improve themselves on their own, Harris said in his TED Talk.

The dangers AI poses to humankind

Although Harris’s statements sound more like science fiction than scientific observation, he believes such scenarios are quite possible. The neuroscientist and philosopher explains in his TED Talk that the danger is not that malicious armies of robots will attack us, but that even a slight divergence between our goals and those of superintelligent machines could destroy us. To illustrate his stance on uncontrolled AI development, Harris draws an analogy with how humans relate to ants: we don’t hate ants, but when their presence conflicts with our goals, we annihilate them. In the future, we could build machines, conscious or not, that treat us the same way we treat ants.

Are these scenarios plausible?

To make his point, Harris offers three assumptions that, if true, make a scenario in which AI machines destroy humanity plausible. The first is that intelligence is a product of information processing in physical systems, whether organic or inorganic. The second is that humanity will continue to improve the intelligence of its machines. The third is that humans are nowhere near the peak of possible intelligence. If all three assumptions hold, Harris sees no reason why artificial intelligence turning against its creators couldn’t be a real outcome in the distant future.

Other viewpoints

But Harris isn’t the only one speaking up about the dangers of uncontrolled AI development. Stephen Hawking recently joined the discussion at the opening of the new Leverhulme Centre for the Future of Intelligence (LCFI) at Cambridge University. However, the renowned physicist’s viewpoint on the topic is somewhat different.

Read More: Stephen Hawking inaugurates artificial intelligence research center at Cambridge

According to Hawking, artificial intelligence poses a real threat to humanity when it falls into the wrong hands or is used in ways that won’t benefit mankind. He gives the examples of autonomous weapons and new ways for the few to oppress the many. So it’s not the machines themselves that will annihilate us, according to Hawking, but their misuse. He also notes that AI can be of great benefit to humanity when used wisely.

Where is mankind heading?

To both Hawking and Harris, the problem seems to lie in what exactly humanity is trying to achieve by improving AI. We’ve seen great technological advancements used in the worst possible ways before (think nuclear weaponry), so it’s natural to learn from history and anticipate AI being used in all the wrong ways.

Read More: 5 Facts You Should Know About Machine Learning and AI

Harris also sees a problem in relying on artificial intelligence for all intellectual work, as this could lead to wealth inequality and levels of unemployment never before seen in human history. Even today, Intelligent Virtual Assistant (IVA) software is performing tasks that, not so long ago, only humans could do.

Time matters

When we hear stories of AI taking over the planet, we usually think in time frames of 50 to 100 years in the future, or possibly even more. But Harris points out that estimating how much time we have before some form of superintelligence arrives is neither important nor useful. The reality is that we have no idea how long it will take humanity to create the conditions for advanced AI. What we should focus on instead are the consequences of our striving for better, faster, and more intelligent machines.

Conclusion

Technology has helped humankind in countless ways, but technological advancements have also created problems for our environment and, with that, our safety on Earth. As we strive to improve AI, the dangers of those efforts may begin to reveal themselves. According to Harris, we may witness these dangers in the near future, and we should at least be discussing the possibility.

