
Is it Possible to Build AI and Not Lose Control Over it?


We’re already seeing glimpses of what artificial intelligence (AI) could look like in our high-tech world, but can we control it?

Computers are processing information with unprecedented speed and accuracy, and many believe our favorite devices are already outsmarting us. While some see AI as something that could help us create a better world, others see it as a potential threat to humanity. In a TED Talk published last September, neuroscientist and philosopher Sam Harris argues that we have failed to grasp the dangers that creating AI poses to humanity.

The slow but sure development of improved AI

Artificial intelligence is a term denoting intelligence that is exhibited by machines or that mimics human intelligence. We already witness early forms of AI in computers that process ever larger amounts of information in ever shorter spans of time. But in the near or distant future, humans may develop machines that are smarter than we are, and those machines may continue to improve themselves on their own, Harris said in his TED Talk.

The dangers AI poses to humankind

Although Harris’s statements sound more like science fiction than scientific observation, he believes such scenarios are quite possible. The neuroscientist and philosopher explains in his talk that the danger is not that malicious armies of robots will attack us, but that the slightest divergence between our goals and those of superintelligent machines could eventually destroy us. To illustrate his stance, Harris draws an analogy with how humans relate to ants. As he puts it, we don’t hate ants, but when their presence conflicts with our goals, we annihilate them. In the future, we could build machines, conscious or not, that treat us the same way we treat ants.

Are these scenarios plausible?

To make his point, Harris offers three assumptions that, if true, make a scenario in which AI destroys humanity plausible. The first is that intelligence is a product of information processing in physical systems, be they organic or inorganic. The second is that humanity will continue to improve the intelligence of its machines. The third is that humans are nowhere near the peak of possible intelligence. If all three assumptions hold, Harris sees no reason why artificial intelligence turning against its creators couldn’t be a possible outcome in the far future.

Other viewpoints

But Harris isn’t the only one speaking up about the dangers of uncontrolled AI development. Stephen Hawking recently joined the discussion at the opening of the new Leverhulme Centre for the Future of Intelligence (LCFI) at Cambridge University. However, the renowned physicist’s viewpoint on the topic is somewhat different.

Read More: Stephen Hawking inaugurates artificial intelligence research center at Cambridge

According to Hawking, artificial intelligence poses a real threat to humanity when it falls into the wrong hands or is used in ways that won’t benefit mankind. He cites autonomous weapons and new ways for the few to oppress the many. So it’s not the machines themselves that will annihilate us, according to Hawking, but the misuse of those machines. He also notes, however, that AI can be of great benefit to humanity when used wisely.

Where is mankind heading?

To both Hawking and Harris, the problem seems to lie in what exactly humanity is trying to achieve by improving AI. We’ve seen great technological advancements put to the worst possible uses before (think nuclear weaponry), so it’s natural, learning from history, to expect AI to be used in all the wrong ways as well.

Read More: 5 Facts You Should Know About Machine Learning and AI

Harris also sees a problem in relying on artificial intelligence for all intellectual work, as this could lead to inequality of wealth and levels of unemployment unlike anything in human history. Even today, intelligent virtual assistant (IVA) software is performing tasks that, not so long ago, only humans could do.

Time matters

When we hear stories of AI taking over the planet, we usually think in time frames of 50 to 100 years in the future, or possibly more. But Harris points out that estimating how much time we have before some form of superintelligence arrives is neither important nor useful. The reality is that we have no idea how long it will take humanity to create the conditions for advanced AI. What we should focus on instead are the consequences of our striving for better, faster, and more intelligent machines.

Conclusion

Technology has helped humankind in countless ways, but technological advancement has also created problems for our environment and, with that, our safety on Earth. As we strive to improve AI, the dangers of those efforts may begin to reveal themselves. According to Harris, we may witness these dangers in the near future, and we should at least be discussing the possibility.


Vivian Michaels is a huge tech enthusiast, who likes to write articles on evolving technology. He is also a fitness coach with a sincere desire to help people achieve their individual fitness goals.

