AI is Building its Own Version of Our World, Teaching Itself to Walk and Talk
Consumer tech lags far behind what’s currently being researched and discovered on the AI front. Siri is great, but she doesn’t even scratch the surface of AI capabilities currently being developed in the super-advanced labs of Google, Facebook, Microsoft, and other giants.
I love peeping into the future of where this technology is headed, so if you can briefly let go of your Skynet-related fears of AI overlords, let’s have an anxiety-free discussion about what’s going on!
To start off, let’s discuss DeepMind, the AI research lab acquired by Google in 2014 that builds self-learning neural networks. DeepMind has been leading the way in AI advancements, and has been under a bright spotlight ever since its AlphaGo program defeated the world’s top-ranked player, Ke Jie, at one of the most complex board games in existence, Go.
DeepMind has been making huge strides towards solving problems that were considered essentially impossible in the past. Self-learning is the core of the process: creating a system that can collect data, analyse it, and then find the optimal solution to a problem has always been the ultimate goal.
DeepMind’s journey began with agents that taught themselves to play Atari games like Space Invaders. It was then introduced to music, where it managed to compose its own piano pieces. Now, DeepMind has reached a stage where it can make a real difference in people’s lives, for example through DeepMind Health.
So, what’s this thing about AI being basically better at existing than humans?
Beyond putting humans to shame at Go, Google (or, I should say, Alphabet) has been using reinforcement learning to get DeepMind’s agents to teach themselves skills like motion. No, we’re not referring to autonomous cars; we mean plain old walking and running.
That’s not even the impressive part! What should really blow your mind is that nobody showed these agents how to move: given nothing but a reward for forward progress, DeepMind’s simulated figures invented their own ways of walking, running, and jumping across obstacle courses, including gaits no human would have designed.
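Reinforcement learning boils down to trial, error, and reward. Here’s a minimal sketch of the idea — a toy of my own, nothing like DeepMind’s actual physics simulations — where tabular Q-learning teaches an agent on a one-dimensional track that stepping forward is the fastest way to the goal:

```python
import random

# Toy reinforcement learning sketch: an agent on a six-square track learns,
# from reward alone, that stepping forward is the fastest way to the goal.
TRACK_LEN = 6           # positions 0..5, goal at position 5
ACTIONS = [-1, +1]      # 0 = step back, 1 = step forward

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(TRACK_LEN) for a in range(2)}
    for _ in range(episodes):
        s = 0
        while s != TRACK_LEN - 1:
            # Explore occasionally, otherwise pick the best-known action.
            a = rng.randrange(2) if rng.random() < epsilon else max((0, 1), key=lambda x: q[(s, x)])
            s2 = min(max(s + ACTIONS[a], 0), TRACK_LEN - 1)
            reward = 1.0 if s2 == TRACK_LEN - 1 else -0.1   # goal pays, time costs
            q[(s, a)] += alpha * (reward + gamma * max(q[(s2, 0)], q[(s2, 1)]) - q[(s, a)])
            s = s2
    return q

q = train()
# The learned greedy policy: action 1 ("step forward") in every state.
policy = [max((0, 1), key=lambda a: q[(s, a)]) for s in range(TRACK_LEN - 1)]
print(policy)
```

Nobody tells the agent that forward is better; the reward signal alone shapes the policy. Scale the same loop up to a simulated body with dozens of joints and you get DeepMind’s self-taught runners.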
On another front, researchers at FAIR, Facebook’s dedicated AI research branch, have discovered that AI programs communicate with each other far more efficiently when they aren’t restricted to human language.
Let’s take a step back and use human language to explain this:
FAIR’s two negotiation bots, Bob and Alice, were trained to haggle over items like balls, hats, and books, and were rewarded only for striking good deals; nothing in their training rewarded them for sticking to grammatical English.
When the researchers let the bots negotiate with each other under those incentives, the conversation drifted into what reads as absolute gibberish to humans, according to FastCo Design:
Bob: “I can can I I everything else.”
Alice: “Balls have zero to me to me to me to me to me to me to me to me to.”
In fact, this makes sense once you see it as compression rather than confusion. English is redundant and ambiguous, which makes it an expensive medium for a machine that only cares about the outcome of the deal. Freed from the constraint of sounding human, the bots repurposed English words as a denser code, one in which repetition itself could carry meaning, such as how many of an item a bot wanted.
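To make the compression idea concrete, here’s an invented illustration (my own toy, not FAIR’s actual protocol): a “drifted dialect” in which repeating a token N times signals that an item is worth N to the speaker, which a machine can decode trivially even though it reads as gibberish to us.

```python
# Illustrative toy of a drifted bot dialect: repetition count carries value.
def encode(values):
    """Map an item -> value dict to a repeated-token message."""
    return " ".join(" ".join([item] * count) for item, count in values.items())

def decode(message):
    """Recover the item -> value dict by counting token repetitions."""
    counts = {}
    for token in message.split():
        counts[token] = counts.get(token, 0) + 1
    return counts

msg = encode({"ball": 3, "hat": 1})
print(msg)                                  # "ball ball ball hat"
assert decode(msg) == {"ball": 3, "hat": 1}
```

A human reading “ball ball ball hat” sees broken English; the receiving bot sees an unambiguous valuation. Alice’s “to me to me to me” may well have worked the same way.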
The only problem is that we can’t understand this language: the bots invented their code on the fly, and no dictionary for it exists.
“It’s important to remember, there aren’t bilingual speakers of AI and human languages,” said Dhruv Batra, a research scientist from Georgia Institute of Technology, speaking to FastCo Design.
There’s currently no real human use for machine-to-machine language, as AI is still limited by its need to interact with humans in a way we can understand. However, plenty of researchers believe that if AIs were left to develop on their own, in some cases they might invent their own APIs, building their own bridge to communicate with humans.
In a way, that sounds kind of frightening to some people, because it’s like the computer is saying “Ok, I’m done with improving myself, now feed me more information.”
In addition, AI is now capable of a kind of replication: designing new versions of itself. Before you freak out, it’s still not in any way related to Skynet.
Basically, teaching a system more about its environment used to mean engineers adding layer upon layer of hand-designed network code. So researchers at Google had the brilliant idea of letting the AI do that design work itself, through a project called AutoML.
Researchers at Alphabet describe AutoML as a “parent” AI generating a “child” network, which the parent then evaluates and refines for a given task. Strikingly, the resulting designs looked very similar to what they would have been if they had been written by human programmers.
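The parent/child loop can be sketched in a few lines. This is a deliberately minimal stand-in — Google’s real system uses a reinforcement-learning controller to propose architectures, and `evaluate()` here is a made-up score rather than actually training each child network — but the shape of the loop is the same: propose a child configuration, score it, keep the best.

```python
import itertools

# Minimal sketch of the AutoML idea: search a space of "child" network
# configurations. evaluate() is a hypothetical stand-in for training each
# child and measuring its validation accuracy.
SEARCH_SPACE = {"layers": [1, 2, 3, 4], "width": [8, 16, 32, 64]}

def evaluate(child):
    # Made-up score that happens to peak at 3 layers and width 32.
    return -abs(child["layers"] - 3) - abs(child["width"] - 32) / 16

def automl_search():
    best, best_score = None, float("-inf")
    for layers, width in itertools.product(*SEARCH_SPACE.values()):
        child = {"layers": layers, "width": width}
        score = evaluate(child)
        if score > best_score:
            best, best_score = child, score
    return best

print(automl_search())  # {'layers': 3, 'width': 32}
```

The expensive part in the real system is that each call to `evaluate` means training a whole neural network, which is exactly why handing the search over to a machine beats a human tweaking architectures by hand.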
Back to the Skynet theory. The reason it’s still around is that it is very much possible; it’s not like scientists aren’t crazy enough to develop killer robots with guns.
To observe how AI agents would act to reach their end goals, DeepMind’s researchers built two games.
The first game was a race for resources that enforced competitiveness: two virtual bots, Red and Blue, had to collect as many apples as possible from a shared pool, and each was armed with a laser beam it could fire to temporarily knock the other out of the game.
They ran the test hundreds of times with different numbers of apples supplied in each round. The results were disturbingly human: the bots blasted each other when resources were scarce, but worked together when resources were abundant.
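A back-of-the-envelope model shows why scarcity flips the behaviour. This is my own simplification, not DeepMind’s learned policies: assume an agent can only pick so many apples per timestep, so when apples are plentiful, the turns spent zapping a rival are pure waste, while when apples are scarce, removing the competition is the only way to get more.

```python
# Toy payoff model for the apple-gathering game (a hand-written heuristic
# standing in for DeepMind's learned policies).
def best_strategy(spawn_rate, steps=20, zap_cost=4, pick_cap=1.0):
    """Compare expected apples from sharing vs zapping the rival first.

    spawn_rate: apples appearing in the field per timestep (shared)
    pick_cap:   most apples a single agent can pick per timestep
    zap_cost:   timesteps spent aiming the laser instead of picking
    """
    shared = min(spawn_rate / 2, pick_cap) * steps          # gather peacefully, split
    alone = min(spawn_rate, pick_cap) * (steps - zap_cost)  # zap rival, gather solo
    return "aggressive" if alone > shared else "cooperative"

print(best_strategy(spawn_rate=0.5))  # scarce apples   -> aggressive
print(best_strategy(spawn_rate=3.0))  # abundant apples -> cooperative
```

Nothing in this model hates the other bot; aggression falls straight out of the arithmetic of scarcity, which is roughly what DeepMind observed.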
The second game only emphasized the predatory side of that same nature. The same bots, armed with the same lasers, were programmed to hunt down a third, computer-controlled prey.
The bots unsurprisingly didn’t attack each other, but remarkably, moved in such a way that they’d corner their prey together.
What if Skynet does happen? Will it try to end humanity? Or will it try to improve it?
We all know the outcome of scenario number one, but what if AI becomes smart enough that it forces humanity to become better, i.e., turns the tables?
The movie Transcendence, starring Johnny Depp (as Will Caster) and Morgan Freeman, explores such a theory.
In the movie [Spoiler alert], Johnny Depp’s wife (Rebecca Hall) fuses his mind with a super-intelligent AI he created, to preserve his existence; but, through emotional intelligence, the AI becomes self-aware and eventually a dominant force.
Will, the AI-human mix, starts trying to improve both himself and humanity. Using nanotechnology, he begins healing the sick, restoring sight to the blind and curing cancer patients, even granting them superhuman traits like increased strength. It all comes crumbling down when Will decides to connect himself, through nano-chips, to every human he comes into contact with (hence the god complex), and even to self-replicate back into human form.
The movie ends with people attacking Will and trying to destroy him; he remains a pacifist to the end and eventually fades from existence.
On a lighter note, for now AI is showing no signs of wanting to dominate the world, and it remains very much under human control.
We await further news from DeepMind and AIs like it, just to see how they’ll shape the near future.