AI has, time and again, proved its worth in situations where it was tested within set parameters, learning to walk, talk, and put human geniuses to shame.
However, on the journey toward sentience, scientists are now working to make machines understand what they see.
Quantum computing and data feeding
Previously, we discussed how quantum computing works. Essentially, instead of using data to produce a single outcome, machines analyze every piece of data to develop all possible outcomes, which not only makes them versatile but also widens their understanding of human concepts.
However, machines still rely entirely on humans to feed them every parameter of a topic, along with the desired outcomes, so the AI can project the wanted result. But what if, just like humans, machines could look at something and create their own replica of it?
In an experiment conducted by researchers from the Georgia Institute of Technology, an AI was tasked with recreating 2D side-scrolling games like Super Mario and Mega Man. Here’s the catch, though: instead of being given access to the game’s code, the machine was only provided with actual footage of the game being played, along with an explanation of what it was seeing.
Matthew Guzdial, the lead author of the paper, tells the story better in an interview with The Verge.
“For each frame of the video we have a parser which goes through and collects the facts. What animation state Mario is in, for example, or what velocities things are moving at.”
“So imagine the case where Mario is just above a Goomba in one frame, and then the next frame the Goomba is gone. From that it comes up with the rule that when Mario is just above the Goomba and his velocity is negative, the Goomba disappears.”
While The Verge’s article discusses possible outcomes in an upgrade from 2D to 3D gaming, it’s obvious that the main targets are real-world applications.
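To make the idea concrete, here is a toy sketch (my own illustration, not the researchers’ actual engine) of the frame-differencing approach Guzdial describes: parse simple "facts" from two consecutive frames, then hypothesize a rule from what changed between them. The fact format and the rule template are assumptions for demonstration only.

```python
# Each frame is a dict of facts: sprite -> {position, velocity}
# (a stand-in for what the video parser would extract per frame).
frame_t = {
    "Mario":  {"x": 10, "y": 5, "vy": -1},   # negative vy = moving downward
    "Goomba": {"x": 10, "y": 4},
}
frame_t1 = {
    "Mario":  {"x": 10, "y": 4, "vy": 0},
    # The Goomba is gone in the next frame.
}

def learn_rule(before, after):
    """Hypothesize rules from sprites that disappeared between frames."""
    rules = []
    for sprite in before:
        if sprite not in after:
            # Look for a sprite directly above the vanished one, falling.
            for other, facts in before.items():
                if other == sprite:
                    continue
                if (facts.get("x") == before[sprite]["x"]
                        and facts.get("y", 0) > before[sprite]["y"]
                        and facts.get("vy", 0) < 0):
                    rules.append(
                        f"IF {other} is above {sprite} AND {other}.vy < 0 "
                        f"THEN {sprite} disappears"
                    )
    return rules

print(learn_rule(frame_t, frame_t1))
# -> ['IF Mario is above Goomba AND Mario.vy < 0 THEN Goomba disappears']
```

A real system would, of course, score many candidate rules across thousands of frame pairs before keeping one; this only shows the shape of a single induction step.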
What we’re trying to get at here
Humans are better at understanding situations; machines are better at computing solutions. Take the time International Space Station Commander Barry Wilmore needed a special wrench that wasn’t at his disposal, only for NASA to email him a model of one, which was later 3D printed.
Now, imagine AI reaching a point where Commander Wilmore could just show the computer what he needed, and the machine would immediately provide him with the optimal solution to his situation.
Such an elaborate model can only be explained using technologies that are currently fictional (e.g., Iron Man’s J.A.R.V.I.S. and Knight Rider’s KITT). As silly as it sounds, let’s actually take some of those fictional technologies that once seemed out of our realm and see whether they sound plausible in our day and age.
In episode 26 of the original Knight Rider series, KITT pulls a “Voice Stress Analyzer” out of its bag of tricks, recognizing disturbance in the voice of Michael (the main protagonist and KITT’s driver) and concluding that the situation is an emergency.
Such a technology is now easy to imagine as real, considering that AI can be connected to sound-level analyzers and voice recognition, and that’s what I’m getting at here.
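As a rough illustration of the idea (not how KITT, or any production system, actually works), a crude stress heuristic could flag erratic loudness in a voice signal. Real voice-stress analysis uses pitch jitter, spectral features, and trained models; the window size, sample values, and the variance-based score below are all made-up assumptions.

```python
def rms(window):
    """Root-mean-square energy of a window of audio samples."""
    return (sum(s * s for s in window) / len(window)) ** 0.5

def stress_score(samples, window_size=4):
    """Variance of per-window RMS energy: erratic loudness -> higher score."""
    energies = [rms(samples[i:i + window_size])
                for i in range(0, len(samples) - window_size + 1, window_size)]
    mean = sum(energies) / len(energies)
    return sum((e - mean) ** 2 for e in energies) / len(energies)

calm    = [0.2] * 16                                   # steady voice level
erratic = [0.1] * 4 + [0.9] * 4 + [0.05] * 4 + [0.8] * 4  # spiky, agitated level

print(stress_score(erratic) > stress_score(calm))      # erratic scores higher
```

The point is not the specific heuristic but that the building blocks (audio capture, feature extraction, a learned threshold) all exist today.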
AI is under constant development toward sentience, because in some situations humans will need it to respond without being fed a full dataset on the problem. Life-changing technologies always start with a small example; this one starts with Super Mario.