Effects of AI on nuclear strategies, weaponizing tech, and why wars are fought

April 24, 2018

The desire of the powers that be to weaponize every technology ever developed has given rise to the threat of AI with regard to nuclear war, without anyone ever questioning why wars are fought in the first place.

The technology that goes into many of our basic household items, such as microwaves, stereos, and even water hoses, has been weaponized: directed-energy microwave signals that can fry missile controls, long-range acoustic and infrasonic devices that can “cause behavior changes at frequencies too low to be audible,” and water cannons that have been forcibly dispersing crowds of protesters for decades.

Intelligence agencies in particular have been implicated in plans to weaponize everything from smartphones and TVs to the weather and even modern art.

If a technology exists, chances are it has been weaponized, and in many cases it was probably conceived by the military before it ever became a commercial product (e.g. the internet, duct tape, and GPS).

It is with the same mentality that Artificial Intelligence (AI) is being weaponized, and a new report by the RAND Corporation argues that even the perceived threat of AI with regard to nuclear warfare could be more dangerous than the technology itself.

“The effect of AI on nuclear strategy depends as much or more on adversaries’ perceptions of its capabilities as on what it can actually do,” the report reads.

This is based on an idea observed by Alfred T. Mahan in 1912, when he wrote that “force is never more operative than when it is known to exist but is not brandished.”

If a nation has the capability of using AI as part of a nuclear strategy but does not intend to use it, how can its adversary be sure?

Some of the main concerns surrounding AI in warfare, nuclear or otherwise, are that it makes pre-emptive strikes all the more likely and that AI could one day achieve superintelligence and no longer be controlled by humans.

Let’s take a look at the first scenario, pre-emptive strikes, followed by superintelligence, and finally at why wars are fought.

AI as a paradigm shift in nuclear war

Since the end of the Second World War, defense systems have prioritized deterring attacks rather than responding to them after the fact. This has been the model for stability for the past 73 years, but that paradigm is now shifting with the rise of AI and machine learning.

That long-held stability, according to the World Economic Forum (WEF) Global Risks Report 2017, will see a shift towards Autonomous Weapons Systems (AWS), whose attacks “will be based on swarming, in which an adversary’s defense system is overwhelmed with a concentrated barrage of coordinated simultaneous attacks.”

What is alarming about this technology is that it disregards the human impulse to prevent attacks before they start, which is key to international diplomacy. Instead of playing a game of diplomatic chess, defense systems will be responding to constant swarming attacks specifically designed to find every weakness and exploit it to the fullest.

According to the WEF report, these swarm attacks risk “upsetting the global equilibrium by neutralizing the defense systems on which it is founded. This would lead to a very unstable international configuration, encouraging escalation and arms races and the replacement of deterrence by pre-emption.”

The RAND report makes a similar point with respect to the Cold War.

“During the Cold War, both the United States and the Soviet Union begrudgingly accepted the condition of mutual assured destruction (MAD)—the premise that any all-out attack would be met with an apocalyptic retaliatory strike ensuring that both societies would be destroyed. MAD was a condition, rather than a strategy— one that both superpowers hoped to escape if possible.”

However, with the introduction of AI into the equation, the MAD condition gets a little more complicated.

“AI could undermine the condition of MAD and make nuclear war winnable, but it takes much less to undermine strategic stability. AI advancements merely need to cast doubt on the credibility of retaliation at some level of conflict. Major nuclear powers, such as the United States, Russia, and China, have a shared interest in maintaining the credibility of central deterrence, but they seek regional advantages in pursuit of what they regard as their core strategic interests.”

A pre-emptive AI swarm attack would not “think” in terms of deterrence as you and I would; it would simply wipe out its adversary so that it is no longer a threat. How’s that for deterrence?

If merely knowing that a nation has nuclear capabilities backed by Artificial Intelligence systems is enough to cause alarm, whether those systems are ever used or not, what would happen if the AI acted of its own accord?

Autonomous superintelligence in nuclear warfare

“With superintelligence, AI would render the world unrecognizable and either save or destroy humanity in the process,” the RAND report states. However, experts still disagree about the national security implications of AI.

According to the RAND report, these experts fall into three categories, which it labels Complacents, Alarmists, and Subversionists:

  • Complacents: these tend to believe that producing an AI capable of performing the types of tasks that would destabilize the nuclear balance is sufficiently difficult that it is unlikely to be achieved.
  • Alarmists: these hold the opposite view, believing that an AI could be capable of such tasks but should not be included in any aspect of nuclear warfare.
  • Subversionists: these focus on an adversary’s ability to alter, mislead, divert, or otherwise trick the AI, which could prove either stabilizing or destabilizing.

Some experts go as far as to suggest that “a future AI system could essentially be the arms control regime, monitoring compliance and adjudicating violations without human input.”

The idea of an AI superintelligence is one that is considered, but usually dismissed, in the AI and military defense communities.

“Superintelligence does not seem to be viewed as imminent or inevitable by the majority of experts in AI, but many supporters believe it merits attention because of the extreme nature of its costs and benefits, even if the likelihood of its occurrence is low.”

The RAND report concluded that “AI has significant potential to upset the foundations of nuclear stability and undermine deterrence by the year 2040, especially in the increasingly multipolar strategic environment.”

One thing that was made perfectly clear is that nobody can predict which scenario will come true:

  • the benign AI that is completely under control of humans,
  • the AI that is used as an adviser but is incapable of action,
  • the AI that is merely perceived by fearful adversaries as a nuclear threat, even though the government that possesses it has no intention of using it, or
  • the pre-emptive superintelligence that has no regard for deterrence and starts a nuclear war of its own accord.

The least terrible outcome, according to the report, is one where “if the nuclear powers manage to establish a form of strategic stability compatible with the emerging capabilities that AI might provide, the machines could reduce distrust and alleviate international tensions, thereby decreasing the risk of nuclear war.”

All things considered, the one question that nobody ever asks is, “Why are wars fought?” Most concentrate on how to stop war, how to prevent war, or how to win wars, but the answer to that question, which resides in the deepest chambers of the human heart, rarely surfaces, because people are not willing to search within their own souls for the answers they already possess.

Why are wars fought?

P.D. Ouspensky, protégé of the philosopher George Gurdjieff, outlined his esoteric reasoning for why wars are fought in his book “In Search of the Miraculous,” published in 1949.

According to Ouspensky’s conversations with Gurdjieff, “Wars cannot be stopped. War is the result of the slavery in which men live.”

So long as men act like machines and do not think for themselves by first knowing themselves, they will always be slaves to outside forces, including war and those who advocate it.

In order to stop war, man must first gain inner freedom. This happens on an individual level and not at peace conferences, which, according to Ouspensky, exemplify “laziness and hypocrisy.”

“The first reason for man’s inner slavery is his ignorance, and above all, his ignorance of himself. Without self-knowledge, without understanding the working and functions of his machine, man cannot be free, he cannot govern himself and he will always remain a slave, and the plaything of the forces acting upon him.”

This is why all ancient teachings and mystery schools first demand of the pupil, “KNOW THYSELF,” for without knowing how you as an individual think or behave, you cannot possibly begin to change how others think or behave.

If you play sports but have a broken arm, you cannot attempt to help your team win until you fix yourself first.

The causes of war, according to the teachings of Ouspensky and Gurdjieff, come from both within ourselves and from without. The outside world can be understood by studying the inner self.

If we can learn why we do what we do; if we can remember ourselves; and if we can liberate ourselves from external forces such as governmental narratives, the opinions of our peers, or what a group of AI experts says about nuclear war, then maybe we can begin to develop our consciousness to the point where it becomes collective.

Until then, while we remain slaves of our own devices, our own devices will continue to propagate war. Our own devices will continue to do what they have always done, rule over us by keeping us in an invisible cage that we never knew existed.

The most effective means of control is the conviction that we are free.

The entire RAND report is full of well-reasoned and intriguing information, but it also presupposes that war is inevitable, that we cannot control ourselves, and that the threat of nuclear war and AI will remain with us for decades to come. It does not and cannot offer any answers as to why wars are fought or why the nuclear threat will always persist.

The so-called experts are all part of the machine that begins with inner slavery. “Emancipate yourselves from mental slavery, none but ourselves can free our minds,” sang Bob Marley.

“It’s a never-ending battle for a peace that’s always torn,” sang another Bob, Dylan that is. And to reference yet another musical source on how wars have carried on for a thousand years, there is the notion of the Universal Soldier:

He’s the one who gives his body
as a weapon to a war
and without him all this killing can’t go on
He’s the universal soldier and he
really is to blame
His orders come from far away no more
They come from him, and you, and me
and brothers can’t you see
this is not the way we put an end to war
