The RAND Corporation war-games scenarios to see whether AI could contribute to human extinction by facilitating nuclear war, creating and deploying pathogens, or carrying out malicious geoengineering.
According to three simulations described in the new RAND report “On the Extinction Risk from Artificial Intelligence,” AI is currently unlikely to wipe out humanity on its own; however, it could still cause considerable devastation if it were programmed to do so, given enough access to critical systems, and granted decisionmaking powers.
“The capabilities of artificial intelligence (AI) have accelerated to the point at which some experts are advocating that it be taken seriously as a credible threat to human existence”
RAND, On the Extinction Risk from Artificial Intelligence, May 2025
In arriving at their conclusions, the RAND authors war-gamed three scenarios in which AI could pose an extinction-level threat to humanity and examined what capabilities it would need to get there, many of which would depend on direct human intervention, naivete, or outright stupidity.
To create a true extinction threat to humanity in any of the three scenarios, the AI would need a set of minimum capabilities.
Let’s briefly go through the three scenarios one by one.
The report looks at three ways that AI could be leveraged to instigate nuclear war.
So, in order for AI to create nuclear Armageddon, humans would need to give it decisionmaking powers, the AI would have to deceive the humans who gave it those powers, and it would have to be granted access to the cyber systems that control the nukes.
However, the authors conclude, “We explored various ways that AI might lead to the use of nuclear weapons, and we could find no plausible way for AI to overcome existing constraints to cause extinction.”
AI could rapidly figure out ways to create new viruses, and once given control over robotics and software systems, it could unleash these pathogens on targeted populations.
The authors describe a scenario in which AI could take control of drones loaded with a virus to spray on populations; however, they argue that once enough people were eliminated, “the infrastructure that AI currently depends on for functionality would almost certainly shut down.”
This holds for all of the scenarios at present: if enough people were killed, there would be no one left to keep the power running, the data centers operating, or the AI itself functioning, or so the theory goes.
On AI and pathogens, the authors conclude, “We were not able to determine whether this scenario presents a likely extinction risk for humanity, but we cannot rule out the possibility.”
On the topic of malicious geoengineering, the RAND authors looked at how AI could somehow quietly manufacture and stockpile chemicals and greenhouse gases over time and then shoot them all into the atmosphere to cause widespread global heating that would kill off billions of people.
For AI to be an extinction threat in this scenario, it would need three minimum capabilities.
On malicious geoengineering, even without AI, the authors conclude that “this scenario does present a true extinction threat and a potential falsification of our hypothesis” because “geoengineering could threaten extinction through the mass manufacturing of gases with extreme global warming potential, thereby heating the earth to uninhabitable temperatures.”
At the same time, the authors admit that “it is unclear how AI might be instrumental in causing this effect.”
“Realistically, a capable adversarial actor might choose to employ multiple methods together to extinguish humanity”
RAND, On the Extinction Risk from Artificial Intelligence, May 2025
“In the three scenarios examined in this study — nuclear weapons, pathogens, and geoengineering — human extinction would not be a plausible outcome unless an actor was intentionally seeking that outcome. Even then, an actor would need to overcome significant constraints to achieve that goal”
RAND, On the Extinction Risk from Artificial Intelligence, May 2025
There is also the possibility that the three scenarios could be carried out in combination, which would increase the probability of an extinction event.
For example, the authors write that “it might not be necessary for an engineered pathogen to remain more than 99 percent lethal to humans if the release of the pathogen were paired with the launch of nuclear weapons at any surviving human population centers.”
Additionally, “Societal collapse and drastic reduction in the human population will make us less resilient to future natural catastrophes. Thus, there could be a high risk of extinction even with a viable surviving human population, simply because that population will be far more vulnerable to the next catastrophe.”
However, the authors acknowledge that we humans are resilient creatures, and that even if AI brought humanity to near-extinction, pockets of populations could survive, build up resistance to their harsh conditions, and start anew to keep the human race going.
In the end, the authors conclude that for nuclear weapons, pathogens, and geoengineering alike, “human extinction would not be a plausible outcome unless an actor was intentionally seeking that outcome. Even then, an actor would need to overcome significant constraints to achieve that goal.”
World War III may be fought with the help of cutting-edge AI, but will World War IV still be fought with sticks and stones?
Image Source: AI-generated by Grok