AI experts and enthusiasts now openly debate a term that began as a dark joke: p(doom), the estimated probability of a species-wide catastrophe caused by advanced AI.
Such fears are no longer considered fringe. On June 16, an Axios opinion column reported that at least 10 people had quit their jobs at major AI companies over concerns about extinction-level threats from artificial general intelligence (AGI).
AGI is often discussed in almost religious terms: some see it as the harbinger of the “singularity,” a utopia enabled by super-powerful AI, while others believe it is bound to turn against its creators.
p(doom) estimates from AI leaders including Musk and Altman
More consequentially, some of the tech executives making AI happen are among the loudest voices discussing p(doom).
Last week, OpenAI CEO Sam Altman wrote about his belief that “Humanity is close to building digital superintelligence,” a stage he has described as lying beyond AGI.
Anthropic CEO Dario Amodei has put p(doom) at 10% to 25%, according to an interview with podcaster Liron Shapira.
Elon Musk said in 2024 that there was a 20% chance AI would destroy humanity.
In June 2023, Microsoft CEO Satya Nadella told Wired that runaway AI could be “a real problem.” However, he said, the solution is to keep humans in charge of the context in which AI is used.
“I think about the first set of categories in which we should use these powerful models where humans unambiguously, unquestionably are in charge,” Nadella said. “And so as long as we sort of start there, characterize these models, make these models more safe and, over time, much more explainable, then we can think about other forms of usage, but let’s not have it run away.”
Axios reporters Mike Allen and Jim VandeHei argued that the doom scenario, if it arrives, would be driven by the development of AGI. They warned that an international AI “arms race” could accelerate the threat, as nations rush to outpace one another by removing “any and all AI restraints.”
p(doom) is not universally accepted as a reliable measure
It’s possible p(doom) is too imprecise to be meaningful, more a mindset than a calculable outcome. It may also be a countdown to something that will never happen. A 2025 survey by the Association for the Advancement of Artificial Intelligence found that 76% of AI researchers believe scaling up current approaches is unlikely to produce AGI.
For Allen and VandeHei, the primary concern is that AI might spark a technological arms race, in which case p(doom) becomes a metaphor for any advanced tool capable of mass destruction. Others suggest visions of doom are, in fact, grim marketing from AI executives, whose products loom larger in the mind’s eye if consumers believe they might destroy the world.
Read eWeek’s coverage of the growing concerns around AI chatbots and mental health risks, which explores how emotionally responsive systems may cause unintended psychological harm.