AI Won't End Mutually Assured Destruction (Probably) | Sam Winter-Levy & Nikita Lalwani
About this title
How AI interacts with nuclear deterrence may be the single most important question in geopolitics — one that may define the stakes of today’s AI race. Nuclear deterrence rests on a state’s capacity to respond to a nuclear attack with a devastating nuclear strike of its own. But some theorists think that sophisticated AI could eliminate this capability — for example, by locating and destroying all of an adversary’s nuclear weapons simultaneously, by disabling command-and-control networks, or by enhancing missile defence systems. If they are right, whichever country acquired those capabilities first could wield unprecedented coercive power.
Today’s guests — Nikita Lalwani and Sam Winter-Levy of the Carnegie Endowment for International Peace — assess how advances in AI might threaten nuclear deterrence:
- Would AI be able to locate nuclear submarines hiding in a vast, opaque ocean?
- Would road-mobile launchers still be able to hide in tunnels and under netting?
- Would missile defence become so accurate that the United States could be protected under something like Israel’s Iron Dome?
- Can we imagine an AI cybersecurity breakthrough that would allow countries to infiltrate their rivals’ nuclear command-and-control networks?
Yet even if deterrence survives, Sam and Nikita argue, AI could make the nuclear world far more dangerous. It could spur arms races, encourage riskier postures, and force dangerously short response times. Their message is urgent: AI experts and nuclear experts need to start talking to each other now, before the technology makes any conversation moot.
Links to learn more, video, and full transcript: https://80k.info/swlnl
This episode was recorded on November 24, 2025.
Chapters:
- Cold open (00:00:00)
- Who are Nikita Lalwani and Sam Winter-Levy? (00:01:03)
- How nuclear deterrence actually works (00:01:46)
- AI vs nuclear submarines (00:10:31)
- AI vs road-mobile missiles (00:22:21)
- AI vs missile defence systems (00:28:38)
- AI vs nuclear command, control, and communications (NC3) (00:35:20)
- AI won't break deterrence, but may trigger an arms race (00:43:27)
- Technological supremacy isn't political supremacy (00:52:31)
- Fast AI takeoff creates dangerous "windows of vulnerability" (00:56:43)
- Book and movie recommendations (01:08:53)
Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
Music: CORBIT
Coordination, transcripts, and web: Nick Stockton and Katy Moore