Abstract: In September 2021, the "One Hundred Year Study on Artificial Intelligence" project (AI100) issued the second report of its planned long-term periodic assessment of artificial intelligence (AI) and its impact on society. It was written by a panel of 17 study authors, each deeply rooted in AI research, chaired by Michael Littman of Brown University. The report, entitled "Gathering Strength, Gathering Storms," answers a set of 14 questions probing critical areas of AI development: the major risks and dangers of AI, its effects on society, its public perception, and the future of the field. The report concludes that AI has made a major leap from the lab into people's lives in recent years, which increases the urgency of understanding its potential negative effects. The questions were developed by the AI100 Standing Committee, chaired by Peter Stone of the University of Texas at Austin and consisting of a group of AI leaders with expertise in computer science, sociology, ethics, economics, and other disciplines.
Abstract: The potential for advances in information-age technologies to undermine nuclear deterrence and shape the risk of nuclear escalation represents a critical question for international politics. One challenge is that uncertainty about the trajectory of technologies such as autonomous systems and artificial intelligence (AI) makes assessments difficult. This paper evaluates the relative impact of autonomous systems and AI in three areas: nuclear command and control, nuclear delivery platforms and vehicles, and conventional applications of autonomous systems with consequences for nuclear stability. We argue that countries may be more likely to use risky forms of autonomy when they fear that their second-strike capabilities will be undermined. Additionally, the potential deployment of uninhabited, autonomous nuclear delivery platforms and vehicles could raise the prospect of accidents and miscalculation. Conventional military applications of autonomous systems could simultaneously influence nuclear force postures and first-strike stability in previously unanticipated ways. In particular, the need to fight at machine speed and the cognitive risk introduced by automation bias could increase the risk of unintended escalation. Finally, if used properly, many applications of more autonomous systems in nuclear operations could increase reliability, reduce the risk of accidents, and buy more time for decision-makers in a crisis.