Artificial Intelligence Applications to Deterrence

Abstract

Deterrence is a cornerstone of military strategy, yet its theoretical foundations, particularly the conflicting predictions of Classical Deterrence Theory (CDT) and Perfect Deterrence Theory (PDT), lack the empirical validation required to develop effective AI-enabled decision aids. To address this limitation, we first propose using experimental methods, primarily the Mutual Deterrence Game (MDG), to rigorously test theoretical predictions and to investigate how psychosocial factors such as risk aversion and strategic reasoning (k-level thinking) influence human deterrence decisions. Second, we advocate leveraging these empirical insights to inform the development of AI tools. We demonstrate through simulation that a Reinforcement Learning (RL) agent in an iterated MDG can learn an adversary's preferences (such as aversion to inequity, to challenger gains, or to defender losses) and thereby model escalatory behavior over time. Recognizing the limitations and inherent escalatory biases of autonomous AI, we contend that a human-machine co-learning framework is the most effective implementation: humans and AI, including RL agents and Large Language Models (LLMs), work interdependently to update beliefs, generate courses of action, and model adversarial intent. This integrated research program provides the theoretical and technical foundation for advanced decision aids that can help policymakers reduce the likelihood of conflict.
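The abstract does not report the payoff matrix or the agent's learning procedure, so the following is only a minimal illustrative sketch of the preference-learning idea: it substitutes a grid-search maximum-likelihood fit of a logit choice model for the paper's RL formulation, recovering a hypothetical challenger's inequity-aversion parameter (a Fehr-Schmidt-style `alpha`) from its observed moves in an iterated MDG. All payoffs, parameter values, and function names are assumptions, not the authors'.

```python
import math
import random

# Illustrative 2x2 Mutual Deterrence Game payoffs (challenger, defender).
# The preprint does not report its matrix; these values are assumptions.
# Challenger: 0 = stand down, 1 = challenge. Defender: 0 = concede, 1 = resist.
PAYOFFS = {
    (0, 0): (2, 2),   # status quo
    (0, 1): (2, 3),   # challenger stands down against a resolute defender
    (1, 0): (4, 0),   # unopposed challenge: challenger gains, defender loses
    (1, 1): (-1, -1), # escalation: both sides pay a conflict cost
}

def challenger_utility(move, defender_move, alpha):
    """Fehr-Schmidt-style inequity aversion: the challenger discounts
    outcomes in which it ends up behind the defender by a factor alpha."""
    own, other = PAYOFFS[(move, defender_move)]
    return own - alpha * max(other - own, 0)

def challenge_prob(defender_move, alpha, temp=1.0):
    """Logit (softmax) choice rule: probability the challenger escalates."""
    gap = (challenger_utility(1, defender_move, alpha)
           - challenger_utility(0, defender_move, alpha))
    return 1.0 / (1.0 + math.exp(-gap / temp))

def simulate(alpha_true, rounds=500, seed=0):
    """Iterated MDG against a defender who plays uniformly at random;
    returns observed (defender_move, challenged) pairs."""
    rng = random.Random(seed)
    obs = []
    for _ in range(rounds):
        d = rng.randrange(2)
        c = int(rng.random() < challenge_prob(d, alpha_true))
        obs.append((d, c))
    return obs

def fit_alpha(obs, grid=None):
    """Grid-search maximum likelihood over the inequity-aversion parameter."""
    grid = grid or [i / 100 for i in range(201)]  # alpha in [0, 2]
    def loglik(alpha):
        ll = 0.0
        for d, c in obs:
            p = challenge_prob(d, alpha)
            ll += math.log(p if c else 1.0 - p)
        return ll
    return max(grid, key=loglik)

if __name__ == "__main__":
    observations = simulate(alpha_true=0.8)
    print("estimated inequity aversion:", fit_alpha(observations))
```

Aversion to challenger gains or to defender losses could be estimated the same way by adding the corresponding penalty terms to `challenger_utility`; the paper's RL agent presumably performs an analogous inference online rather than by offline grid search.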
