
INCREASING TRUST IN ARTIFICIAL INTELLIGENCE IN MILITARY DECISION CONTEXTS. GENERATIONAL AND COGNITIVE IMPLICATIONS

Florin TUDORACHE


Abstract: With the growing integration of artificial intelligence (AI) across civil and military domains, the question of how to foster appropriate trust in AI becomes critical, especially in high-stakes environments such as military operations. Drawing on generational shifts in technology adoption, research on cognitive biases, and security risks specific to military settings, this paper develops a framework for enhancing trust in AI while guarding against overreliance and data vulnerabilities. We argue that trust must be calibrated through training, transparency, human-machine teaming, and robust data governance. We identify both short-term and long-term hazards, including skill fade, corrupted data, and adversarial tampering, and propose policy and educational interventions to preserve critical thinking, assure data integrity, and maintain human agency in AI-augmented warfare.
Sustained collaboration between technical specialists and operational leaders is essential to ensure that AI systems support sound judgment and mission effectiveness over time. The paper concludes with recommendations for institutional design, future research, and the cultivation of a culture of responsible AI trust in defence organizations.
Keywords: trust in Artificial Intelligence, automation bias, technology adoption, military decision support systems, human-Artificial Intelligence teaming, data integrity

STUDIA SECURITATIS No. 2 2025 211-225