Philosophy

AI Myth-making: How Stories and Power Shape the Future of War

By George Semaan

The powerful myth-making surrounding AI in modern warfare casts artificial intelligence as an inevitable, unstoppable force promising clean, efficient, and decisive conflicts. A new collection of essays in Minds and Machines challenges this narrative, arguing that the rise of military AI is sustained not by pure technological progress, but by a potent mix of myth, geopolitical power, and corporate interests.

In “Myth, Power, and Agency: Rethinking Artificial Intelligence, Geopolitics and War,” a team of researchers led by Raluca Csernatoni and Dennis Broeders argues that AI’s “apparent inevitability is sustained by powerful myth-making and storytelling that normalize experimentation, accelerate escalation, and mask unequal structures of extraction and domination”. This collection dissects the stories we are told about AI in conflict, from Silicon Valley boardrooms to the battlefields of Ukraine.

In this context, the authors aren’t using “myth” to mean something that isn’t real. Instead, a myth is a powerful, guiding story that shapes our beliefs and actions.

The central myth they identify is the narrative that military AI’s rise is inevitable, unstoppable, and will lead to cleaner, more efficient warfare. This “myth-making” is a tool used to shut down ethical debate, normalize risky experimentation, and entrench the power of the corporations and states who benefit from this narrative.

The Global War Lab: A Dangerous Experiment

One of the most pervasive myths is that of the “global war lab,” where conflict zones become testing grounds for new technologies. Essay contributor Marijn Hoijtink points to the war in Ukraine, which is commonly described by media, politicians, and tech companies as a “living laboratory” for military AI. This framing presents a dangerous paradox: “the more militaries and high-tech industries push for rapid prototyping and agile adaptation, the more they postpone or displace warfare’s moral and political reckoning”.

This narrative of constant experimentation, borrowed from Silicon Valley’s “act fast, take risks” ethos, blurs the line between technology development and its use in active war. It reconfigures war itself as an opportunity for innovation, creating a self-perpetuating cycle where conflict is necessary to sustain technological superiority.

The Illusion of Control and the Problem of AI Myth-making

Another critical area of AI myth-making involves human control. While developers promise that humans will remain “in the loop,” the reality is far more complex. Ingvild Bode argues that we must look beyond simple control to “human agency,” which she defines as the “socioculturally mediated capacity to act”.

Integrating AI into decision-making creates a “distributed agency” where humans and algorithms share cognitive tasks. This relationship is not a simple hierarchy. AI systems can channel and influence human behavior in unforeseen ways, creating well-documented problems like automation bias and skill degradation. The challenge, Bode notes, is that military approaches often treat these issues as simple problems to be solved with better technology, ignoring the fundamental changes to human agency itself.

The New AI Empire

The narratives driving military AI are deeply entangled with geopolitics and corporate power. Raluca Csernatoni introduces the concept of an “AI Empire,” a new form of decentralized power co-produced by tech giants and state militaries. In this new order, Big Tech companies are not just state proxies; they wield “autonomous geopolitical influence in their own right”.

These corporations control the critical infrastructure of the digital age, from data flows and compute power to the algorithms that govern our lives. The language of “disruption” is used to frame AI as a transformative force that reorders societies, justifying an “intensifying synergy of entrepreneurial capital and martial logic”. This dynamic raises profound questions about accountability when private companies become central actors in global conflicts.

The Ancient Passions of Modern War

Even with the most advanced technology, war remains a deeply human affair. Jon R. Lindsay draws a compelling parallel to Homer’s Iliad, the ancient epic of the Trojan War. He argues that the poem is not just about godlike heroes but about “the passion and suffering of men at war”. The rage of Achilles, a central theme of the poem, is driven by masculine rivalry, traumatic loss, and a thirst for vengeance.

Lindsay contends that these “Homeric passions” will inevitably shape how military AI is used. Debates about AI often focus on rational concerns like aligning machine goals with human values. However, they overlook the powerful emotions that have always driven conflict: fear, hate, status, and rage. He concludes that “the true horrors of battlefield AI come less from the misalignment of machines and more from the Homeric judgment of the warriors who wield them”.

A Call for Critical Reflection

Ultimately, the collection serves as a critical warning against the seductive narratives surrounding military AI. The authors demonstrate how AI myth-making obscures the complex ethical, political, and human realities of technologically mediated warfare. By presenting AI as a deified, inevitable force, powerful actors can stifle democratic debate and normalize automated violence.

The collection calls for a shift in perspective, urging us to look past the hype and critically examine the stories being told. It is a vital step toward “reclaiming space for critical reflection, democratic oversight, and accountable design” in an age where technology is profoundly reshaping war and peace.
