The Golem Protocol:
AI, War, and the Illusion of Control
by Iain Overton | June 17, 2025

In the Jewish folklore of 16th-century Prague, Rabbi Judah Loew ben Bezalel fashioned a man out of clay. This golem, brought to life by the Hebrew word אמת (emet, meaning “truth”) inscribed on its forehead, was intended to defend the Jewish ghetto from anti-Semitic violence.
It obeyed at first, powerful and tireless. But over time, the golem grew unruly. Its strength became a threat. To deactivate it, the Rabbi removed the first letter of emet, leaving מת (met) or “death.” With one stroke, truth became mortality, and the clay man collapsed.
This was no mere tale of magical protection. The golem’s narrative is a parable of hubris – of overreach. A warning from early modernity about the dangers of giving life to that which lacks a soul. The golem was not evil, but indifferent. It followed orders without understanding their consequences. And therein lay its danger. It was not the horror of malevolence, but the horror that comes from blind and total obedience.
We should fear, the parable tells us, any creature of logic born without wisdom or conscience. For that creature, born under the premise of being a saviour, becomes a threat. Not because it rebels, but because it performs far too well.
From Myth to Machine: The Golem Reborn in Code
In the age of artificial intelligence, the golem walks again. This time, it is not built from clay but from algorithms and digital data – or, more specifically, from our digital data. And it is fast entering the physical world. We are seeing it fly overhead in drones. We glimpse it in quadruped, dog-like machines crawling through war zones. We see its logic embedded in targeting systems that transform pattern recognition into decisions of life and death.
This should give us pause – not only because of what it is, but because of what it’s made from. Alongside code and silicon, this golem is also constructed from hubris. We hear it in the language of military strategists who promise “enhanced lethality,” in engineers boasting of “autonomous mission execution,” in defence scientists lauding AI’s ability to “reduce pilot cognitive load.”
Beneath that vocabulary lies an ancient myth in modern dress: the dream of creating a servant so obedient, so capable, that it will kill without hesitation, and without remorse. The real danger is not that the machine might one day say “no,” but that it will never say anything at all. That it will kill – or, as one analyst warned, bring us to the brink of nuclear war – “subtly and without warning.”
Governments are quietly investing in AI weapon systems that operate faster than human response times. From sensor fusion to automated targeting, the promise is speed, efficiency, and precision. The UK Ministry of Defence has already laid out frameworks for AI-enabled battlefield decision-making.
As the Royal Navy’s First Sea Lord declared in 2023:
“It is causing us to reimagine warfare, creating dynamic new benchmarks for accuracy, efficiency and lethality.”
What is being rolled out is not an AI that deliberates, but one that calculates; not a thinker, but a sorter. Kill lists backed by neural networks.
Obedience Without Conscience: The Ethics of Autonomous Killing
To those in industry, this sounds like progress. But to those focused on the civilian costs of war, it is chilling. Delegating lethal decisions to machines, however incrementally, further severs the fraying thread of accountability in armed conflict.
Who is to blame when the wrong target is hit? The software vendor? The data analyst? The procurement officer? Autonomous weapons allow militaries to operate with plausible deniability coded into their command chains.
The fog of war becomes the firewall of war.
This is not simply a technical debate. As philosopher Matt Segall has argued, the machines we build are shaped by the gods we imagine. If we model AI on a god of control – distant, omniscient, punitive – our machines will reflect that logic. They will become surveillance drones, algorithmic judges, death-dealing platforms. But if we imagine intelligence as collaborative, creative, and bound by vulnerability, perhaps AI could serve more as a midwife to peace than an angel of death.
Defence ministries are not in the habit of contemplating theology, or golems for that matter. They are concerned with tactical advantage. And in this realm, AI is a seductive weapon. It promises to eliminate friction: no fatigue, no hesitation, no guilt. But in doing so, it eliminates something essential. It reduces moral weight. There is a profound difference between a soldier who chooses to kill and a machine that is programmed to do so. One bears the burden of conscience; the other executes code.
Already we have seen what happens when algorithms are handed the reins. In Gaza, Yemen, and Afghanistan, metadata has guided lethal strikes – often with devastating mistakes. A man making too many calls, a heat signature near a suspected site, a behavioural anomaly in a drone feed: these have sufficed to justify missile strikes. Layering AI atop these processes is not a step toward wisdom. It is an acceleration toward unaccountability.
The logic of the golem is at work. Machines made to serve become dangerous not because they rebel, but because they obey too well. In seeking security through automation, we risk building systems that reflect our fears, not our values – amplifying error while insulating it from scrutiny. The more we exalt efficiency, the more we sacrifice ethics. And the more we insist that our machines are “just tools,” the more they become arbiters of life and death.
The questions we must ask are urgent: What future do we imagine for AI in war? One in which machines help reduce harm, verify casualties, support truth commissions and post-conflict recovery? Or one in which they select, strike, and vanish – answerable to no one?
The golem never woke up. It was never conscious. It merely did what it was told. And that, precisely, is what made it so dangerous.
Iain Overton is a British investigative journalist and the Executive Director of Action on Armed Violence.
