LONDON: The Trump administration’s decision last week to drop the artificial intelligence company Anthropic in favour of OpenAI, disparaging Anthropic as a radical “woke company” run by “left-wing nut jobs”, reflects a deep clash between ethics and military priorities. Anthropic refused to loosen safeguards on its Claude model, particularly restrictions against mass surveillance and autonomous weapons. The Pentagon, however, wanted broader operational freedom from any such moral constraints, and when Anthropic held its line, officials labelled the company a supply-chain risk and moved on. Anthropic’s competitor, OpenAI, by contrast, agreed to work with the Defence Department, retaining some safeguards but accepting the partnership, and in doing so it secured the valuable contract.
This typically “Trumpian” episode brought to the fore the race to integrate AI into warfare, and nowhere is this more evident than in America’s close partner, Israel. In past wars, the pace of action was limited by human capacity: analysts sorting intelligence, officers debating targets, pilots waiting for coordinates. Today, in Israel’s military campaigns in Gaza and Iran, much of that deliberation is increasingly resolved by machines. AI, once associated with recommendation engines and chatbots, is now embedded in the decision-making machinery of war, and both Gaza and Iran have become the most consequential testing grounds yet for algorithmic warfare. This raises urgent questions about accountability, legality, and the future of conflict.
At the centre of this transformation are AI-driven targeting systems developed by the Israeli military and its elite intelligence unit, Unit 8200. These tools ingest vast quantities of surveillance data, including phone intercepts, drone imagery, social-network activity, and geolocation signals, and convert it into recommendations for lethal strikes. The technology promises speed and efficiency, but critics argue it also enables a scale and tempo of violence that human oversight struggles to match. The most widely reported systems are known as “Lavender” and “The Gospel.” Together, they form a digital architecture for identifying both people and places as military targets.
Although Israel’s policy is to keep details secret, investigative reports and expert analysis indicate that Lavender is an AI-powered database designed to flag individuals suspected of links to militant groups, such as Hamas or Iran’s IRGC. At one stage of the war in Gaza, the system reportedly listed as many as 37,000 Palestinian men as potential targets. The system works by analysing patterns in data: communication networks, behavioural signals, and demographic indicators. Once a person is algorithmically classified as a suspected militant, their name may be passed along the targeting chain for potential assassination or airstrike. In theory, a human analyst remains “in the loop.” In practice, however, former intelligence officers have described a process in which analysts often approve AI-generated targets extremely quickly, sometimes with only seconds of review.
If Lavender identifies people, The Gospel identifies buildings. This system scans surveillance data to detect structures believed to house militants, weapons caches, or command centres, then recommends bombing targets to human operators, dramatically expanding the pool of potential targets. Where human analysts might previously have generated dozens of targets over months, AI systems can generate hundreds within days; the result is a radical acceleration of the targeting process. Military leaders have openly embraced this shift, describing AI as a way to overcome a long-standing operational constraint: the scarcity of approved targets relative to the Israeli Air Force’s strike capacity. With algorithmic tools producing target lists at scale, that bottleneck disappears. Critics argue that the transformation has profound consequences for civilians, because AI systems rely on statistical inference, not certainty. Lavender has reportedly been rated at about 90 percent accuracy in internal evaluations, yet even that margin of error becomes significant when tens of thousands of people are flagged: applied to a list of 37,000 names, a 10 percent error rate can translate into thousands of misidentified individuals being killed.
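That arithmetic is simple enough to make explicit. A minimal back-of-envelope sketch, assuming the figures cited in the reporting above (a 37,000-name list and roughly 90 percent accuracy, neither independently verified), would look like this:

```python
# Illustrative arithmetic only, using figures from the investigative
# reporting cited above; the real list size and error rate are unverified.
flagged = 37_000           # people reportedly listed as potential targets
reported_accuracy = 0.90   # accuracy figure cited from internal evaluations

misidentified = flagged * (1 - reported_accuracy)
print(f"Estimated misidentifications: {misidentified:,.0f}")  # about 3,700
```

And that figure counts only false identifications; it says nothing about civilians harmed in strikes on correctly identified targets.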
Human rights organisations and legal scholars warn that the speed and scale of algorithmic targeting risk undermining core principles of international humanitarian law, including the obligation to distinguish between combatants and civilians and to avoid disproportionate attacks. The ethical debate is further complicated by how AI interacts with operational policy. Investigations into the war have suggested that the Israeli military set pre-authorised thresholds for civilian casualties when targeting particular militants, meaning that certain numbers of civilian deaths could be considered acceptable collateral damage, depending on the target’s rank. When such policies are combined with automated target generation, critics argue the result can resemble what one analyst described as an “industrialisation” of targeting. Algorithms produce lists, analysts validate them rapidly, and strikes follow. The tempo of war becomes machine-paced.
Israel and its supporters reject the characterisation that AI removes human judgment. The Israeli military maintains that its systems are decision-support tools rather than autonomous weapons. Human analysts review the outputs, and commanders ultimately decide whether to authorise a strike. Proponents argue that algorithmic systems may actually reduce civilian casualties by improving precision and enabling the military to analyse far more intelligence than humans could process alone. This argument echoes a broader narrative emerging among technologically advanced militaries, from the United States to China, where defence planners increasingly see AI as essential for navigating a data-saturated battlefield. Modern surveillance systems generate enormous streams of information from drones, satellites, electronic intercepts, and sensors that cannot realistically be analysed by human teams alone. In this context, Israel’s battlefield in Gaza is often viewed by defence analysts as a preview of future warfare, one in which AI-driven targeting systems promise to fuse data streams into real-time intelligence, enabling what militaries call a “sensor-to-shooter” loop measured in minutes rather than hours.
Yet Gaza has also exposed the darker side of that transformation. One emerging concern is the opacity of algorithmic decision-making, as machine-learning systems often operate as black boxes, producing outputs without clearly explaining how those conclusions were reached. In civilian contexts, such as loan approvals or hiring decisions, this opacity can be problematic; in warfare it can be lethal. Legal accountability becomes especially complex when decisions are shaped by algorithms. If an AI system misidentifies a target and civilians are killed, who bears responsibility? The software developers who trained the model? The intelligence analysts who approved the recommendation? The commanders who authorised the strike? Or the state that deployed the system? International law has yet to fully grapple with these questions, as existing frameworks were designed for human decision-makers, not algorithmic infrastructure.
Beyond targeting, Israel is reportedly experimenting with additional AI tools to enhance surveillance and intelligence analysis. Investigations have revealed efforts by Unit 8200 to build large language models, similar to commercial conversational AI systems, that can analyse vast collections of intercepted Arabic communications. These systems are trained on billions of words from phone calls and messages collected through surveillance programmes and are intended to help analysts detect patterns or extract insights from massive datasets. The goal is to automate another bottleneck in intelligence work: the interpretation of raw information. By asking an AI system questions about individuals or networks, analysts could theoretically navigate enormous troves of surveillance data far more quickly than before. But here too, critics see troubling implications. Training AI models on private communications raises serious privacy and human rights concerns, particularly when those communications come from civilian populations living under occupation or siege. Moreover, such tools blur the line between intelligence analysis and predictive policing. If an algorithm identifies “suspicious” patterns in someone’s communications, does that constitute sufficient evidence to label them a militant? And once that label is applied, how easily can it be challenged?
The Gaza and Iran wars are not the only theatres where AI-driven military technologies are shaping Israel’s strategy. Israeli defence firms and military research units have long been at the forefront of AI-enabled surveillance, drone systems, and autonomous weapons. Technologies developed in counterinsurgency operations often migrate into broader military doctrine, and eventually into global arms markets. For decades, Israel’s security environment has served as a laboratory for new technologies, from missile defence systems to cyberwarfare tools. AI appears to be the latest frontier.
The implications, of course, extend far beyond the Middle East. If algorithmic warfare proves operationally effective, other militaries will likely follow, and the result could be an accelerating global race to automate the battlefield. Analysts already warn that AI could lower the threshold for war: when machines generate thousands of targets quickly, the political and psychological barriers to launching strikes may erode. War becomes not only faster but also easier to sustain. In Gaza, the human consequences of that transformation are already visible in the scale of destruction and loss of life.
While debates about military necessity and legality continue, one fact is clear: artificial intelligence is no longer a distant future in warfare. It is present, operational, and shaping decisions about life and death. The deeper question is not whether AI will be used in war; it already is. The real question is whether political institutions, legal frameworks, and ethical norms can evolve quickly enough to govern it. If they cannot, the Gaza and Iran conflicts may be remembered not only for their devastation but also as the moment when the algorithm truly entered the battlefield.
John Dobson is a former British diplomat, who also worked in UK Prime Minister John Major’s office between 1995 and 1998. He is currently a visiting fellow at the University of Plymouth.