Tehran, Iran - When the first US and Israeli airstrikes hit Iran on 28 February, they arrived with an intensity and precision that marked something genuinely new in the history of armed conflict.
This was not a conventional war; from the outset, artificial intelligence was embedded at every level of operations, including planning, intelligence analysis, target selection, and cyber defence.
In the first 12 hours alone, an estimated 900 strikes hit targets in Iran, including an Israeli missile strike that killed Supreme Leader Ali Khamenei.
Military analysts and researchers say it is one of the first modern conflicts in which AI systems play a central rather than a supporting role.
"What we are seeing today is only the beginning," Dr Ali Mahdi, a researcher at Amir Kabir University of Technology in Tehran, told The New Arab.
"Future wars will depend more heavily on AI. Decisions will be faster and more complex, particularly as the human role in certain tactical decisions is reduced."
In previous conflicts, human analysts needed weeks or months to draw meaningful intelligence from satellite imagery, surveillance data, and field recordings.
In the war on Iran, advanced algorithms process millions of data points in minutes, integrating satellite feeds, drone footage, and ground-based monitoring into machine learning models that generate a real-time operational picture of a kind no previous war has seen.
This speed has transformed how decisions are made. Commanders gain near-instant assessments of Iranian troop movements, defensive positions, and weapons transfers. The gap between identifying a target and acting on it has collapsed from hours or days to minutes.
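It is possible to sketch, in outline, what such a fusion loop looks like. The Python example below is purely illustrative: the feed names, confidence values, and scoring rule are assumptions made for demonstration, not details of any system reportedly used in the conflict.

```python
# A minimal, hypothetical sketch of a multi-source fusion loop.
# All sources, locations, and confidence values are invented.
from dataclasses import dataclass
import heapq

@dataclass
class Detection:
    source: str        # e.g. "satellite", "drone", "ground_sensor"
    grid_cell: str     # coarse location bucket
    confidence: float  # this source's belief a target is present
    timestamp: float   # seconds since some epoch

def fuse(detections: list[Detection], window_s: float = 300.0):
    """Combine recent per-source confidences into one fused score per cell,
    using a naive independence assumption."""
    latest = max(d.timestamp for d in detections)
    cells: dict[str, float] = {}
    for d in detections:
        if latest - d.timestamp > window_s:
            continue  # stale reports are dropped, not fused
        prior = cells.get(d.grid_cell, 0.0)
        # P(at least one detection is genuine), treating sources as independent:
        cells[d.grid_cell] = 1.0 - (1.0 - prior) * (1.0 - d.confidence)
    # Highest fused score first: the machine ranks, a human decides.
    return heapq.nlargest(5, ((score, cell) for cell, score in cells.items()))

feeds = [
    Detection("satellite", "cell_42", 0.60, 1000.0),
    Detection("drone", "cell_42", 0.70, 1150.0),
    Detection("ground_sensor", "cell_17", 0.40, 1200.0),
]
for score, cell in fuse(feeds):
    print(f"{cell}: fused confidence {score:.2f}")
```

Even this toy version shows why the speed is seductive: the ranking updates the instant a new report arrives, long before any human has read the underlying imagery.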
Dr Shariatmadar Rahmati, a faculty member at Amir Kabir's Faculty of Computer Science, describes the architecture carefully.
"The algorithms analyse massive quantities of data and offer precise recommendations, but the final decision remains with the human analyst. This balance between AI and human judgment ensures that decisions are not taken in a fully automated way."
That balance, he argues, is not always maintained under pressure. "The speed that AI provides sometimes pushes analysts to make decisions in critical moments. If strict human oversight is not in place, this can lead to grave mistakes."
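Rahmati's warning can be made concrete with a toy example. Everything in the sketch below is hypothetical, including the confidence threshold and the time limit; the point is how easily a human gate disappears under time pressure.

```python
# A hypothetical human-in-the-loop gate. The "time pressure" branch is the
# failure mode Rahmati describes, shown deliberately; no real system's
# logic is depicted here.
def authorise_strike(target: str, model_confidence: float,
                     seconds_remaining: float, human_approve) -> bool:
    """Return True only if a model recommendation is authorised."""
    if model_confidence < 0.9:
        return False  # weak recommendations never reach a decision
    if seconds_remaining < 30:
        # Dangerous shortcut: skipping review when time is short is how
        # "grave mistakes" enter an otherwise human-gated pipeline.
        return True
    return human_approve(target, model_confidence)

deny_all = lambda target, conf: False  # stands in for a cautious analyst
print(authorise_strike("site_A", 0.95, 20.0, deny_all))   # True: review bypassed
print(authorise_strike("site_A", 0.95, 120.0, deny_all))  # False: human said no
```

The two calls differ only in the seconds available, yet one of them bypasses the analyst entirely.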
Models, including Anthropic's Claude, have reportedly become part of the surveillance and analysis systems used in military command, providing near-instantaneous recommendations on target prioritisation and processing millions of data points from satellites and monitoring systems.
Other reports suggest that the US administration has used AI tools to determine target priorities and execute strikes within hours, a timeline that would have been impossible without automated real-time analysis.
One of the more contested dimensions of the conflict involves the relationship between AI companies and the military establishments deploying their technology. Anthropic expressed reservations about the direct use of its systems for combat targeting, while the US Department of Defense pushed to expand the scope of their use in operations.
Independent AI expert Naim Zamani identifies what he sees as the fundamental problem.
"The system may recommend actions that are incomprehensible to humans, and these may be executed without adequate review, increasing the probability of errors and undermining compliance with ethical and international legal obligations. AI is not merely a tool. It has become part of the battlefield itself."
The legal stakes are significant. International humanitarian law requires combatants to distinguish between civilian and military targets at all times. When that distinction is made by an algorithm, questions of accountability become acute.
A deadly attack in the first few hours of the US-Israeli war, which killed at least 175 people at a girls’ school in Iran, highlights the grave consequences of these decisions. The school had reportedly been misidentified as a military site, although it is unclear whether outdated intelligence or AI decision-making was to blame.
"Any error in target classification can lead to human catastrophe," says Dr Rahmati. "This must be part of any strategic assessment before placing full reliance on AI in military operations."
Dr Mahdi is more direct. "Algorithms can misclassify targets, causing civilian installations to be struck as if they were military objectives. AI is not a neutral tool. It has a direct impact on the outcomes of war and on the decisions of commanders."
The war has not remained confined to physical territory. Iran has deployed AI-generated disinformation, including manipulated images and videos designed to create false impressions of events on the ground.
The United States and Israel have also deployed AI-powered defensive systems to detect and counter manipulation attempts in real time, creating what analysts describe as a multi-dimensional battlefield in which control of information becomes as strategically important as control of airspace.
"AI is not only used to identify targets on the ground," says Zamani, "but also in information warfare and digital deception. Controlling and analysing data faster than the opponent can change the media narrative and affect public morale, making the conflict with Iran a struggle that combines military theatre and digital information space."
In a conflict that has become a war of algorithms, where the capacity to analyse data outpaces the speed of physical combat, the predictive dimension adds another layer. AI systems not only analyse current data but also simulate future Iranian responses based on historical patterns and prior behaviour, allowing planners to anticipate countermoves.
These models are not infallible. Any error in their underlying assumptions can produce unintended escalation.
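A toy version of such pattern-based forecasting makes the fragility obvious. The model below is nothing more than a frequency count over an invented history of response types; the moment an adversary does something unprecedented, it has no answer.

```python
# A first-order Markov-style predictor over an invented action history.
from collections import Counter, defaultdict

history = ["missile_salvo", "cyber_attack", "missile_salvo",
           "proxy_strike", "missile_salvo", "cyber_attack"]

transitions = defaultdict(Counter)
for prev, nxt in zip(history, history[1:]):
    transitions[prev][nxt] += 1  # learn which action tends to follow which

def predict_next(last_action: str) -> str:
    """Most frequent historical follow-up to the last observed action."""
    if last_action not in transitions:
        return "no precedent"  # exactly where such models break down
    return transitions[last_action].most_common(1)[0][0]

print(predict_next("missile_salvo"))   # -> "cyber_attack" (2 of 3 times)
print(predict_next("naval_blockade"))  # -> "no precedent"
```

An assumption baked into the transition counts becomes an assumption baked into the forecast, which is precisely the escalation risk analysts point to.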
The war on Iran has established a precedent in the absence of any international legal framework to govern it. The questions it raises about accountability for algorithmic errors, the permissible degree of automation in lethal decision-making, and the obligations of AI companies whose systems are used in combat remain unanswered in international law.
"We need clear legal frameworks to govern the use of AI in warfare," says Dr Mahdi, "to ensure compliance with international humanitarian law and the protection of civilians."
Zamani is less optimistic about whether such frameworks will arrive in time. "The rules of war were written for a different kind of battlefield," he told TNA.
Who bears responsibility when an algorithm selects the wrong target: the company that built the model, the military that deployed it, or the commander who accepted its recommendation?
There is no settled answer, but the war on Iran has made these questions more urgent than ever.
Mahmoud Aslan is a freelance journalist and expert on Iranian affairs based between Tehran and Istanbul
This story was published in collaboration with Egab
Edited by Charlie Hoyle