
AI has entered the war room, and it’s not going anywhere anytime soon, according to experts.
Despite President Donald Trump telling federal agencies and military contractors to stop doing business with Anthropic, the US military reportedly used the company’s AI model, Claude, in its attack on Iran, according to The Wall Street Journal.
Now, some experts are raising concerns about the use of AI in war operations. “The AI machine makes recommendations on what to focus on, which is actually faster in some ways than the speed of thought,” Dr. Craig Jones, author of The War Lawyers: The United States, Israel, and Juridical Warfare, which examines the role of military lawyers in modern warfare, told The Guardian.
Jones, a lecturer in war and conflict at Newcastle University, said AI greatly speeds up the “kill chain,” compressing the time from initial target identification to final destruction. He said the US-Israel attack on Iran, which resulted in the death of Ayatollah Ali Khamenei, would not have happened without AI.
“It’s impossible, or almost impossible, to do it that way,” Jones said. “The speed it does, and the size and the number of strikes, I think is AI-enabled.”
The Pentagon has sought help from AI companies to speed up and improve war planning, entering a partnership with Anthropic in 2024 that collapsed last week over disagreements about military uses of the company’s AI model, Claude. OpenAI, meanwhile, readily inked its own deal with the Pentagon, and Elon Musk’s xAI reached an agreement to deploy its AI model, Grok, on classified systems. The US Army is also using software from data-mining firm Palantir for AI-enabled insights in decision-making.
AI on the battlefield
Jones said the US Air Force has used “speed of thought” as a benchmark for decision-making for years. He said the time that passed from gathering intelligence, such as aerial reconnaissance, to executing a bombing mission could be as long as six months during WWII and the Vietnam War. AI has greatly compressed that timeline.
The key role of AI tools in the war room is to quickly analyze large amounts of data. “We’re talking terabytes and terabytes and terabytes of data,” Jones said, “everything from aerial imagery, human intelligence, internet intelligence, mobile phone tracking, anything and everything.”
Dr. Amir Husain, co-author of Hyperwar: Conflict and Competition in the AI Century, said AI is being used to compress the US military’s decision-making framework, known as the OODA loop, an acronym for observe, orient, decide, and act. He said AI is already playing a major role in observation, the interpretation of satellite and electronic data; in tactical-level decision-making; and in the “act” phase, especially through autonomous drones that must operate without human guidance when signals are jammed. Some of those drones are effectively copies of Iran’s own autonomous Shahed drone.
AI is showing up on other battlefields as well. Israel is reportedly using AI to identify Hamas targets during the Israel-Hamas war, and autonomous drones are on the front lines of the Russia-Ukraine war, with both Russia and Ukraine using some variation of autonomous technology.
Multiplication of risks
However, Jones raises several concerns about AI-enabled warfare. “The problem when you add AI to that is you multiply, by orders of magnitude I would argue, the levels of error,” Jones said.
To be sure, Jones said, human error exists with or without AI technology, citing the 2003 US invasion of Iraq as a conflict built on faulty intelligence gathering. But he said AI could exacerbate such mistakes because of the sheer volume of data the technology analyzes.
AI warfare also raises a set of ethical questions, particularly around accountability, which Husain said the Geneva Conventions and the laws of armed conflict require states to uphold. With AI blurring the line between machine-level and human-level decision-making, he said the international community must ensure that human responsibility is assigned for every action on the battlefield.
“The laws of armed conflict require us to blame the person,” Husain said. “Humans must be held accountable no matter what level of automation is used on the battlefield.”
