The State of AI: How war will be changed forever
Helen Warrell, FT investigative reporter
It’s July 2027, and China is about to invade Taiwan. Autonomous drones with AI targeting capabilities stand ready to overwhelm the island’s air defenses, while a series of crippling AI-generated cyberattacks cuts off key power supplies and communications. Meanwhile, a massive disinformation campaign, launched by an AI-powered pro-China meme farm, spreads across global social media, dampening protest against Beijing’s aggression.
Such scenarios have brought a dose of dystopian horror to the debate about the use of artificial intelligence in war. Military leaders hope for a digitally enhanced force that fights faster and more accurately than human-directed combat allows. But there are fears that, as AI takes on an increasingly central role, those same leaders will lose control of a conflict that escalates too quickly and lacks ethical or legal oversight. Henry Kissinger, the former US secretary of state, spent his final years warning of the coming catastrophe of AI-driven warfare.
Assessing and mitigating these risks is the military priority of our time, what some call its “Oppenheimer moment.” One emerging consensus in the West is that decisions about deploying nuclear weapons should not be outsourced to artificial intelligence. UN Secretary-General António Guterres has gone further, calling for an outright ban on fully autonomous lethal weapons systems. It is vital that regulation keeps pace with the evolving technology. But amid the science fiction-fuelled excitement, it is easy to lose sight of what is actually possible. As researchers at Harvard’s Belfer Center point out, AI optimists often underestimate the challenges of fielding fully autonomous weapons systems. It is entirely possible that AI’s capabilities in combat are being overrated.
Anthony King, director of the Strategy and Security Institute at the University of Exeter and a leading proponent of this argument, maintains that rather than replacing humans, AI will be used to improve military insight. Even if the nature of war is changing and remote technology is improving weapons systems, he insists, “the complete automation of war itself is an illusion.”
None of the three main current military use cases for AI involves full autonomy. It is being developed for planning and logistics; for cyber warfare (sabotage, espionage, hacking and information operations); and, most controversially, for weapons targeting, an application already in use on the battlefields of Ukraine and Gaza. Ukrainian forces use AI software to guide drones that can evade Russian jamming as they approach sensitive sites. The IDF has developed an AI-assisted decision-support system known as Lavender, which has helped identify around 37,000 potential human targets in Gaza.
There is an obvious risk that the Lavender database replicates the biases of the data on which it was trained. But military personnel have biases too. One Israeli intelligence officer who used Lavender claimed he had more faith in the fairness of a “statistical mechanism” than in that of a grieving soldier.
Even the tech optimists who design AI weapons deny that specific new controls are needed to keep their capabilities in check. Keith Dear, a former UK military officer who now runs the strategic forecasting company Cassi AI, says existing laws are more than sufficient: “You make sure there’s nothing in the training data that might throw the system off course… you deploy it when you’re confident in it, and you, the human commander, are responsible for anything it might do that goes wrong.”