In modern doctrines, the presence of a human in the decision-making loop is supposed to guarantee ethical and strategic control of military operations. Yet as artificial intelligence systems gain speed, precision, and autonomy, this promise crumbles. The illusion of a human as final arbiter masks a silent shift toward dependence on algorithms whose logic sometimes outpaces that of command.
A loop faster than humans
Contemporary armies no longer reason solely in terms of superior firepower but in terms of speed of execution. The famous OODA loop (observe, orient, decide, act), theorized by Colonel John Boyd, accelerates under the effect of AI-powered analysis tools. Whoever completes the loop faster disorganizes the opponent, gains the upper hand, and wins. In this logic, the limiting factor becomes the human.
AI targeting systems, such as Project Maven in the United States, drastically reduce the time between the detection of a threat and its neutralization. The human role increasingly amounts to the express validation of proposals issued by the machine. When the algorithm announces that one option has a 90% chance of success against 10% for another, free will becomes a formality. The human decision still exists, but its real relevance is already compromised.
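To make this dynamic concrete, here is a minimal, purely illustrative sketch of such a validation gate, in Python; the names Recommendation and request_validation are invented for the example, and nothing here refers to any real system. The machine scores the options and the human is asked only to confirm the top-ranked one.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    option: str
    success_estimate: float  # machine-estimated probability of success

def request_validation(recs: list[Recommendation]) -> Recommendation | None:
    """Present machine-ranked options and ask the human to confirm the best one.

    The human nominally decides, but the interface frames the choice:
    only the top-ranked option is offered for express validation.
    """
    ranked = sorted(recs, key=lambda r: r.success_estimate, reverse=True)
    best = ranked[0]
    print(f"Recommended: {best.option} ({best.success_estimate:.0%} estimated success)")
    for alt in ranked[1:]:
        print(f"  alternative: {alt.option} ({alt.success_estimate:.0%})")
    answer = input("Validate recommended option? [y/N] ")
    return best if answer.strip().lower() == "y" else None

# With a 90% option facing a 10% alternative, refusal is formally possible
# but hard to defend: the "decision" has already been framed by the ranking.
choice = request_validation([
    Recommendation("option A", 0.90),
    Recommendation("option B", 0.10),
])
```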
Already filtered information
The myth of the human "in control" rests on the hypothesis of raw information being gathered, analyzed, then transformed into a decision. Yet the entire information chain is now pre-processed by automated systems: radars, sensors, drones, GIS platforms, predictive simulations. The human consults interfaces, augmented maps, correlation tables, all produced by AI that selects, prioritizes, and sometimes interprets. What the decision-maker sees is already an algorithmic construction.
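As a minimal sketch of this input-side mediation (illustrative Python only; the Detection class, the confidence threshold, and the top_k parameter are invented for the example), consider a pipeline that filters and ranks raw detections before a human ever sees them:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    source: str        # e.g. "radar", "drone", "satellite"
    confidence: float  # model-assigned confidence score
    label: str         # model-assigned interpretation

def prepare_display(raw: list[Detection], min_confidence: float = 0.6,
                    top_k: int = 3) -> list[Detection]:
    """Filter, rank, and truncate raw detections for the operator's screen.

    Everything below the confidence threshold, and everything beyond the
    top_k slots, silently disappears before any human judgment applies.
    """
    kept = [d for d in raw if d.confidence >= min_confidence]
    kept.sort(key=lambda d: d.confidence, reverse=True)
    return kept[:top_k]

feed = [
    Detection("radar", 0.92, "fast mover"),
    Detection("drone", 0.55, "unidentified vehicle"),  # dropped by the threshold
    Detection("satellite", 0.71, "convoy"),
]
for d in prepare_display(feed):
    print(f"{d.label} via {d.source} ({d.confidence:.0%})")
```

The design choice is the point: the threshold and the ranking are decisions too, made long before the operator looks at the screen.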
On the output side, the human's orders pass through digital systems that can themselves modify, translate, or optimize the instructions. Human action, at both the input and the output of the system, is mediated, supervised, encoded. This does not mean the erasure of the human, but rather a derealization of the human's place in the decision-making process.
The weakened ethical argument
The argument of human morality against the cold indifference of the machine is often advanced to justify keeping a human in the loop. Yet no war crime has ever been committed by a machine; all have been committed by humans. The human, far from being an ideal moral agent, is subject to biases, emotions, and political or tactical pressures.
Moreover, the dilemmas posed to machines are rarely posed to humans. We tolerate human imperfection without questioning it structurally, whereas the slightest algorithmic error triggers radical scrutiny. This moral double standard becomes a strategic handicap.
An unstable architecture of responsibility
The progressive erasure of direct human intervention also complicates the question of responsibility. Who bears the fault if an autonomous system commits an unlawful strike? The software designer? The data supplier? The operator who failed to cancel the order? The command that validated the deployment? No single link is clearly identifiable. The chain of responsibility dissolves into a technical architecture, at the risk of making accountability impossible.
This legal instability is all the more critical because armies operate in an environment governed by international humanitarian law, which requires that every military action be assessable through the prism of proportionality, distinction, and necessity.
The temptation of moral abandonment
As war becomes less politically costly, thanks to drones, robotization, and the absence of friendly casualties, the temptation to enter a conflict increases. If an army can strike hard and fast without sending soldiers, and without having to justify each human loss, the psychological barrier to engagement falls. AI is not only a tool of war; it modifies the political economy of the decision to go to war.
This is precisely what experts in military ethics fear: a trivialization of the use of force, facilitated by the physical and moral distance of decision-makers and reinforced by systems whose inner workings they do not always understand.
Toward control by design
Faced with these risks, several avenues are being explored. One of the most robust is to integrate ethics from the design stage onward ("ethics by design"): transparency of algorithms, explainability of decisions, auditability, traceability. Added to this is a massive need to train military operators in digital culture, so that the human decision remains informed, conscious, and in control.
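In software terms, auditability and traceability by design can be as simple as recording every machine recommendation together with the human decision that followed. A minimal illustrative sketch in Python; the AuditLog class and all field names are invented for the example:

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class DecisionRecord:
    timestamp: float     # when the recommendation was issued
    model_version: str   # which algorithm produced it
    inputs_digest: str   # hash or summary of the data the model saw
    recommendation: str  # what the machine proposed
    human_decision: str  # "approved", "rejected", or "modified"
    operator_id: str     # who validated, for accountability

class AuditLog:
    """Append-only trail so every automated decision can be reconstructed later."""

    def __init__(self, path: str):
        self.path = path

    def record(self, rec: DecisionRecord) -> None:
        with open(self.path, "a") as f:
            f.write(json.dumps(asdict(rec)) + "\n")

log = AuditLog("decisions.jsonl")
log.record(DecisionRecord(
    timestamp=time.time(),
    model_version="model-v2.3",
    inputs_digest="sha256:ab12...",
    recommendation="option A",
    human_decision="approved",
    operator_id="op-017",
))
```

The point is less the code than the design choice it embodies: traceability is cheap when built in from the start and nearly impossible to retrofit afterward.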
Initiatives such as the Defence Ethics Committee in France, or the efforts of the European Defence Agency to establish shared standards, move in this direction. But they remain fragile in the face of international competitors who share neither the same doctrines nor the same scruples.
Staying in the loop is no longer enough
The presence of a human in the military decision-making loop is today more a political imperative than an operational reality. AI upsets the conditions of war, redefines the temporality of action, and dilutes responsibility. Maintaining an illusion of control without adapting our doctrines would amount to confusing the interface with sovereignty. The stake is not so much to remain "in the loop" as to control the conditions under which that loop operates.