AI now performs work once done by teams of analysts: fusing sensor feeds, flagging anomalies, ranking targets, and recommending actions. This is not inherently troubling, since it can sharpen situational awareness and reduce human workload, but it feeds a persistent conceptual mistake: treating autonomy in a single function as autonomy of the entire system. A radar that filters its own returns is not comparable to a weapons system that decides whether to fire. The two are not ethically or legally equivalent, and conflating them erases lines of responsibility, passing morally critical decisions to code that cannot perceive intent or weigh ethical trade-offs.
Policy has tried to keep pace: the existing Department of Defense Directive and related guidance are necessary steps toward responsible use. But doctrine revised once a year can never match the pace of engineering. Today's AI is more modular, more interconnected, and more easily weaponized than many of the rules now being written anticipate. That technological reality demands more frequent doctrinal refresh, sound operational architecture, high standards of auditability, real-time intervention, and transparent decision trails that let commanders reconstruct why a machine acted as it did.
Operational design must recognize that different functions carry different ethical weight. Automated data aggregation and sensor classification can, and indeed should, be granted wider autonomy, because errors there are usually correctable and non-lethal. By contrast, any function that effectively narrows the scope of human judgment, such as target designation, engagement recommendations, or lethal effects, must remain under an explicit human-approval regime. This tiered approach preserves the benefits of speed and scale while placing ultimate moral and legal accountability where it belongs: with human leaders answerable to law and to the public.
Correction and interruption must be engineered into systems, not bolted on as an appendix. Platforms need hard mission cutoffs, predictive error detection, and operator-accessible overrides, with alerts designed so that a human can notice and respond to them under stress. Equally important is verifiable logging that captures sensor data, model outputs, decision rationale, and operator interventions. Without such records, after-action review, legal compliance, and institutional learning rest on anecdote rather than evidence.
Engineering fixes matter no more than institutional ones. Organizations that rely heavily on AI need standing, cross-cutting working groups to continuously assess technological readiness, adversary capability, and emerging risks. Interoperability depends on a shared technical vocabulary across services, partners, and industry. Regularly scheduled doctrine and procurement reviews, together with red-teaming and legal-ethical audits, will keep policy from ossifying around poor design decisions.
Hard trade-offs must also be acknowledged: human intervention introduces latency and friction that can matter operationally. But latency is not necessarily a flaw; it can be a safeguard that gives moral reasoning, proportionality assessment, and legal judgment time to keep up with machine speed. These human capacities cannot, at present, be reduced to probabilities or optimization functions. Preserving them is not nostalgia for hands-on control but a condition for keeping the use of force legitimate in the eyes of the law and the public.
Finally, this is a democratic question as much as a technical or military one. Delegating lethal authority touches public values and international norms, and such choices should not be left to small bureaucratic silos. Engineers, commanders, lawmakers, ethicists, and civil society must come to the table to build guardrails that keep AI a means of human control rather than a replacement for human judgment. If we get this architecture right, with auditable systems, tiered control, encoded professional restraint, and a culture of accountability, AI can extend human capability without destabilizing the moral accountability that must, in every case, remain the domain of humanity.

Dr. Muhammad Fahim Khan
The writer is an assistant professor at the Department of Humanities and Social Sciences, Bahria University, Islamabad.



