Throughout history, identifying and engaging targets during warfare has been a time-consuming process. During much of the Cold War, locating and prosecuting targets beyond the immediate frontline often took days. Technological advancements gradually shortened this timeline, and by the Gulf War of the early 1990s, targets could be processed and engaged within hours. By the early 2000s, targeting systems had improved significantly, reducing the kill chain—the process from identifying a target to executing a strike—to as little as 10-15 minutes.
Despite these advancements, the scale of simultaneous targeting remained a challenge. For instance, Operation Iraqi Freedom in 2003 required a dedicated targeting center, the Combined Air Operations Center (CAOC), staffed by more than 1,000 personnel to process hundreds of targets simultaneously. The process, while effective for its time, was resource-intensive and could not scale efficiently for future conflicts. This realization spurred the development of more advanced solutions.
To compress the kill chain, the U.S. Department of Defense (DoD) launched Project Maven in 2017, an initiative designed to harness artificial intelligence (AI) for military targeting. Unlike traditional systems, Maven uses sophisticated algorithms to analyze vast amounts of reconnaissance data, including video feeds, satellite imagery, and radar signals. The system can identify and categorize objects, distinguishing between tanks, trucks, radars, and other systems with remarkable accuracy. By integrating this data into battlefield command interfaces, Maven not only identifies targets but also recommends optimal strategies for engagement.
Maven’s functionality is akin to mass facial recognition software. Just as such systems can scan crowds at airports to identify a specific individual, Maven processes streams of battlefield data to locate potential threats. This capability relies on a vast database of reference points collected from diverse environments, enabling the system to interpret incomplete or imperfect imagery and assign probabilities to its findings. Human operators remain in the loop, confirming Maven’s suggestions before executing strikes.
Initial versions of Project Maven were tested in controlled environments, where the system demonstrated a significant reduction in the time needed to process and engage targets. As the technology matured, it was deployed in real-world operations, including target identification in Iraq, Syria, and Yemen. During the Russia-Ukraine war, Maven played a pivotal role in processing satellite imagery and relaying intelligence to Ukrainian forces.
One of Maven’s most significant advancements is its integration with ground-moving target indicator (GMTI) satellites. These satellites use radar to detect movement, even through clouds or at night, and can track targets continuously. This capability allows Maven to overcome the limitations of traditional optical systems, ensuring uninterrupted surveillance and identification.
Despite its impressive capabilities, Maven has limitations. A simplified version of the system has produced mixed results in the Russia-Ukraine war. Environmental conditions, such as snow or dense foliage, can hinder its ability to identify targets accurately. In some cases, the system has mistaken vehicles for trees or struggled to distinguish real equipment from inflatable decoys. Human analysts still outperform Maven in certain scenarios, achieving higher accuracy in complex environments; in desert terrain, for example, where the landscape shifts abruptly with the weather, Maven’s accuracy can drop significantly. Another challenge lies in prioritizing targets and recommending appropriate weapon systems. While Maven excels at identification, it is less effective at determining the optimal sequence of attacks or selecting the best weapon for a specific target. These shortcomings highlight the need for further refinement and training of the AI system.
The rapid pace of AI development suggests that the limitations of Maven are likely to diminish over time. With continuous improvements and increased data inputs, future iterations of Maven are expected to surpass current capabilities, potentially revolutionizing warfare even further. The implications of this technology are profound. AI systems like Maven could enable militaries to conduct operations at an unprecedented scale, identifying and prosecuting thousands of targets simultaneously. This capability would be especially critical in large-scale conflicts, where speed and volume of targeting could determine the outcome. For example, a single AI-assisted targeting unit could perform the workload of hundreds of traditional targeting personnel, freeing up human resources for other critical tasks.
The United States is not alone in its pursuit of AI-assisted warfare. China is also developing similar militarized AI technologies, drawing on its expertise in facial recognition and object detection. While the U.S. currently holds an edge in satellite capabilities and data processing, China’s rapid advancements in AI, satellite systems, and its growing investment in military technology indicate that the competition will intensify in the future.
The race to integrate AI into military operations raises ethical and legal questions. As systems like Maven become more autonomous, there is growing concern about the potential for AI to make life-and-death decisions. For now, U.S. officials have stated that human operators will retain control over the decision to fire weapons. However, in the high-pressure environment of a large-scale war, there may be a push to further automate the kill chain to gain a competitive edge.
AI-assisted targeting represents a transformative shift in military strategy, offering unprecedented speed and efficiency. Systems like Project Maven are already reshaping the battlefield, enabling rapid identification and engagement of targets on a massive scale. While challenges remain, the trajectory of AI development suggests that these systems will become increasingly reliable and capable. However, this technological revolution also comes with risks. The potential for misuse, errors, and escalation underscores the need for responsible development and deployment of AI in warfare. As nations race to perfect their systems, the balance between innovation and caution will be critical. Ultimately, while AI may enhance military capabilities, the pursuit of peace remains the only true path to security and stability.
Ahmad Ibrahim
The author is a Research Associate at Pakistan Navy War College, Lahore.