A new book critically examines how a contentious collaboration with Silicon Valley has dramatically escalated the speed of modern warfare.
During the initial 24 hours of military operations against Iran, the United States armed forces engaged over 1,000 targets, a scale nearly twice that of the "shock and awe" campaign in Iraq two decades prior. This rapid acceleration was facilitated by advanced AI systems designed to streamline the targeting process, with the Maven Smart System being a primary example.
Journalist Katrina Manson, in her new book, *Project Maven: A Marine Colonel, His Team, and the Dawn of AI Warfare*, delves into Maven's evolution. Initiated in 2017 as an experimental effort to apply computer vision to drone footage, the project faced significant employee protests at Google, its initial military contractor, leading to the company's withdrawal. Driven forward by Marine intelligence officer Drew Cukor, whose narrative forms the core of *Project Maven*, the system was ultimately developed by Palantir, leveraging technologies from Microsoft, Amazon, Anthropic, and other firms. Now integrated across US armed forces and recently acquired by NATO, Maven aggregates data from satellite imagery, radar, social media, and numerous other sources to pinpoint and target entities on the battlefield, significantly accelerating what is known as the "kill chain."
Maven integrates computer vision with a sophisticated workflow management system, enabling it to identify targets, pair them with appropriate weaponry, and allow users to swiftly navigate the remaining steps of a targeting cycle. A process that once consumed hours can now be accomplished in mere seconds. An official cited by Manson reveals that this technology has boosted the US capability from striking fewer than a hundred targets daily to a thousand, with the subsequent integration of Large Language Models (LLMs) further increasing this capacity to up to five thousand targets per day.
Tragically, among the thousand targets struck on the first day of the Iran conflict was a girls’ school, resulting in over 150 fatalities, predominantly children. Although the school had previously functioned as part of an Iranian naval base, it was publicly listed online as a school, and satellite imagery clearly showed playgrounds. While initial media reports often speculated about potential "hallucinations" by the AI model Claude, technology historian Kevin Baker argued in *The Guardian* that the more pertinent issue was Maven and the acceleration it enabled. Baker stated, “A chatbot did not kill those children. People failed to update a database, and other people built a system fast enough to make that failure lethal.”
The trajectory suggests a continued acceleration in the speed of warfare. Manson's research uncovers military initiatives focused on developing fully autonomous weapons, including explosive-laden drone Jet Skis, designed to independently identify and neutralize targets.
I recently spoke with Katrina Manson to discuss Project Maven and the transformative impact of AI on military operations. This interview has been condensed and edited for clarity.
Colonel Cukor emerged as an early and staunch advocate for AI in military applications. His motivations stemmed from deep frustration with the inadequate intelligence tools available to US military personnel in Afghanistan. He observed that the US effectively re-fought the war repeatedly due to poor information transfer between rotating troops. Cukor was dismayed by the reliance on basic tools like Excel and PowerPoint for data management and envisioned an advanced analytic platform that could deliver critical intelligence directly to frontline operators. A central element of his vision was the concept of "white dots"—interactive map markers infused with comprehensive intelligence, including coordinates, elevations, and known details about a location, which became a driving force behind Project Maven.
Project Maven grew out of an existing, already funded military initiative in 2017, originally focused on applying AI to satellite imagery; it was soon repurposed to analyze drone video footage. The shift was prompted by US planning for a potential conflict with China, driven by the belief that future warfare would unfold too rapidly for human cognition alone. Colonel Cukor's initial proposal concentrated on using AI to analyze drone video, since human analysts could process as little as 4 percent of collected footage. The aim was for AI to effectively replace human observation in analyzing visual data, though the project's scope was always intended to be broader.
Public awareness of Maven first emerged with the Google employee protests in 2018. At the time, Google publicly asserted that the technology would not be used for lethal purposes. Manson's reporting contradicts this, indicating that targeting was always the underlying intention. A Google spokesperson stated that AI-assisted flagging of drone feed images was meant to save lives and was for "non-offensive" uses. Manson's investigation finds that while many US military operators were indeed motivated by a desire to save American lives and minimize civilian harm (which could be considered "non-offensive" in the context of intelligence analysis), the broader and very real purpose was AI-driven target selection for offensive operations. When asked if offensive weapon strikes were intended to be part of Project Maven, an interviewee for the book responded, "yeah, of course, it's not like we're doing it for kicks. The goal of the intel is to take out high-value targets."
Following Google's withdrawal, Palantir stepped in, while Microsoft and Amazon Web Services (AWS) significantly expanded their roles in algorithm development and computational support. Colonel Cukor approached Palantir with his "white dots" concept, outlining a ten-year vision for transforming the US military. At this stage, existing algorithms were largely ineffective and the systems were ill-suited to their purpose, leaving users skeptical of AI and contending with cluttered, distracting displays. Cukor sought Palantir's expertise to develop a user-friendly interface. Palantir was initially reluctant, reportedly skeptical about the future of AI and preferring to focus on data crunching rather than interface design. But Cukor proved highly persuasive, even advising Palantir on how to enhance its reputation within the Department of Defense to secure contracts. While these initial contracts were not highly lucrative, Manson reports that nearly a decade later, the Maven Smart System is set to become a "program of record" by the end of September, with Palantir as the prime contractor, signaling substantial future profitability.
The conflict in Ukraine marked a pivotal moment in the development and practical application of these AI systems, making the direct link between intelligence and operational execution much more explicit. Even before the full-scale Russian invasion, the US 18th Airborne Corps, stationed in Wiesbaden, Germany, rapidly began employing computer vision via the Maven Smart System to track Russian positions and equipment. Initially, the algorithms, trained on desert environments from the Middle East and Afghanistan, struggled to recognize tanks and other features in snowy conditions. To address this, new satellite footage of Russian assets was quickly collected and sent back to the US for rapid algorithm retraining, significantly improving their accuracy in identifying targets. The US then began transmitting what they termed “points of interest” to Ukrainian forces, which were subsequently used to target Russian equipment and personnel. This specific terminology was a deliberate effort by the US to provide support without being perceived by Russia as a direct belligerent in the conflict, distinguishing "points of interest" from "targets" that had undergone a full engagement process. Manson's reporting indicates that at its peak on a single day in 2022, the US relayed 267 such points of interest to Ukraine.
While the US military maintains that no part of the targeting process is "yet automated" because of the critical legal decision required to authorize a strike, the acceleration of the "kill chain" stems from digitizing and streamlining traditionally analog, slow permission-granting procedures that often relied on manual communication. The 18th Airborne Corps, for instance, initially involved human operators at six key stages of targeting: assessing operational approaches and collected data, deciding to fire, communicating that decision, executing the strike, and reporting outcomes. With Maven's AI, the human role has been condensed to just two points: the decision to act and the action itself. Humans now supervise machine decisions during automated data collection, but all subsequent assessments are AI-enabled. Even the National Geospatial-Intelligence Agency (NGA) is now producing intelligence reports entirely generated by AI, untouched by human eyes or hands, signifying a profound shift toward data and system primacy.

The ability to engage numerous targets daily is further enhanced by the Maven Smart System's integration of large language models, including Anthropic's Claude, which Manson reports is accelerating processes. US Central Command (Centcom) itself has confirmed that AI has reduced processes that once took days or hours to mere seconds. While the US asserts that commanders still make the ultimate decision, military ethicists have voiced concerns about the "gamification of war" and the potential for operators to over-rely on AI-generated targets without fully comprehending the underlying data. Conversely, proponents argue that this AI-based system, functioning as an advanced database, provides unprecedentedly well-tagged and auditable data, offering headquarters greater transparency into and accountability for frontline operations.
The extensive US operation in Iran is expected to serve as a critical case study for assessing this platform's data and accountability.
Kevin Baker, the technology scholar, highlighted that while Claude initially received significant blame for the school strike in Iran, the broader issue was the long-term acceleration enabled by AI, which may have eliminated crucial time for deliberation, error detection, or reconciling contradictory intelligence. Internally, the US military is engaged in a significant debate regarding the extent to which they should embrace this acceleration. While some view it as inevitable, others strongly caution that last-minute human assessment remains vital for saving lives. Although these debates are ongoing, the clear trend indicates that the Maven Smart System is becoming a permanent "program of record." Centcom commanders publicly affirm AI's utility even amidst ongoing operations, yet figures like retired Defense Secretary Jim Mattis caution that "targeting is no substitute for strategy," implying that merely hitting many targets does not guarantee victory. A pertinent historical example is the 1999 US strike on the Chinese Embassy in Belgrade, where public analysis revealed the embassy was incorrectly labeled on some maps due to a recent relocation, and attempts to verify the target could not be completed in time. In such scenarios, AI systems present a duality: while digital connectivity could facilitate easier flagging of anomalies and potential errors, an erroneous targeting database could also lead to even quicker target selection without adequate human checks. Therefore, the efficacy of the US military's embrace of AI in the targeting cycle will ultimately depend on the quality and reliability of the data feeding these systems.