At a U.S. military installation in central California, four-seater all-terrain vehicles navigate challenging hillside trails. This activity is not a conventional training exercise for the human operators; rather, it is a focused effort to train advanced AI models for deployment in dynamic conflict zones.
These autonomous military ATVs are built by Scout AI, a startup founded in 2024 by Coby Adcock and Collin Otis that calls itself a "frontier lab for defense." The company recently announced a $100 million Series A led by Align Ventures and Draper Associates, following a $15 million seed round closed in January 2025.
Scout invited TechCrunch for an exclusive look at its training operations, conducted at a military base the company asked not to identify.
The core of Scout's mission is the development of an AI model named "Fury," designed to operate and command military assets. Initially, Fury will focus on logistical support, with a future trajectory towards autonomous weapon systems. Collin Otis, the company's CTO, draws a parallel between this work, which leverages existing large language models (LLMs), and the rigorous training of human soldiers.
Otis elaborated on this approach to TechCrunch, stating, “They start when they’re 18 years old, and sometimes they even start after college, so you want to start with that base level of intelligence. It’s useful to start with someone who’s already made an investment and then say, hey, what do I have to do to teach this thing to be an incredible military AGI, versus just being a broadly intelligent AGI?”
Scout has already won $11 million in military technology development contracts from DARPA, the Army Applications Laboratory, and other Department of Defense clients. It is one of 20 autonomy firms whose technology the U.S. Army’s 1st Cavalry Division is using during its routine training cycles at Fort Hood in Texas; the unit is expected to take proven Scout products on its next deployment in 2027.
Scout’s internal testing emphasizes real-world conditions: the vehicles are put through their paces on the base’s demanding hilly terrain, where the company’s operations team, made up of former soldiers, runs them through simulated missions.
While autonomous vehicles are increasingly visible in cities around the world, they typically operate in structured settings governed by clear rules. Driving autonomously on unmarked trails or off-road is a far harder problem. Otis, previously an executive at autonomous trucking company Kodiak, said he started Scout after realizing that the systems he helped build there weren’t intelligent enough for unpredictable war zones.
Scout is betting on a newer autonomy technology known as Vision Language Action models, or VLAs, which are rooted in LLMs and designed for robotic control. First introduced by Google DeepMind in 2023, VLAs have since spawned several robotics startups, including Physical Intelligence and Figure.AI, the humanoid robot company led by Adcock’s brother, Brett.
Coby Adcock, a board member at Figure.AI, credits his experience there with convincing him of the immense potential to infuse greater intelligence into the military's expanding fleet of autonomous vehicles. His brother introduced him to Otis, who was then advising Figure, leading to their collaboration on applying advanced AI to military solutions.
Otis offered an analogy for why VLAs are such a leap, explaining, “If I handed you the controller of a drone right now and I strapped a headset on you, you could learn to fly that thing in minutes. You’re actually just learning how to connect your prior knowledge to these couple little joysticks. It’s not a big leap. That’s the way to think about VLAs and why they’re such an unlock.”
Indeed, this reporter had a chance to drive one of Scout’s ATVs on the rugged trails, through steep inclines, loose sand on turns, vanishing tracks, and confusing intersections. Despite having little ATV experience, I managed a respectable run. That kind of transferable, general intelligence is what the company aims to embed in its models, which at the time had been trained on these ATVs for only six weeks, after an initial phase with civilian vehicles.
Riding in the ATV under autonomous control felt distinctly different: it accelerated harder than a human driver, who might prioritize passenger comfort, would. The operations team pointed out how the vehicles kept to the right on wider paths but held the center on narrow ones, mirroring the behavior of their human trainers. When uncertain, a vehicle would abruptly slow down to deliberate on its next move, something that happened a few times during a 6.5 km loop back to base.
VLAs are nascent enough that no company has yet deployed them in an operational setting. Still, Stuart Young, a former DARPA program manager specializing in ground vehicle autonomy, said that “the technology is good enough to be doing that experimentation in the field with soldiers to figure out how to most be effective to US forces.” And like other autonomy companies, Scout combines VLAs with deterministic systems and other AI approaches in its full autonomy stack.
Young recently departed DARPA to join Field AI, having previously managed the RACER program. This initiative challenged companies to develop high-speed, autonomous off-road vehicles, akin to how DARPA's Grand Challenge propelled self-driving cars. Two companies in this domain, Field AI and Overland AI, emerged directly from the RACER program, with Scout joining as a later participant.
According to Scout executives and military technologists, the initial applications of ground autonomy will center on automated resupply: transporting water or ammunition to remote observation posts, or forming convoys in which a crewed truck leads six to ten autonomous vehicles, freeing soldiers for more critical tasks. Brian Mathwich, an active-duty infantry officer serving as a military fellow at Scout, recounted a recent exercise in Alaska where he led a resupply convoy in complete darkness, wishing he had autonomous vehicles to help.
Scout primarily defines itself as a software company, dedicated to developing an intelligence layer for military machinery. Its strategic focus is not on manufacturing autonomous vehicles, but rather on building advanced AI capabilities that integrate seamlessly with existing platforms.
Adcock anticipates that the startup’s first widely adopted product will be "Ox," its command and control software. This software, bundled with ruggedized computer hardware (including GPUs, communication systems, and cameras), is designed to empower individual soldiers to orchestrate multiple drones and autonomous ground vehicles using intuitive, prompt-like commands, such as: “Go to this waypoint and watch for enemy forces.”
That software, however, has to be trained on actual vehicles. This is where “Foundry,” the company’s training range at the military base, comes in. Drivers work eight-hour shifts pushing ATVs through various challenges, then use a reinforcement learning system to log instances where human intervention was required; that data is used to refine the AI model. The base commander has even asked that Scout’s ATVs join security patrols.
One key hypothesis Scout is currently testing is whether VLAs, utilizing a relatively limited dataset combined with simulation training, can produce a fully capable driving agent. While the vehicle demonstrates proficiency on established trails, it is not yet prepared for full off-road operation.
Scout is also experimenting with drones for both reconnaissance and weaponized roles, giving them intelligence through vision language models, a multimodal LLM variant.
Scout is developing a system where groups of munition drones would operate in conjunction with a larger "quarterback" platform. This central platform would provide enhanced computational resources to command the swarm. In a typical mission scenario, these drones could autonomously search a designated geographic area for hidden enemy tanks and engage them, potentially without human intervention. Otis argues that the conventional alternative in such situations, indirect artillery fire, is inherently less precise compared to targeted drone strikes.
While autonomous weapons remain a contentious issue within defense technology policy, experts note that the underlying concept is not new, citing heat-seeking missiles and mines that have been in use for decades. Jay Adams, a retired U.S. Army Captain who leads Scout’s operations team, informed TechCrunch that for technologists, the crucial question revolves around how these weapons are controlled.
Adams highlighted that Scout’s munitions drones can be programmed to engage threats only within specified geographic areas or solely with explicit human confirmation. He also pointed out that autonomous weapon platforms are unlikely to act based on fear, a potential factor in the decisions of an eighteen-year-old soldier.
VLAs also hold promise for targeting. Scout says its models are pretrained on specialized military data to prepare them for specific scenarios, such as encountering an enemy tank during a resupply mission. Lt. Col. Nick Rinaldi, who oversees Scout’s initiatives for the Army Applications Laboratory, says that while automated targeting remains challenging and is unlikely to see widespread use outside constrained environments in the near term, the ability of VLAs to reason about threats makes them a promising technology to investigate.
Adams asserts that the capacity for drones to autonomously identify their own targets is fundamental to future warfare. While Russia’s invasion of Ukraine has significantly amplified interest in drone warfare, he believes that reliance on humans operating individual UAVs lacks the necessary scalability for the U.S. to effectively counter a large number of low-cost unmanned systems should they pose a threat to U.S. forces.
Like many defense startups, Scout is transparent about its mission, with executives openly criticizing companies that hesitate to offer their technology to the government. For instance, Google reportedly withdrew from a Pentagon competition to develop control systems for autonomous drone swarms, a capability that Scout is actively pursuing.
“The AI people don’t want to work with the military,” Otis frankly told TechCrunch, referencing Anthropic’s public disagreement with the Pentagon over its terms of service. He added, “None of them are open to running agents on one-way attack drones, or running agents on missile systems.”
Nevertheless, Scout currently uses existing LLMs as the foundational layer for its agents, though it won’t say which ones. Otis confirmed that the company has agreements with “very well known hyperscalers” to source the pretrained intelligence for Scout’s core foundation model. He declined to say whether Scout uses open-weight models, including those from Chinese companies; many companies that rely on AI inference build on such models because they are cheaper than those from frontier labs like Anthropic or OpenAI.
Scout plans to address this by building its own model from the ground up in the coming years, with the founders saying a substantial portion of its capital will go toward the associated training and compute costs. Otis even muses that Scout could beat existing leaders to AGI, thanks to its model’s continuous interaction with the real world.
As Otis put it: “There’s an argument in the AGI community along the lines that you can only get so intelligent by reading the internet, and most intelligence comes with interacting in the world.”
When asked if this implied competition with his brother’s army of humanoid robots at Figure, Otis responded negatively. He clarified, “we can get to scale much faster because our customer has assets,” referring to the vast resources and infrastructure available through the Pentagon.