Denise Dresser, OpenAI’s chief revenue officer, recently outlined the company’s strategic blueprint for outmaneuvering rivals, including Anthropic. In a four-page memo dispatched to employees on Sunday, Dresser detailed OpenAI’s imperative to solidify user engagement and significantly expand its enterprise operations.
The memo, reviewed by The Verge, repeatedly stressed the need to build a protective "moat" around its AI offerings, countering how easily users can switch between competing models based on daily or weekly performance rankings. Dresser, who has taken over many of former COO Brad Lightcap’s responsibilities as he transitions to special projects, also underscored the strategic importance of enterprise clients. This aligns with OpenAI’s recent pivot toward its primary revenue streams and away from what it calls “side quests,” as previously reported by CNBC.
“Multi-product adoption makes us harder to replace,” Dresser asserted, further elaborating, “We should stop thinking like a company with separate product lines. We should think like a platform company with multiple entry points and one integrated enterprise offering.”
Dresser also addressed the escalating competition between OpenAI and its long-standing rival, Anthropic, remarking that “the market is as competitive as I have ever seen it.” While acknowledging that Anthropic’s “coding focus gave them an early wedge,” she cautioned that “you do not want to be a single-product company in a platform war.” The memo additionally accused Anthropic of inflating its stated run rate and labeled its failure to acquire sufficient compute as a “strategic misstep.” Both OpenAI and Anthropic are reportedly planning to go public this year.
Regarding Anthropic’s approach, Dresser wrote, “Their story is built on fear, restriction, and the idea that a small group of elites should control AI.”
OpenAI has consistently promoted itself as advocating for “democratic AI,” emphasizing broader access for individuals, often implying that Anthropic’s enterprise-centric model does the opposite. In February, OpenAI CEO Sam Altman commented, “Anthropic serves an expensive product to rich people.”
***
The System That Will Win Enterprise AI
As the second quarter commences, our focus remains firmly on our customers. My recent engagements with leaders across major enterprises, influential startups, and key venture firms reveal a clear message: there’s considerable enthusiasm for our innovations, coupled with a demand for deeper insight into our roadmap to facilitate confident planning and market leadership.
Enterprise AI is progressing into a more mature phase. While raw capability remains important, it is no longer sufficient. Customers now prioritize "fit"—how effectively AI integrates with their workflows, knowledge bases, controls, and daily operations, and its capacity for reliable deployment, trustworthiness, and continuous improvement. They seek a dependable system upon which to build.
We are actively constructing this system: offering superior models for professional use, a robust platform for agents, deep integration with business contexts, and the capability for scalable deployment and enhancement. Customer validation of this direction is evident through the increasing number of multi-year, multi-product, nine-figure deals, and the expansion of existing customer engagements as they standardize on our capabilities across their organizations.
I am immensely proud of our team’s dedication and performance. We are cultivating trust through the depth, quality, and meticulousness of our work. The opportunity ahead is vast, and our current primary constraint is not demand, but capacity. Consequently, talent acquisition remains a top priority for Q2. We will continue to hire strategically, maintain our high standards, and build a team that embodies the excellence our customers expect and we demand of each other.
We possess all the necessary elements to extend our leadership: ample compute resources, innovative products, and strong customer pull. This is our moment to confidently and clearly articulate why OpenAI is the platform enterprises should trust for building, deploying, and scaling their AI initiatives.
Here are five customer-backed priorities we will concentrate on:
1. Win the model layer for work
Enterprises invest in business outcomes. They pay for models that empower employees to write faster, analyze more effectively, code more productively, enhance customer support, and make superior decisions. Ultimately, they seek higher revenue per employee, accelerated cycle times, reduced support costs, and improved execution.
The "Spud" model represents a significant stride in establishing the intelligence foundation for the next generation of work. Early customer feedback has been overwhelmingly positive. Spud is not only our most intelligent model to date but also delivers on crucial aspects for high-value professional tasks: stronger reasoning, enhanced understanding of intent and dependencies, improved follow-through, and more reliable output in production.
Superior model performance elevates the entire technology stack. Spud is poised to significantly enhance all our key products. It will expand the range of workflows we can manage and provide customers with another compelling reason to consolidate their operations around us. This exemplifies our iterative deployment strategy in practice: pushing the frontier, deploying it into real products, learning from actual usage, and compounding those lessons into better systems on the path to becoming a super app.
Our compute advantage positions us to deliver continuous leaps in capability. Customers are already experiencing this in tangible product terms: higher token limits, lower latency, and more dependable execution of complex workflows. Each advancement in compute enables us to train more robust models, satisfy greater demand, and reduce the cost per unit of intelligence, providing a durable business advantage.
2. Win the agent platform layer
The market has evolved from simple prompts to sophisticated agents, presenting a monumental opportunity for us.
Customers require systems that can reason, utilize tools, operate across diverse workflows, and perform reliably within authentic business environments. This necessitates robust orchestration, control, observability, security, integration, and governance capabilities.
Our "Frontier" platform allows us to dominate this agent platform layer. We must position Frontier as the default platform for enterprise agents—the core intelligence layer that enterprises leverage to build, deploy, manage, and scale their AI systems.
This is where our advantage can truly compound. Frontier directly links model intelligence to agent performance. As our models improve, the platform’s value increases. As the platform becomes more deeply embedded, switching costs for customers rise. As customers process more workflows through our system, OpenAI becomes increasingly indispensable and central to how work is accomplished.
This strategy is how we will transition from being merely a product vendor to becoming essential operating infrastructure.
3. Expand the market through Amazon
Our partnership with Microsoft has been fundamental to our success. However, it has also inherently limited our ability to engage enterprises where they currently operate, which for many is within the Amazon Bedrock ecosystem.
Since announcing our partnership with Amazon at the end of February, the inbound demand from our customers for this offering has been frankly staggering. We are working at full throttle to establish this as a scaled distribution channel.
The Amazon Stateful Runtime Environment is crucial because it simultaneously expands access and upgrades the product experience. By enabling memory, context, and continuity across interactions, we move beyond stateless model access towards systems that can operate reliably over time and across complex business processes.
This initiative will expand our market in three key ways: First, it reduces adoption friction for AWS-native customers. Second, it strengthens our position with regulated and security-sensitive buyers by operating within their existing AWS environments and governance models. Third, it further integrates our platform, from model access to production runtime, for long-running, multi-step agents.
4. Sell the full AI-native stack
Customers desire a cohesive platform, not disparate point solutions. This is precisely what we offer: ChatGPT for Work serves as the entry point for knowledge workers, Codex provides the system for software and agentic development, the API acts as the engine for embedded intelligence within customer products and workflows, Frontier is our agent platform, and the Amazon runtime extends our reach into production-grade, stateful execution.
This breadth represents a significant strategic advantage because customers do not all commence their AI journey from the same starting point. Some begin with employee-facing tools, others with developer solutions, internal systems, or external products. Our objective is to engage them at their chosen entry point and then facilitate their expansion across our entire stack.
This is the flywheel we are building: superior models drive increased usage, greater usage leads to deeper integration, deeper integration fosters multi-product adoption, and multi-product adoption makes us more difficult to replace.
We must cease thinking of ourselves as a company with separate product lines. Instead, we should envision ourselves as a platform company offering multiple entry points and a single, integrated enterprise solution.
5. Win enterprise deployment at scale
The most significant bottleneck in enterprise AI is no longer whether the technology works, but whether companies can deploy it at scale.
“DeployCo” presents us with an opportunity to convert product demand into repeatable enterprise transformation. It will serve as a deployment engine, helping companies to demonstrate value more rapidly, mitigate risk, and scale adoption throughout their organizations.
This initiative can become a force multiplier across all our other endeavors. It accelerates customer progress, sharpens our feedback loops, surfaces repeatable deployment patterns, and simultaneously enhances product development, sales, and customer success. Moreover, alongside our Frontier Alliance partners, it provides a credible path to scale execution across the market.
The companies that ultimately dominate enterprise AI will not merely possess the best models. They will also demonstrate the superior ability to deploy those models into real workflows, within real organizations, delivering measurable value. We must strive to be the best in the world at this.
A note on the competitive landscape
The market is experiencing the most intense competition I have ever witnessed. I believe this is ultimately a positive development: it signals how large and important the opportunity is. It can undoubtedly be noisy, volatile, and distracting at times, but competition inspires us, will make us all better, and, most importantly, will benefit our customers. To that end, as you have heard me say many times, our paramount focus should be spending time with our customers. When we spend time with our customers, listen to their challenges and ambitions, and concentrate on how we can invest in and support them, everything else quiets down and comes into sharp focus.
With that in mind, here are a few points worth considering, particularly regarding Anthropic:
- Their narrative is founded on fear, restriction, and the notion that a select group of elites should control AI. Our positive message—to build powerful systems, implement appropriate safeguards, expand access, and empower people to achieve more—will ultimately prevail.
- Their strategic misstep in failing to acquire sufficient compute is now manifesting in their product. Customers are experiencing this through throttling, reduced availability, and a less reliable experience. We recognized the exponential compute curve earlier, acted on it more swiftly, and now possess a substantial structural advantage.
- While their coding focus provided an early advantage, you do not want to be a single-product company in a platform war. As AI extends beyond developers into every team, workflow, and industry, that narrow specialization can become a significant liability.
- Their reported run rate is inflated. They employ accounting treatments that make revenue appear larger than it is, including grossing up revenue share with Amazon and Google. Our analysis indicates this overstates their run rate by approximately $8 billion (based on their stated $30 billion). We report Microsoft revenue share net, which is more aligned with the standards we would adhere to as a public company.
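The gross-versus-net claim above comes down to simple subtraction: if revenue share owed to cloud partners is reported as part of revenue ("grossed up") rather than netted out, the stated run rate is larger by exactly that amount. A minimal sketch of the arithmetic, using the approximate figures cited in the memo (the function name and figures are illustrative, not audited data):

```python
def net_run_rate(gross_run_rate: float, cloud_revenue_share: float) -> float:
    """Run rate after backing out revenue share passed through to cloud partners.

    Grossing up (reporting the full amount) versus netting (subtracting the
    partner's share) is what the memo claims accounts for the difference.
    """
    return gross_run_rate - cloud_revenue_share

# Memo's claim: a stated ~$30B run rate includes ~$8B of partner revenue share
stated_gross = 30e9
partner_share = 8e9
print(net_run_rate(stated_gross, partner_share))  # ~$22B on a net basis
```

Under this reading, the two companies' figures are not directly comparable unless both are restated on the same (gross or net) basis.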
Finally, one of the most rewarding aspects of our work is the people we collaborate with. I am incredibly proud of this company and our team. It is a privilege to work alongside all of you and to be at the epicenter of the future during this pivotal moment. Let us all maintain our focus, operate as one cohesive team, strive for the highest level of excellence, and pull together in the same direction.
The market is ours to win; let’s execute accordingly.
The Editorial Staff at AIChief is a team of professional content writers with experience in AI and marketing.