Anthropic's annualised revenue run rate has surpassed $30 billion, up from roughly $9 billion at the end of 2025. Alongside that disclosure, the company announced a deal with Google and Broadcom for 3.5 gigawatts of next-generation TPU capacity starting in 2027 — the single largest compute commitment any AI startup has made to date.
These aren't abstract infrastructure numbers. They signal that enterprise demand for AI tools is scaling at a pace that will reshape pricing, availability, and capability for every business that depends on them. If you're using Claude, or any frontier AI model, the economics underpinning your tools are shifting fast.
What actually happened
On 6 April, Anthropic published a blog post confirming the expanded partnership. The same day, Broadcom filed a securities disclosure with the SEC detailing two linked agreements: a long-term deal to develop and supply future generations of Google's TPUs through 2031, and a supply assurance agreement routing 3.5 gigawatts of that TPU capacity to Anthropic starting in 2027.
"This groundbreaking partnership with Google and Broadcom is a continuation of our disciplined approach to scaling infrastructure: we are building the capacity necessary to serve the exponential growth we have seen in our customer base," said Krishna Rao, CFO of Anthropic.
The 3.5 gigawatts is in addition to 1 gigawatt already coming online this year under a Google Cloud agreement announced last October. To put the scale in perspective: 3.5 gigawatts is roughly the output of three large nuclear power stations, dedicated entirely to running AI models.
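The plant comparison is easy to sanity-check. As a back-of-envelope sketch (assuming a typical large nuclear power station outputs roughly 1.1 gigawatts, a figure not stated in the announcement):

```python
# Back-of-envelope check of the "three nuclear plants" comparison.
# ASSUMPTION: a large nuclear power station produces ~1.1 GW of
# electrical output; actual plant sizes vary.
PLANT_OUTPUT_GW = 1.1

new_tpu_capacity_gw = 3.5       # Broadcom/Google TPU commitment from 2027
existing_capacity_gw = 1.0      # Google Cloud agreement already coming online

# Number of plant-equivalents represented by the new commitment
plant_equivalents = new_tpu_capacity_gw / PLANT_OUTPUT_GW
print(f"New capacity ≈ {plant_equivalents:.1f} large plants")

# Combined capacity once both deals are fully online
total_gw = new_tpu_capacity_gw + existing_capacity_gw
print(f"Total committed capacity: {total_gw} GW")
```

Under that assumed plant size, 3.5 gigawatts works out to a little over three plant-equivalents, consistent with the rough comparison above.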
The vast majority of the new infrastructure will be sited in the United States, extending Anthropic's $50 billion American AI infrastructure commitment from November 2025. Amazon Web Services remains Anthropic's primary cloud and training partner through Project Rainier, and Claude remains the only frontier AI model available on all three major cloud platforms: AWS Bedrock, Google Cloud Vertex AI, and Microsoft Azure Foundry.
The revenue story is the real headline
The compute deal is massive, but the revenue trajectory is what matters most for understanding where enterprise AI is heading.
Anthropic's $30 billion run rate means it has more than tripled revenue in roughly four months. The company disclosed that more than 1,000 business customers now spend at least $1 million annually on Claude — double the 500 reported during its Series G fundraising in February. Enterprise products drive approximately 80% of total revenue, according to Sherwood News.
A significant chunk of that enterprise adoption is being driven by Claude Code, Anthropic's code-generation tool, which has reached $2.5 billion in annualised recurring revenue — up from $1 billion in Q4 2025. That's a developer tool reaching, within 18 months, a revenue figure most enterprise software companies take a decade to achieve.
Meanwhile, Epoch AI projects that Anthropic could surpass OpenAI in annualised revenue by mid-2026. OpenAI's run rate actually declined from $25 billion to $24 billion in early 2026, while Anthropic accelerated. The competitive dynamics of the AI industry are shifting in real time.
The risk Broadcom flagged — and why it matters
Buried in Broadcom's SEC filing is a notable caveat: "The consumption of such expanded AI compute capacity by Anthropic is dependent on Anthropic's continued commercial success."
As The Register's analysis pointed out, that language exists because the financial arrangements required to deploy 3.5 gigawatts of custom silicon represent genuine risk. This isn't a pre-paid deal — it's a bet on continued exponential growth. If enterprise demand stalls, the infrastructure commitments become liabilities.
Anthropic also faces a legal dispute with the US government after the Pentagon classified it as a "supply chain risk" over its safety guardrails. More than 100 businesses have reportedly contacted Anthropic expressing doubt over their ability to continue working with the company. Anthropic Chief Commercial Officer Paul Smith told Bloomberg that some customers respected that the company "demonstrates its principles," but the situation remains unresolved.
For business owners, this is a useful reminder: the AI tools you're building workflows around exist within a volatile competitive and regulatory environment. Vendor concentration carries real risk.
What this means for your business
If you're running a company that uses Claude — or any frontier AI model — three things follow from this news.
First, AI tool pricing is likely headed down, not up. The sheer scale of compute being deployed means capacity is growing faster than even the surging demand. Anthropic, OpenAI, and Google are in an infrastructure arms race, and the beneficiary of arms races is usually the customer. Expect enterprise AI pricing to continue falling through 2026 and 2027.
Second, the enterprise AI market is consolidating around a few providers. Anthropic's 1,000-plus million-dollar customers aren't experimenting — they're committing. Deloitte has deployed Claude across 470,000 employees. JPMorgan Chase uses it across 80 trading desks. When organisations of that scale lock in, switching costs climb. If you haven't evaluated which AI provider fits your workflows, the window for easy comparison is narrowing.
Third, the code-generation wave is real and accelerating. Claude Code's $2.5 billion run rate isn't just an Anthropic story — it's evidence that AI-assisted development has crossed from novelty to core enterprise tooling. If your development team hasn't trialled AI coding assistants, you're now behind the curve rather than ahead of it.
What to watch
The next milestone is whether Anthropic actually surpasses OpenAI in revenue by mid-year, as Epoch AI projects. If it does, the narrative shifts from "OpenAI leads, everyone else chases" to a genuine multi-player race — which is better for businesses, because competition drives down cost and drives up capability.
Watch also for whether the Broadcom risk caveat proves prescient. A 3.5-gigawatt compute commitment predicated on continued commercial success is a bold bet. If enterprise adoption plateaus — due to regulation, economic slowdown, or disillusionment — those infrastructure deals become very expensive very quickly.
And for Australian businesses specifically: Anthropic recently signed an MOU with the Australian government on AI safety and research. That's a signal that Claude's enterprise presence in this market is set to deepen. The tools are here. The question is whether your organisation is positioned to use them.
Sources
- Anthropic expands partnership with Google and Broadcom for multiple gigawatts of next-generation compute — Anthropic
- Anthropic reveals $30bn run rate and plans to use 3.5GW of new Google AI chips — The Register
- Broadcom to supply Anthropic with 3.5 gigawatts of Google TPU capacity from 2027 — Tom's Hardware
- Anthropic taps Google and Broadcom for yet more AI chips as revenue run rate tops $30B — SiliconANGLE
- Anthropic could surpass OpenAI in annualized revenue by mid-2026 — Epoch AI
- Claude Code's $2.5B ARR: What the Revenue Milestone Really Means for Builders — Context Studios
