Anthropic Secures Massive Google-Broadcom Compute Deal Worth Tens of Billions for Claude Models
Anthropic has finalized a landmark multi-year compute agreement with Google Cloud and Broadcom that will deliver multiple gigawatts of next-generation processing capacity, including access to as many as one million of Google's custom TPU chips, to power the company's Claude models starting in 2027. The deal underscores the extraordinary scale of infrastructure investment now required to compete at the frontier of artificial intelligence.
Background
Anthropic, founded in 2021 by former OpenAI researchers including CEO Dario Amodei and President Daniela Amodei, has established itself as one of the leading developers of large language models through its Claude series. The company has raised more than $7 billion from investors including Google, Spark Capital, and Salesforce Ventures, and has secured enterprise contracts with major corporations across the financial services, healthcare, and legal sectors.
The compute requirements for training and running frontier-scale models have grown exponentially over the past three years, creating a structural dependency on cloud providers and chip manufacturers that has reshaped the competitive dynamics of the technology industry. Nvidia's H100 and H200 GPUs have been the dominant hardware for this workload, but Google's custom Tensor Processing Units -- now in their eighth generation -- offer a compelling alternative for organizations willing to commit to Google's cloud infrastructure.
Key Developments
The agreement between Anthropic, Google Cloud, and Broadcom covers multiple gigawatts of compute capacity, with the new infrastructure scheduled to come online starting in 2027. Broadcom plays a dual role: it will supply networking components for Google's next-generation data center infrastructure and will facilitate the delivery of Google-designed TPU chips to Anthropic under a separate supply arrangement. Broadcom has also signed a multi-year agreement extending through 2031 to develop and supply Google's custom chips, cementing its position as a critical enabler of Google's infrastructure ambitions.
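As a rough sanity check on the scale described above, gigawatt-level capacity is consistent with an accelerator count in the high hundreds of thousands. The per-chip power draw and overhead factor below are illustrative assumptions, not figures from the announcement:

```python
# Back-of-envelope: how many accelerators fit in a multi-gigawatt power budget?
# All per-chip figures are illustrative assumptions, not disclosed deal terms.

def chips_for_power_budget(total_watts, chip_watts, overhead_pue=1.3):
    """Estimate accelerator count for a facility power budget.

    overhead_pue models cooling and facility overhead (Power Usage
    Effectiveness); 1.3 is a typical assumption for a modern data center.
    """
    usable_watts = total_watts / overhead_pue
    return int(usable_watts // chip_watts)

GIGAWATT = 1e9
# Assume ~1 kW per accelerator including its share of host and networking power.
count = chips_for_power_budget(1 * GIGAWATT, chip_watts=1000)
print(f"~{count:,} chips per gigawatt")  # ~769,230 chips per gigawatt
```

Under these assumptions, a single gigawatt supports roughly three-quarters of a million accelerators, which is why "multiple gigawatts" and "as many as one million TPU chips" describe the same order of magnitude.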
The scale of the deal -- described by sources familiar with the terms as worth tens of billions of dollars over its duration -- reflects the extraordinary capital requirements of frontier model development. Training a single large-scale model can consume tens of millions of dollars in compute costs, and inference -- running the model to respond to user queries -- requires sustained infrastructure at massive scale. Anthropic's Claude models serve enterprise customers across dozens of industries, with usage growing rapidly as businesses integrate the technology into customer service, legal research, and software development workflows.
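The "tens of millions of dollars" training figure can be reconstructed with simple accelerator-hour arithmetic. The fleet size, run length, and hourly rate below are illustrative assumptions, not numbers from the deal:

```python
# Rough training-cost arithmetic: accelerator count x hours x hourly rate.
# All three inputs are illustrative assumptions for a frontier-scale run.

def training_cost_usd(accelerators, days, usd_per_chip_hour):
    """Cost of a training run billed per accelerator-hour."""
    hours = days * 24
    return accelerators * hours * usd_per_chip_hour

# e.g. 10,000 accelerators running for 90 days at $2 per chip-hour
cost = training_cost_usd(10_000, 90, 2.0)
print(f"${cost / 1e6:.0f}M")  # $43M, i.e. "tens of millions" of dollars
```

Inference costs compound on top of this, because serving queries requires the fleet to run continuously rather than for a single bounded training run.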
Google Cloud CEO Thomas Kurian described the partnership as a cornerstone of Google's strategy to build the infrastructure layer for the next generation of enterprise software. Broadcom CEO Hock Tan highlighted the deal as validation of the company's custom silicon strategy, which has positioned Broadcom as the preferred partner for hyperscalers seeking alternatives to Nvidia's off-the-shelf GPU products.
Why Americans Should Care
The Anthropic-Google-Broadcom deal has concrete implications for American workers and communities. Broadcom's chip design operations are concentrated in San Jose, California, and its manufacturing partners -- primarily TSMC -- are expanding US production capacity at a new facility in Phoenix, Arizona, supported by CHIPS Act funding. The data centers that will house Anthropic's compute capacity are expected to be built primarily in the United States, with likely locations in Virginia's data center corridor, Texas, and the Pacific Northwest -- regions that will see significant construction employment and long-term operational jobs. For American enterprises in financial services, healthcare, and legal sectors that rely on Claude for productivity applications, the expanded compute capacity means faster response times and the ability to handle more complex tasks at scale.
Why It Matters
The scale of the Anthropic-Google-Broadcom agreement reflects a broader structural shift in how technology companies compete. The ability to secure massive compute capacity has become as strategically important as software development talent or intellectual property -- a dynamic that favors well-capitalized incumbents and creates significant barriers to entry for smaller competitors. This mirrors the dynamics of the semiconductor industry in the 1990s, when the capital requirements for chip fabrication consolidated the industry around a handful of players.
The current compute arms race is concentrating power among a small number of cloud providers and chip manufacturers, raising questions about market concentration that antitrust regulators in Washington and Brussels are beginning to examine. The US currently leads in both cloud infrastructure and chip design, but the concentration of manufacturing in Taiwan creates a geopolitical vulnerability that the CHIPS Act is only beginning to address. Broadcom's stock has risen more than 40% year-to-date on the strength of its custom silicon business, reflecting investor confidence that this structural shift will persist for years.
What's Next
Anthropic is expected to release Claude 4, its next major model generation, in the second half of 2026, ahead of the new compute infrastructure coming online. The company has indicated it will use the expanded capacity to train models at scales not previously possible, with a focus on improved reasoning and longer context windows. Broadcom is expected to provide updated revenue guidance at its next earnings call in June, with analysts projecting continued strong growth driven by the custom silicon business.
Sources: Anthropic; Data Center Dynamics; Reuters