This article takes a closer look at Broadcom’s long-term collaboration with Google to develop and supply custom Tensor Processing Units (TPUs) and networking components for Google’s next-generation AI racks through 2031. It also touches on Broadcom’s upgraded partnership with Anthropic, which expands TPU-based compute capacity.
These deals land in the context of Broadcom’s strong quarterly results, the shifting AI hardware landscape, and the supply-chain challenges shaping the sector.
Broadcom-Google deal: Deepening the AI infrastructure partnership
The agreement positions Broadcom as a core supplier of custom TPUs and networking hardware for Google's growing AI rack deployments. Both sides have committed to the partnership through 2031, and a supply-assurance component for critical rack parts signals that both want steady, scalable AI infrastructure.
With this deal, Broadcom grows its presence in Google’s AI stack. The partnership aligns chip, network, and rack hardware under one roof, supporting Google’s push to boost AI services across Gemini, Search, Photos, Maps, and more by using optimized silicon and high-bandwidth interconnects.
Scope and commitments
Key elements of the deal include:
- Custom TPUs developed and supplied for Google’s next-generation AI racks
- Networking components integrated to support high-throughput AI workloads
- Supply assurance for critical rack hardware to minimize outages
- Term through 2031 for long-term planning and investment certainty
Anthropic expansion: scaling Claude with Broadcom’s compute backbone
Broadcom and Google have also expanded their collaboration with Anthropic, the maker of Claude. Starting in 2027, Anthropic will gain access to roughly 3.5 gigawatts of TPU-based AI compute capacity through Broadcom, a major step up in the company's training and inference capacity.
Anthropic says the added capacity will help it scale quickly to meet growing customer demand and advance the Claude family of models. The company is planning major investments in U.S. computing infrastructure to keep pace with that growth.
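To put 3.5 gigawatts in perspective, a rough back-of-envelope calculation can translate power capacity into an order-of-magnitude accelerator count. The per-chip power figure below is an illustrative assumption, not a number disclosed in the deal:

```python
# Back-of-envelope: how many accelerators might 3.5 GW support?
# The ~1.5 kW per chip (including cooling and facility overhead) is an
# illustrative assumption, not a figure from the agreement.
total_power_w = 3.5e9            # 3.5 gigawatts of capacity
watts_per_accelerator = 1500.0   # assumed all-in draw per accelerator

approx_chips = total_power_w / watts_per_accelerator
print(f"~{approx_chips:,.0f} accelerators")  # on the order of millions
```

Even under conservative assumptions, capacity on this scale implies millions of accelerators, which is why long-term supply assurance features so prominently in these agreements.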
Compute capacity, deployment, and investment plans
Highlights include:
- Claude running across a mix of hardware, including AWS Trainium, NVIDIA GPUs, and Google TPUs
- From 2027, access to roughly 3.5 gigawatts of AI-dedicated compute capacity via Broadcom’s TPU ecosystem
- Anthropic’s plan to invest $50 billion in U.S. computing infrastructure
Anthropic also reported a run-rate revenue above $30 billion. That’s a sign of fast-growing demand for AI capabilities and the value of having diverse, scalable hardware to back up the Claude models.
Financial momentum and market context
Broadcom’s latest results show the upside of betting on AI infrastructure. The company posted Q4 revenue of $18.0 billion, with AI semiconductor revenue up 74% year over year.
Management thinks AI chip revenue could hit $100 billion by 2027, which would mean a multi-year boom in silicon-driven AI workloads. CEO Hock Tan pointed out momentum in AI-related semiconductor segments and sounded pretty bullish about Broadcom’s AI hardware portfolio as the company expands across Google, Anthropic, and other partners.
TPUs, AI hardware, and the broader ecosystem
TPUs are application-specific integrated circuits optimized for matrix multiply-and-accumulate operations. They power many Google services, including Gemini, Search, Photos, and Maps.
Anthropic’s Claude runs on a mix of hardware—AWS Trainium, NVIDIA GPUs, Google TPUs—and with Google Cloud expansion in 2025, that mix will only get broader.
The market’s seeing a surge in AI hardware investments, but there’s a looming supply-chain crunch. McKinsey’s analysis warns of possible semiconductor and chemical shortages by 2030, even as the market value heads toward the $1 trillion mark by the end of the decade.
Strategic implications for AI infrastructure
Broadcom, Google, and Anthropic seem to be playing a multi-pronged game here. They’re securing long-term hardware supply, chasing performance guarantees, and scaling high-demand AI models with extra compute capacity.
They’re also pouring investment into U.S. infrastructure to keep up with rapid growth. For anyone working in this space, the shifting landscape—think TPUs, GPUs, and Trainium—really underscores the need for interoperable ecosystems and robust supply chains.
The AI hardware market’s in the middle of a big investment cycle. Production lines are stretching out, and organizations might want to keep an eye on both technical performance and those policy-driven supply-chain twists that could mess with availability and cost down the line.
Here is the source article for this story: Broadcom, Anthropic & Google: Developing Custom AI Chips