Google is officially dividing its next-generation TPUv8 into two purpose-built processors to handle the distinct demands of artificial intelligence. The chips are expected to be formally unveiled this week at the Google Cloud Next 2026 event in Las Vegas. Google is partnering with Broadcom to design a high-performance training accelerator and with MediaTek for a cost-optimized inference chip.
Why Google Splits TPUv8 for AI Workloads
The decision to split the silicon highlights a major shift in the data center industry. Tech giants are realizing that a one-size-fits-all approach is no longer efficient for scaling heavy artificial intelligence workloads.
The new dual lineup replaces the current 2025 TPUv7 “Ironwood” series. By separating the two architectures, Google can optimize raw matrix throughput for massive training runs while simultaneously maximizing power efficiency to lower the cost per inference.
The Sunfish and Zebrafish Chips Explained
According to industry and supply chain reports, the upcoming TPUv8 family is composed of two primary variants:
TPUv8t (Codename "Sunfish"): Designed by Broadcom, this accelerator is engineered strictly for high-performance AI model training. It prioritizes massive high-bandwidth memory (HBM) capacity and complex multi-socket coherency.
TPUv8i (Codename "Zebrafish"): Developed by MediaTek, this chip focuses entirely on efficient AI inference. It optimizes die-area and latency-focused I/O for cost-effective cloud and edge deployments.
Both processors will be tightly integrated with Google’s custom Axion Arm CPUs (based on the Neoverse N3 architecture). They will also rely heavily on advanced packaging techniques like CoWoS (Chip-on-Wafer-on-Substrate) and dense HBM stacks. Additionally, Google is reportedly in talks with Marvell Technology to develop an associated memory processing unit (MPU) to further offload system memory requirements.
Impact on the Semiconductor Supply Chain
This strategic pivot accelerates the industry's move away from entirely in-house ASIC development, shifting toward a partner-driven model to ensure global scale.
The massive incoming orders for Google's expanding AI ecosystem are projected to squeeze an already constrained semiconductor supply chain. Manufacturers of advanced wafer-level interposers, optical switches, and liquid cooling infrastructure are expected to see surging demand throughout the remainder of the decade as data centers retool for these specialized chips.
FAQs
What is the difference between Google TPUv8t and TPUv8i?
The TPUv8t (Sunfish) is a high-performance accelerator designed by Broadcom specifically for training massive AI models. The TPUv8i (Zebrafish) is a MediaTek-designed chip optimized strictly for cost-effective and power-efficient AI inference.
When will Google release the TPUv8?
Google is expected to officially detail the TPUv8 architecture at the Google Cloud Next 2026 conference, taking place April 22–24 in Las Vegas.
Who is manufacturing the Google TPUv8?
While Google owns the ecosystem and intellectual property, Broadcom is handling the design of the training-focused TPUv8t, and MediaTek is designing the inference-focused TPUv8i.