Delivering the world’s first 102.4 terabits per second (Tbps) of switching capacity in a single chip, Broadcom has launched the Tomahawk 6, a product designed to meet the explosive demands of artificial intelligence (AI) infrastructure.
As AI models grow larger and more complex, AI data centers need faster, more efficient networks to keep up. Broadcom says in its press release that the Tomahawk 6 addresses this problem by offering unmatched speed, efficiency, and reliability for the next generation of AI data centers.
The Tomahawk 6 stands out as the world’s first Ethernet switch to deliver 102.4 Tbps of switching capacity on a single chip, double the bandwidth of any Ethernet switch available today and a significant milestone for the industry. With this kind of capacity, the Tomahawk 6 can support up to 64 ports running at 1.6 Tbps each, ushering in the era of Terabit Ethernet ports.
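As a quick sanity check, the headline numbers are consistent: 64 ports at 1.6 Tbps each is exactly 102.4 Tbps. The short Python sketch below works through that arithmetic and a few other illustrative ways the same total could be sliced into ports; only the 64 × 1.6 Tbps breakout comes from the announcement, and the other port counts are assumptions for illustration.

```python
# Back-of-the-envelope check of the Tomahawk 6 headline figures.
# Only the 64 x 1.6 Tbps breakout is stated in the announcement; the
# other rows are illustrative ways to slice the same 102.4 Tbps total.

TOTAL_TBPS = 102.4

breakouts = [
    (64, 1.6),    # 64 ports x 1.6 Tbps (stated in the announcement)
    (128, 0.8),   # 128 x 800 Gbps (illustrative assumption)
    (256, 0.4),   # 256 x 400 Gbps (illustrative assumption)
    (512, 0.2),   # 512 x 200 Gbps (illustrative assumption)
]

for ports, tbps_per_port in breakouts:
    total = ports * tbps_per_port
    ok = "matches" if abs(total - TOTAL_TBPS) < 1e-9 else "does NOT match"
    print(f"{ports:4d} ports x {tbps_per_port * 1000:6.0f} Gbps = {total:5.1f} Tbps ({ok})")
```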
The massive increase in bandwidth enables AI clusters to scale up dramatically, supporting more than a million processing units (XPUs) in a single network. For AI data centers running large-scale AI workloads, such as training advanced language models or powering recommendation engines, this capacity is essential.
AI workloads push data center networks to their limits, demanding near-perfect network utilization and low latency. Traditional networks often operate at just 60-70% utilization, but the Tomahawk 6 is engineered to keep the fabric running at close to 100%, so more data moves per link, bottlenecks shrink, and AI training and inference jobs finish faster.
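To put that utilization gap in concrete terms, the sketch below compares effective throughput at the figures quoted above. The 102.4 Tbps capacity and the 60-70% versus roughly 100% utilization numbers come from the article; treating transfer time as inversely proportional to effective bandwidth is a simplifying assumption for illustration.

```python
# Effective throughput at different fabric utilization levels.
# Capacity and utilization figures come from the article; the
# inverse-proportionality of transfer time to effective bandwidth
# is a simplifying assumption.

capacity_tbps = 102.4

for utilization in (0.60, 0.70, 1.00):
    effective = capacity_tbps * utilization
    print(f"{utilization:.0%} utilization -> {effective:5.1f} Tbps effective")

# Relative time to move the same traffic volume, normalized to ~100% utilization.
baseline = capacity_tbps * 1.00
for utilization in (0.60, 0.70):
    slowdown = baseline / (capacity_tbps * utilization)
    print(f"At {utilization:.0%} utilization the same transfer takes ~{slowdown:.2f}x longer")
```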
The Tomahawk 6 supports a range of network topologies, including scale-up, Clos, rail-only, and torus configurations. This flexibility lets AI data center operators tailor their networks to different types of AI workloads, ensuring that resources are used efficiently and performance remains high.
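For a sense of how topology choice interacts with switch radix, the sketch below applies standard, vendor-independent Clos sizing: a non-blocking two-tier leaf-spine fabric built from radix-k switches attaches roughly k²/2 endpoints, and a three-tier fat-tree roughly k³/4. The 64-port figure comes from the article; the 512 × 200 Gbps breakout is an assumed alternative slicing of the same 102.4 Tbps, used only to illustrate how million-XPU scale becomes reachable.

```python
# Standard non-blocking Clos (fat-tree) sizing, independent of any vendor:
#   two-tier leaf-spine with radix-k switches  -> ~k^2 / 2 endpoints
#   three-tier fat-tree with radix-k switches  -> ~k^3 / 4 endpoints
# The 64-port radix is from the article; the 512 x 200 Gbps breakout is an
# assumption used for illustration.

def two_tier_endpoints(radix: int) -> int:
    return radix * radix // 2

def three_tier_endpoints(radix: int) -> int:
    return radix ** 3 // 4

for radix, port_speed in ((64, "1.6 Tbps"), (512, "200 Gbps")):
    print(f"radix {radix} ({port_speed} ports): "
          f"2-tier ~{two_tier_endpoints(radix):,} endpoints, "
          f"3-tier ~{three_tier_endpoints(radix):,} endpoints")
```

Under these textbook formulas, a three-tier fabric of 64-port switches tops out around 65,000 endpoints, while a 512-way 200 Gbps breakout would cover the million-XPU scale mentioned above with room to spare.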
Another standout feature of the Tomahawk 6 is its Cognitive Routing 2.0 technology, which uses advanced telemetry and dynamic congestion control to monitor network traffic in real time. If congestion or a failure is detected, the chip can reroute traffic in less than 500 nanoseconds, keeping AI jobs running smoothly. Packet trimming and adaptive flow control further enhance performance, especially for complex AI workloads such as reinforcement learning and mixture-of-experts models.
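Broadcom has not published the internals of Cognitive Routing 2.0, so the following is only a conceptual sketch of telemetry-driven adaptive routing in general: each candidate next hop carries a load estimate, and traffic is steered to the least-loaded healthy path. All names, thresholds, and data structures here are hypothetical and are not drawn from the product.

```python
# Conceptual sketch of telemetry-driven adaptive routing in general terms.
# This is NOT Broadcom's Cognitive Routing 2.0 implementation; all names,
# fields, and numbers are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class PathState:
    path_id: int
    queue_depth: int      # hypothetical congestion telemetry (cells queued)
    healthy: bool = True  # cleared when a link failure is detected

def pick_path(paths: list[PathState]) -> PathState:
    """Steer traffic to the least-congested healthy path."""
    candidates = [p for p in paths if p.healthy]
    if not candidates:
        raise RuntimeError("no healthy path available")
    return min(candidates, key=lambda p: p.queue_depth)

# Example: path 1 is congested and path 2 has failed, so traffic shifts to path 0.
paths = [
    PathState(path_id=0, queue_depth=12),
    PathState(path_id=1, queue_depth=480),
    PathState(path_id=2, queue_depth=3, healthy=False),
]
print("selected path:", pick_path(paths).path_id)
```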
In addition, the Tomahawk 6 is fully compliant with the latest Ultra Ethernet Consortium (UEC) specifications, so it can interoperate seamlessly with other Ethernet-based equipment, giving data center operators more flexibility and helping them avoid vendor lock-in.
The launch of the Tomahawk 6 has generated significant excitement in the tech industry. Industry analysts believe the Tomahawk 6 gives Broadcom a strong competitive edge, especially as hyperscalers look for open, standards-based networking solutions that can keep pace with the rapid growth of AI.
According to Broadcom, demand from customers and partners has been unprecedented, with multiple deployments already planned that involve more than 100,000 XPUs. By breaking the 100 Tbps barrier and unifying scale-up and scale-out Ethernet, Broadcom has already set a new standard for AI infrastructure. The Tomahawk 6 is now in volume production, and leading network equipment vendors as well as system integrators are preparing to roll it out in new AI data centers.