OpenAI & Broadcom announce 10 GW AI chip collaboration — a new era of custom intelligence

OpenAI and Broadcom just unveiled a massive strategic collaboration to develop and deploy 10 gigawatts of OpenAI-designed AI accelerators — marking a decisive step toward self-built AI hardware at planetary scale.

🧠 The vision:
OpenAI will design its own accelerators and systems, embedding insights from its frontier models directly into silicon — effectively fusing model intelligence with hardware design.

Broadcom will handle Ethernet-based scaling and connectivity for the racks, which will be deployed across OpenAI’s global data centers and partner facilities.

💬 Sam Altman, OpenAI CEO:

“Partnering with Broadcom is a critical step in building the infrastructure needed to unlock AI’s potential… Developing our own accelerators lets us embed what we’ve learned directly into the hardware.”

💬 Hock Tan, Broadcom CEO:

“This collaboration marks a pivotal moment in the pursuit of AGI. Together, we’ll deploy 10 GW of next-gen accelerators to pave the way for AI’s future.”


🔍 Why This Matters

  • 10 gigawatts is roughly the output of 10 large nuclear reactors, a scale unprecedented in AI infrastructure (see the back-of-envelope sketch after this list).
  • Custom accelerators let OpenAI tune performance, power efficiency, and cost for its own workloads, rather than relying solely on general-purpose GPUs from NVIDIA or AMD.
  • An Ethernet-first architecture signals a shift away from proprietary interconnects such as NVLink toward open, standards-based scaling for AI data centers.
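
To put the 10 GW figure in perspective, here is a rough back-of-envelope sketch in Python. Only the 10 GW total comes from the announcement; the per-reactor output and per-accelerator power draw below are illustrative assumptions, not figures disclosed by OpenAI or Broadcom.

```python
# Back-of-envelope scale check for the announced 10 GW figure.
# Only TOTAL_POWER_W comes from the announcement; the other
# constants are illustrative assumptions.

TOTAL_POWER_W = 10e9        # 10 gigawatts of accelerators (announced)
NUCLEAR_PLANT_W = 1e9       # ~1 GW output of a large nuclear reactor (assumption)
ACCELERATOR_W = 1_000       # assumed per-accelerator draw incl. overhead (hypothetical)

plants_equivalent = TOTAL_POWER_W / NUCLEAR_PLANT_W
accelerators_supported = TOTAL_POWER_W / ACCELERATOR_W

print(f"~{plants_equivalent:.0f} large reactors' worth of power")
print(f"~{accelerators_supported:,.0f} accelerators at {ACCELERATOR_W} W each (assumed)")
```

At these assumed numbers, 10 GW would correspond to on the order of ten million accelerators, which is why the scale is described as unprecedented.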

With 800M+ weekly active users and growing enterprise adoption, OpenAI is rapidly transforming from a model developer into a full-stack AI infrastructure company — spanning chips, clusters, and cloud.

This isn’t just a chip deal.
It’s the foundation for the next phase of artificial general intelligence.
