The company that everyone credits for the GPU boom has been quietly building something almost as large right beside it. The origin traces back to a $7 billion acquisition Jensen Huang’s team made in 2020, when almost nobody understood why.
The artificial intelligence news cycle tends to produce the same story about Nvidia over and over. GPUs. H100s. Blackwell. Demand outstripping supply. Stock price. The company is worth this many trillions. What that coverage consistently underweights is the business sitting directly next to the chip operation, quietly generating revenue at a pace that would make it one of the biggest networking companies in the world if it stood alone.
Last quarter, Nvidia’s networking division reported $11 billion in revenue, a year-over-year increase of 267%. For the full year, it brought in more than $31 billion. That is not a projection or a forecast. That is money already earned, reported in Nvidia’s most recent earnings, and it makes the division the company’s second-largest revenue driver behind compute. The number is particularly striking in context: Kevin Cook, a senior equity strategist at Zacks Investment Research, told TechCrunch that Nvidia’s networking business does in one quarter what Cisco’s entire networking business does in a year.
A $7 Billion Bet That Almost Nobody Noticed at the Time
None of this emerged from a product roadmap that the market was watching. It grew from a single acquisition that Jensen Huang made in 2020 when the current AI boom had not yet started.
The company Nvidia bought was Mellanox Technologies, a networking hardware firm founded in Israel in 1999. The price was $7 billion. At the time, it was the largest acquisition Nvidia had ever made, and the strategic logic was not immediately obvious to outside observers. Mellanox made interconnects, the hardware that lets computers talk to each other across a data center. That seemed like a plumbing play for a company whose identity was built on graphics cards.
Kevin Deierling, a senior vice president at Nvidia who joined through the Mellanox acquisition, told TechCrunch he did not fully understand why Huang bought the company when the deal was announced. He understands it now. Cook put the logic simply: “When Jensen bought Mellanox in 2020, he saw that was the missing piece to make GPUs a complete package.” Huang told Deierling on day one of the acquisition: “The data center is the new unit of computing.” Networking was not a peripheral. It was the foundation.
What Huang understood, and what the market eventually caught up to, is that training a large AI model is not a chip problem at its core. It is a systems problem. Thousands of GPUs have to work in parallel, which means they have to communicate with each other at speed. If the networking layer is slow, the GPUs wait. You can have the most powerful chips available and still bottleneck the whole operation at the interconnect.
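The economics of that bottleneck are easy to sketch. The toy model below is illustrative only, not Nvidia data: the step time, model size, and link speeds are all hypothetical assumptions. It shows how the interconnect caps GPU utilization during data-parallel training, since each step pays for compute plus whatever gradient traffic the network cannot hide behind it.

```python
# Back-of-the-envelope model (all numbers are hypothetical assumptions):
# how interconnect bandwidth caps effective GPU utilization when training
# a model across many GPUs that must exchange gradients every step.

def step_time_s(compute_s, grad_bytes, link_gbps, overlap=0.0):
    """Time for one training step: compute plus the non-overlapped part of
    the gradient exchange. `overlap` is the fraction of communication
    hidden behind compute (0 = fully serial, 1 = fully hidden)."""
    comm_s = grad_bytes * 8 / (link_gbps * 1e9)  # bytes -> bits / (Gbit/s)
    return compute_s + comm_s * (1 - overlap)

compute_s = 0.10     # assumed compute time per step, per GPU
grad_bytes = 2e9     # assumed: ~1B parameters in fp16 (2 bytes each)

for gbps in (100, 400, 800):
    t = step_time_s(compute_s, grad_bytes, gbps)
    util = compute_s / t  # fraction of the step the GPU does useful work
    print(f"{gbps:>4} Gb/s link -> step {t:.3f}s, GPU utilization {util:.0%}")
```

Even in this crude model, each doubling of link bandwidth raises GPU utilization sharply, which is the commercial logic behind selling networking alongside the chips: the buyer has already paid for the compute, so the interconnect determines how much of that purchase they actually get to use.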
What Nvidia’s Networking Business Actually Is
The technologies that make up this division are not ones that show up in mainstream artificial intelligence news very often. NVLink enables GPU-to-GPU communication within a data center rack, the high-speed channel that allows chips to share memory and pass data without going through slower external connections. Nvidia’s InfiniBand switches handle in-network computing at scale. Spectrum-X is Nvidia’s Ethernet platform built specifically for AI workloads, a meaningful distinction from general-purpose Ethernet that was designed for different traffic patterns. Co-packaged optics switches round out the portfolio.
Together, Deierling says these components provide everything needed to build what Nvidia calls an “AI factory,” a data center purpose-built for training AI models rather than running general enterprise workloads. The term is useful because it captures how different the infrastructure requirements are. An AI factory is not a faster version of a conventional data center. It is a different kind of machine entirely, and networking is what makes it function.
Deierling is direct about what distinguishes Nvidia’s approach. “I can’t think of other companies that have the full-stack capabilities that we have,” he said. “We are really different. We build the full compute stack, fully integrated stack, and then we go to market through all of our partners.” That last point matters operationally. Nvidia does not sell networking hardware directly to end users. It moves product through a partner ecosystem, which has allowed it to scale the business without building out a direct enterprise sales force that could compete with or complicate its existing customer relationships.
The division receives considerably less public attention than either Nvidia’s chip business or even its gaming business, which is roughly a third its size by revenue. Deierling acknowledged this with a degree of self-deprecation. He told TechCrunch that people not knowing about the networking division might be his fault for doing a bad job of marketing it, then rejected that answer as insufficient. “It’s no longer a peripheral to connect the printer or some other slow I/O device,” he said. “It’s fundamental to the computer. In the old days, we had what was called the backplane inside the computer. Today, the network is the backplane of the AI factory, and it’s super important.”
What Huang Announced at GTC That Extends the Bet
Jensen Huang used his GTC 2026 keynote in San Jose on March 16 to expand the networking thesis further. Nvidia launched the Rubin platform, a new architecture that includes six new chips to power an AI supercomputer. The company also announced the Nvidia Inference Context Memory Storage platform and more efficient Nvidia Spectrum-X Ethernet Photonics switches.
These announcements matter for the networking division specifically because they extend the integration between compute and data movement. Each generation of AI chips Nvidia releases creates new demand for the networking infrastructure that connects those chips. Rubin is not just a compute story. It is a networking story too, because the AI factories being built around Rubin require Spectrum-X and NVLink and InfiniBand to function at the performance levels the chips are capable of delivering.
Cook framed the competitive picture in a way that explains why the networking expansion may be as strategically important as the chips business itself over time: Mellanox was the missing piece that turned GPUs into a complete package. If a customer wants the full performance of an Nvidia GPU cluster, they need Nvidia networking to get there. That dependency is not accidental. It is the architecture of a business designed to be very difficult to replace once it is inside your data center.
For deeper coverage of AI infrastructure, semiconductor strategy, and the companies building the physical foundation of the AI economy, The Tech Marketer covers the stories that matter to both investors and practitioners.
FAQ
Q1: How big is Nvidia’s networking division and why does it matter for artificial intelligence news? Nvidia’s networking division generated $11 billion in its most recent quarter, a 267% year-over-year increase, and more than $31 billion for the full fiscal year. That makes it the company’s second-largest revenue segment behind compute. It matters for artificial intelligence news because the division provides the interconnect infrastructure that allows thousands of GPUs to function as a unified AI supercomputer. Without it, the chips cannot scale. Kevin Cook of Zacks Investment Research noted Nvidia’s networking business now does in a single quarter what Cisco’s entire networking operation does in a year.
Q2: Where did Nvidia’s networking business come from? The foundation was Mellanox Technologies, a networking hardware company founded in Israel in 1999. Nvidia acquired Mellanox in 2020 for $7 billion, at the time the company’s largest ever acquisition. Kevin Deierling, who joined Nvidia through that deal and now serves as Senior Vice President of Networking, said Jensen Huang told him on day one that “the data center is the new unit of computing.” Cook said Huang saw Mellanox as the missing piece to make GPUs a complete package, connecting individual chips into a system capable of training large AI models.
Q3: What specific technologies make up Nvidia’s networking division? The division includes NVLink, which powers communication between GPUs within a data center rack; Nvidia InfiniBand Switches, an in-network computing platform; Spectrum-X, Nvidia’s Ethernet platform purpose-built for AI workloads; and co-packaged optics switches. Together these components provide the full infrastructure for what Nvidia calls an “AI factory,” a data center designed specifically for training AI models rather than running general enterprise workloads.
Q4: How does Nvidia’s networking business strengthen its competitive position? Nvidia sells its networking products only as a full-stack solution through partners, not as individual components. This means customers building AI factories with Nvidia GPUs also buy into Nvidia’s networking ecosystem, creating a dependency that is difficult to replace once in place. Competitors like AMD, Intel, and Broadcom must overcome both the compute preference and the networking lock-in to win AI data center business. Kevin Deierling, Nvidia’s Senior Vice President of Networking, described the competitive position this way: he cannot think of other companies with the same full-stack capabilities across compute and networking combined.
Q5: What did Nvidia announce at GTC 2026 to extend its networking strategy? At the GTC 2026 conference in San Jose on March 16, Jensen Huang announced the Nvidia Rubin platform with six new chips for an AI supercomputer, the Nvidia Inference Context Memory Storage platform, and improved Nvidia Spectrum-X Ethernet Photonics switches. These announcements extend the integration between compute and networking, as each new generation of Nvidia chips creates fresh demand for the networking infrastructure that connects them. The Rubin platform is not only a compute announcement. It is also a demand signal for the Spectrum-X and NVLink infrastructure that AI factories built around Rubin will require.
Sources & References
- TechCrunch, Rebecca Szkutak, Nvidia Is Quietly Building a Multibillion-Dollar Behemoth to Rival Its Chips Business
- IndexBox, Nvidia’s Networking Division Becomes Second-Largest Revenue Source
- TechBuzz AI, Nvidia’s $11B Networking Division Quietly Rivals Chip Empire
- Nvidia Earnings, Q4 and Fiscal 2026 Financial Results
- Nvidia Newsroom, Rubin Platform AI Supercomputer, GTC 2026