Tuesday, March 10, 2026

The AI boom is driving the need for speed in chip networks


A new era of Silicon Valley runs on networking – not the kind you find on LinkedIn.

As the tech industry funnels billions into AI data centers, chipmakers large and small are innovating in the technology that connects chips to other chips and server racks to other server racks.

Networking technology has been around since the dawn of computing, when it connected mainframe computers so they could share data. In the world of semiconductors, networking plays a role at almost every level of the stack – from the connections between transistors on the chip itself to the external connections between boxes or racks of chips.

Chip giants like Nvidia, Broadcom, and Marvell already have established networking businesses. But as artificial intelligence booms, some companies are turning to new networking approaches to help speed up the huge amounts of digital information flowing through data centers. This is where startups like Lightmatter, Celestial AI, and PsiQuantum come in, using optical technology to accelerate high-speed computing.

Optical technology, or photonics, is maturing. According to PsiQuantum co-founder and chief science officer Pete Shadbolt, the technology was considered “poor, expensive and marginally useful” for 25 years until the artificial intelligence boom reignited interest in it. (Shadbolt appeared on a panel co-hosted by WIRED last week.)

Some venture capitalists and institutional investors, hoping to catch the next wave of chip innovation or at least find a suitable acquisition target, are pouring billions into startups that have found new ways to speed up the flow of data. They believe that conventional electron-based interconnect technology simply cannot keep up with the growing bandwidth demands of AI workloads.

“If you look back at the past, networking was really boring because it was all about switching packets of bits,” says Ben Bajarin, a longtime technology analyst and CEO of research firm Creative Strategies. “Now, because of artificial intelligence, you have to move really heavy loads, and that’s why you’re seeing speed innovations.”

Big chip energy

Bajarin and others credit Nvidia for anticipating the importance of networking when it made two key acquisitions in the technology years ago. In 2020, Nvidia spent nearly $7 billion to acquire the Israeli company Mellanox Technologies, which makes high-speed networking products for servers and data centers. Shortly thereafter, Nvidia purchased Cumulus Networks for its Linux-based networking software. This was a turning point for Nvidia, which correctly bet that the GPU and its parallel computing capabilities would become far more powerful when clustered with other GPUs and deployed in data centers.

While Nvidia dominates vertically integrated GPU stacks, Broadcom has become a key player in the market for custom chip accelerators and high-speed networking technologies. The $1.7 trillion company works closely with Google, Meta, and most recently OpenAI on data center chips. It is also a leader in silicon photonics. Reuters reported last month that Broadcom is preparing a new networking chip called Thor Ultra, designed to provide a “critical connection between the AI system and the rest of the data center.”

Last week on an earnings call, semiconductor design giant Arm announced plans to acquire networking company DreamBig for $265 million. DreamBig makes AI chiplets – small, modular circuits designed to be combined into larger chip systems – in partnership with Samsung. The startup has “interesting intellectual property… which [is] very critical to growing and scaling the network,” Arm CEO Rene Haas said on the call. (That means moving data up and down within a single chip cluster, as well as connecting racks of chips to other racks.)

Turning on the lights

Lightmatter CEO Nick Harris notes that the amount of computing power required by artificial intelligence is now doubling every three months – much faster than the pace of Moore’s Law. Computer chips are getting bigger. “Whenever you get to the cutting-edge level of the biggest chips you can build, all the subsequent performance comes from connecting the chips together,” Harris says.
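To get a feel for the gap Harris is describing, here is a back-of-envelope sketch comparing the two growth rates. It assumes the article’s figure of AI compute demand doubling every three months, and takes Moore’s Law as a doubling roughly every 24 months (a common reading, not a claim from the article):

```python
# Rough comparison: AI compute demand vs. Moore's Law over the same period.
# Assumptions: AI demand doubles every 3 months (per the article);
# Moore's Law taken as doubling roughly every 24 months.

def growth_factor(months_elapsed: float, doubling_months: float) -> float:
    """Multiplicative growth after `months_elapsed`, given a doubling period."""
    return 2 ** (months_elapsed / doubling_months)

years = 2
ai_growth = growth_factor(12 * years, 3)      # 2^8  = 256x in two years
moore_growth = growth_factor(12 * years, 24)  # 2^1  = 2x in two years

print(f"After {years} years: AI demand grows {ai_growth:.0f}x, "
      f"Moore's Law delivers {moore_growth:.0f}x")
```

On these assumptions, demand outgrows single-chip transistor scaling by two orders of magnitude in two years – which is the arithmetic behind Harris’s point that further performance has to come from connecting chips together.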

His company’s approach departs from conventional networking technology. Lightmatter builds silicon photonics that link chips together, and claims to be creating the world’s fastest photonics engine for AI chips – essentially a three-dimensional stack of silicon connected by light-based interconnects. Over the past two years, the startup has raised more than $500 million from investors such as GV and T. Rowe Price. Last year, its valuation reached $4.4 billion.
