
Why Nvidia just poured $2 billion into AI ASIC competitor Marvell — NVLink Fusion turns into soft ecosystem lock-in

Why This Matters

Nvidia's $2 billion investment in Marvell and the integration of Marvell's custom AI accelerators into the NVLink Fusion ecosystem mark a strategic move to extend Nvidia's influence over heterogeneous AI infrastructure. The partnership deepens Nvidia's ecosystem lock-in while ensuring continued revenue from third-party silicon, potentially reshaping both competition and collaboration in AI hardware. For consumers and the broader tech industry, it signals a shift toward more integrated, high-bandwidth AI systems that could accelerate AI deployment and innovation.

Key Takeaways

On Tuesday, Nvidia announced that it has invested $2 billion in Marvell Technology and entered a partnership through NVLink Fusion, the rack-scale platform that allows third-party silicon to plug into Nvidia's proprietary interconnect fabric. The deal covers custom XPUs, NVLink-compatible scale-up networking, silicon photonics, and AI-RAN infrastructure for 5G and 6G networks.

It’d be an understatement to say that this deal is unusual, given Marvell’s status as one of the two dominant custom ASIC design houses, alongside Broadcom. Its fastest-growing business is designing the custom AI accelerators that hyperscalers like AWS, Microsoft, and Google use to reduce their dependence on Nvidia GPUs.

But by pulling one of its most capable indirect competitors deeper into the NVLink ecosystem, Nvidia has structured NVLink Fusion so that the arrangement generates revenue for the company on every rack deployed. Here's how.


NVLink Fusion

NVLink Fusion, announced at Computex 2025 last May, enables heterogeneous AI infrastructure where non-Nvidia accelerators can communicate with Nvidia GPUs, CPUs, and networking hardware over NVLink's high-bandwidth, low-latency fabric. NVLink delivers up to 1.8 TB/s per GPU, a huge bandwidth advantage over PCIe Gen5, and can scale to 72 accelerators per rack in its NVL72 configuration.
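The bandwidth gap the article cites can be checked with some quick arithmetic. The 1.8 TB/s per-GPU NVLink figure and the 72-accelerator NVL72 configuration come from the article; the ~128 GB/s bidirectional figure for a PCIe Gen5 x16 link is a commonly published estimate and is an assumption here.

```python
# Back-of-the-envelope comparison of per-GPU NVLink bandwidth vs. PCIe Gen5.
# NVLink figure (1.8 TB/s) and NVL72 size (72 accelerators) are from the
# article; the PCIe Gen5 x16 figure (~128 GB/s bidirectional) is assumed.

NVLINK_PER_GPU_GBS = 1800   # 1.8 TB/s total per-GPU NVLink bandwidth
PCIE_GEN5_X16_GBS = 128     # ~64 GB/s each direction on an x16 link

ratio = NVLINK_PER_GPU_GBS / PCIE_GEN5_X16_GBS
print(f"NVLink advantage: ~{ratio:.0f}x over PCIe Gen5 x16")

# Aggregate accelerator-side fabric bandwidth in an NVL72 rack:
rack_tbs = 72 * 1.8
print(f"NVL72 aggregate: {rack_tbs:.1f} TB/s")
```

Under these assumptions, NVLink's per-GPU bandwidth works out to roughly 14x a PCIe Gen5 x16 link, which is why scale-up fabrics bypass PCIe for GPU-to-GPU traffic.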

The platform is built around the OCP MGX rack architecture and includes a modular technology stack consisting of Nvidia GPUs, Vera CPUs, NVLink switch silicon, ConnectX SuperNICs, Bluefield DPUs, Spectrum-X switches, and Mission Control management software. Partners can plug their own custom XPUs or CPUs into the compute layer, but the surrounding infrastructure is all Nvidia.

Every NVLink Fusion platform must include at least one Nvidia product, whether a CPU, GPU, or switch. Nvidia has also retained control over which partners receive NVLink IP licenses, so even custom chips designed to displace Nvidia's GPUs still generate revenue for the company through infrastructure sales every time a rack goes live. Under the deal, Marvell will provide custom XPUs and NVLink Fusion-compatible scale-up networking, while Nvidia will supply the rest of the stack, including Vera CPUs, ConnectX NICs, Bluefield DPUs, NVLink interconnect, and Spectrum-X switches.
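The "at least one Nvidia product" rule can be sketched as a simple constraint check. This is an illustrative toy, not an Nvidia tool; the component names are drawn from the stack described in the article, and the rack bills of materials are hypothetical.

```python
# Illustrative sketch of the NVLink Fusion platform rule described in the
# article: a rack qualifies only if it contains at least one Nvidia part.
# Component names follow the article's stack; the racks are hypothetical.

NVIDIA_COMPONENTS = {
    "Vera CPU", "Nvidia GPU", "NVLink switch",
    "ConnectX SuperNIC", "Bluefield DPU", "Spectrum-X switch",
}

def is_valid_fusion_rack(components: list[str]) -> bool:
    """True if the rack includes at least one Nvidia component."""
    return any(part in NVIDIA_COMPONENTS for part in components)

# A Marvell custom-XPU rack still needs Nvidia infrastructure around it:
marvell_rack = ["Marvell XPU", "Vera CPU", "NVLink switch", "Spectrum-X switch"]
print(is_valid_fusion_rack(marvell_rack))    # True

# An all-third-party rack would not qualify for NVLink Fusion:
all_third_party = ["Marvell XPU", "third-party NIC"]
print(is_valid_fusion_rack(all_third_party))  # False
```

The point of the rule, as the article notes, is that a partner can swap out the accelerator but never the whole stack, so Nvidia monetizes every deployment.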

Marvell's ASIC business

Marvell reported $8.2 billion in revenue for its fiscal year 2026 ending January 2026, with data center revenue accounting for more than 74% of the total.
