In a bit of a surprise this past weekend, NVIDIA announced that it is purchasing the networking company Mellanox for approximately $6.9 billion US. NVIDIA and Intel had been engaged in a bidding war for the Israel-based company. At first glance the synergies of such an acquisition are not obvious, but digging deeper it makes much more sense. It is still a risky move for NVIDIA, as their previous acquisitions (Ageia, Icera, etc.) have not been very favorable for the company.
Mellanox’s portfolio centers on datacenter connectivity solutions such as high speed Ethernet and InfiniBand products. They are already a successful company with products shipping out the door; if there is a supercomputer somewhere, chances are it is running Mellanox technology for its high speed interconnects. This is where things get interesting for NVIDIA.
While NVIDIA focuses on GPUs, they are spreading into the datacenter at a tremendous rate. Their NVLink implementation allows high speed connectivity between GPUs, and they recently showed off NVSwitch, which features 18 NVLink ports. We do not know how long it took to design NVSwitch and get it running at a high level, but NVIDIA is aiming for implementations that will exceed that technology. NVIDIA had the choice to continue with in-house designs or to purchase a company already well versed in such work, with access to advanced networking technology.
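For a concrete sense of what NVLink accelerates, here is a minimal CUDA runtime sketch (assuming a multi-GPU system; error handling is abbreviated) that checks and enables peer-to-peer access between GPUs. The same calls work over plain PCIe; an NVLink or NVSwitch path is simply what makes the resulting device-to-device transfers fast.

```c
/* Minimal sketch: enable GPU peer-to-peer access with the CUDA runtime.
 * On systems with NVLink/NVSwitch, peer transfers ride those links. */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int count = 0;
    cudaGetDeviceCount(&count);

    for (int a = 0; a < count; ++a) {
        for (int b = 0; b < count; ++b) {
            if (a == b) continue;
            int can = 0;
            /* Can device a directly read/write device b's memory? */
            cudaDeviceCanAccessPeer(&can, a, b);
            if (can) {
                cudaSetDevice(a);
                cudaDeviceEnablePeerAccess(b, 0); /* flags must be 0 */
                printf("GPU %d -> GPU %d: peer access enabled\n", a, b);
            }
        }
    }
    return 0;
}
```

Once peer access is enabled, cudaMemcpyPeer() or direct pointer access from kernels moves data between the two devices without staging through host memory.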
Intel was also in play for Mellanox, but that particular transaction might not have been approved by antitrust authorities around the world; an aggressive Intel bid would have essentially consolidated the market for these high end networking products. In the end NVIDIA offered $6.9B US for the company and the offer was accepted. Because NVIDIA has no real networking solutions on the market, the deal will likely be approved without issue. Unlike earlier purchases such as Icera, Mellanox is actively shipping product and will add to NVIDIA’s bottom line.
The company was able to purchase Mellanox in an all-cash transaction, simply dipping into their cash reserves instead of offering Mellanox shareholders shares in NVIDIA. The $6.9B price is above what AMD paid for ATI back in 2006 ($5.4B). There may be some similarity here: Mellanox could be overvalued compared to what it actually brings to the table, and we may see write-downs over the next several years, much as AMD took for the ATI purchase.
The purchase will bring NVIDIA instant expertise with high performance standards like InfiniBand. It will also help to have design teams versed in high speed, large node networking apply their knowledge to the GPU field and create solutions better suited to that technology. NVIDIA will also continue to sell current Mellanox products.
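As a small taste of the software side of that expertise, below is a sketch in C using libibverbs, the standard userspace verbs API exposed by Mellanox adapters. It only enumerates InfiniBand/RoCE devices; real RDMA work would continue by opening a device, allocating a protection domain, registering memory, and creating queue pairs.

```c
/* Minimal sketch: list RDMA-capable devices via libibverbs.
 * Build with: cc list_ib.c -libverbs */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num = 0;
    struct ibv_device **list = ibv_get_device_list(&num);
    if (!list) {
        perror("ibv_get_device_list");
        return 1;
    }
    for (int i = 0; i < num; ++i)
        printf("device %d: %s\n", i, ibv_get_device_name(list[i]));
    ibv_free_device_list(list);
    return 0;
}
```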
Another past purchase that looks somewhat similar is AMD’s acquisition of SeaMicro. That company sold ultra-dense servers built around its Freedom Fabric technology, utilizing dozens of CPUs per box. AMD discontinued the product line after poor sales, but expanded upon that fabric work to create the Infinity Fabric that powers their latest Zen CPUs.
I can see a very similar situation occurring at NVIDIA. AMD is using their Infinity Fabric to connect multiple chiplets on a substrate, as well as extending that fabric off of the substrate, and they have integrated it into their latest Vega GPUs. This philosophy looks to pay significant dividends for AMD once they introduce their 7nm CPUs in the form of Zen 2 and EPYC 2. AMD is not relying on large, monolithic dies for their consumer and enterprise parts, thereby improving yields and bins on these parts compared to what Intel does with current Xeon parts.
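To put rough numbers on that yield argument, here is a back-of-the-envelope sketch using the classic Poisson yield model Y = e^(−D0·A). The defect density and die areas below are illustrative assumptions, not actual AMD or Intel figures.

```c
/* Back-of-the-envelope chiplet vs. monolithic yield, Poisson model.
 * All numbers are illustrative assumptions. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double d0          = 0.2; /* assumed defects per cm^2 */
    const double mono_cm2    = 7.0; /* assumed ~700 mm^2 monolithic die */
    const double chiplet_cm2 = 0.8; /* assumed ~80 mm^2 chiplet */

    double y_mono    = exp(-d0 * mono_cm2);
    double y_chiplet = exp(-d0 * chiplet_cm2);

    printf("monolithic die yield:        %.1f%%\n", 100.0 * y_mono);
    printf("single chiplet yield:        %.1f%%\n", 100.0 * y_chiplet);
    /* Pessimistic bound: 8 untested chiplets must all be good. In practice
     * chiplets are tested before packaging (known-good die), so effective
     * yield tracks the per-chiplet figure, and partly defective chiplets
     * can be binned into lower SKUs. */
    printf("8 untested chiplets, all OK: %.1f%%\n", 100.0 * pow(y_chiplet, 8));
    return 0;
}
```

Even the pessimistic all-or-nothing case roughly matches the big die, and with known-good-die testing and binning the chiplet approach pulls well ahead, which is the yield and binning advantage described above.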
When looking at the Mellanox purchase from this angle, it makes a lot of sense for NVIDIA. With process node advances moving at a much slower pace, the demand for higher performance solutions is only increasing. To meet this demand NVIDIA will need to build efficient, multi-chip solutions that may require more performance and features than NVLink can cover. Mellanox could potentially provide the expertise and experience to help NVIDIA achieve such scale.
AMD’s Infinity Fabric is just a superset of HyperTransport, and Freedom Fabric appears to be some I/O-related fabric virtualization IP for dense/low power servers. Who knows where AMD may be making use of any Freedom Fabric IP, as the SeaMicro branding has not been used since AMD’s SeaMicro division was shuttered.
“Infinity Fabric is a superset of HyperTransport announced by AMD in 2016 as an interconnect for its GPUs and CPUs. It is also usable as an interchip interconnect for communication between CPUs and GPUs.[6][7] The company said the Infinity Fabric would scale from 30 GB/s to 512 GB/s, and be used in the Zen-based CPUs and Vega GPUs which were subsequently released in 2017.” (1)
(1) “HyperTransport,” Wikipedia. https://en.wikipedia.org/wiki/HyperTransport