Nvidia chose AMD over Intel for its most powerful product yet – here’s why

Last week, Nvidia made an announcement that shook the industry: for the first time ever, it set aside its decades-old rivalry with AMD, selecting the EPYC server processor for its DGX A100 deep learning system and casting aside Intel’s Xeon.

In a statement to CRN, Charlie Boyle, Vice President and General Manager of DGX Systems at Nvidia, explained the rationale behind the switch.

“To keep the GPUs in our system supplied with data, we needed a fast CPU with as many cores and PCI lanes as possible. The AMD CPUs we use have 64 cores each, lots of PCI lanes, and support PCIe Gen4,” he said.

Intel is expected to add PCIe 4.0 to its feature list when it launches the 10nm Ice Lake server chip later this year, but for now it can only sit and watch as AMD nibbles away at its market share. EPYC also supports eight-channel memory, two channels more than Intel’s Xeon Scalable processors.

The EPYC 7742 delivers more cores (64 vs 56 for the Intel Xeon Platinum 9282), significantly more onboard cache (256MB vs 77MB), a lower TDP (225W vs 400W) and a far lower price tag ($6,950 vs circa $25,000).

These marked improvements are all thanks to AMD’s much finer 7nm manufacturing process, which packs far more transistors into the same die area, improving both power efficiency and clock speeds.

Time will tell whether the move marks a permanent thawing of the relationship between Nvidia and AMD, or just a temporary truce.
