
IBM’s new AI-friendly server adopts Nvidia’s NVLink for faster memory

by Rajdeep

GPUs are a proven way to speed up the time-consuming task of machine learning, a crucial element of the recent rapid expansion of AI across many industries. The result has been an explosively growing market for GPU vendors Nvidia and AMD. IBM's newly announced Power Systems S822LC aims to push machine learning performance even further, pairing two IBM POWER8 CPUs with four Nvidia Tesla P100 GPUs.

However, no matter how fast a GPU is, the large data requirements of AI applications mean that memory access and inter-processor communication can quickly become a bottleneck. To address that problem, IBM is also using Nvidia's proprietary NVLink interconnect technology.

The S822LC is slated to deliver 21 teraflops of half-precision performance; machine learning typically doesn't need single (FP32) or double (FP64) precision for tasks such as training neural networks. Customers can also attach additional Tesla K80 GPUs over a more traditional PCIe bus.
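
To make the half-precision point concrete, here is a minimal, illustrative CUDA sketch (not IBM or Nvidia code) that runs a simple element-wise computation entirely in FP16. It assumes a recent CUDA toolkit and a GPU with native half-precision arithmetic, such as the Pascal-based Tesla P100; the kernel name and sizes are arbitrary placeholders.

```cuda
#include <cstdio>
#include <cuda_fp16.h>
#include <cuda_runtime.h>

// Hypothetical FP16 example: y = a*x + y computed in half precision.
// Native FP16 math requires compute capability 5.3+ (e.g. Tesla P100).
__global__ void axpy_fp16(int n, __half a, const __half *x, __half *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        y[i] = __hadd(__hmul(a, x[i]), y[i]);
    }
}

int main() {
    const int n = 1 << 20;
    __half *x, *y;
    cudaMallocManaged(&x, n * sizeof(__half));
    cudaMallocManaged(&y, n * sizeof(__half));
    for (int i = 0; i < n; ++i) {
        x[i] = __float2half(1.0f);
        y[i] = __float2half(2.0f);
    }

    axpy_fp16<<<(n + 255) / 256, 256>>>(n, __float2half(3.0f), x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %.1f (expected 5.0)\n", __half2float(y[0]));
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

Because FP16 values are half the size of FP32, a GPU with native half-precision units can roughly double arithmetic throughput and halve memory traffic, which is where the 21-teraflop figure comes from.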

NVLink dramatically improves memory access over PCIe

Nvidia announced NVLink at last year's GTC, and its Pascal-based GPUs are the first to support it. It is used both for communication between CPUs and GPUs, and between multiple GPUs. In raw data rates, Nvidia says it is 5 to 12 times faster than PCIe Gen 3 interconnects, which the company claims can translate into as much as a doubling in real-world performance for AI and other data-intensive GPU workloads.
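
As a rough illustration of the GPU-to-GPU path NVLink accelerates, the sketch below (a generic CUDA example, not vendor code) enables peer access between two GPUs and copies a buffer directly between them. On an NVLink-connected pair such as the P100s in the S822LC the copy can travel over NVLink; on most other systems it falls back to the PCIe fabric. The device IDs and buffer size are assumptions for the example.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Minimal sketch: direct GPU-to-GPU copy between device 0 and device 1.
// Peer-to-peer access avoids staging the data through host memory.
int main() {
    int can01 = 0, can10 = 0;
    cudaDeviceCanAccessPeer(&can01, 0, 1);
    cudaDeviceCanAccessPeer(&can10, 1, 0);
    if (!can01 || !can10) {
        printf("Peer access between GPU 0 and GPU 1 is not available.\n");
        return 0;
    }

    const size_t bytes = 256 << 20;  // 256 MiB test buffer
    void *src, *dst;

    cudaSetDevice(0);
    cudaDeviceEnablePeerAccess(1, 0);
    cudaMalloc(&src, bytes);

    cudaSetDevice(1);
    cudaDeviceEnablePeerAccess(0, 0);
    cudaMalloc(&dst, bytes);

    // Copy directly from device 0 to device 1.
    cudaMemcpyPeer(dst, 1, src, 0, bytes);
    cudaDeviceSynchronize();
    printf("Copied %zu MiB GPU0 -> GPU1\n", bytes >> 20);

    cudaFree(dst);
    cudaSetDevice(0);
    cudaFree(src);
    return 0;
}
```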

As part of the announcement, IBM cited raw interconnect performance improvements from 16 GB/s over PCIe to 40 GB/s using NVLink. IBM has made a huge investment in what it calls cognitive computing, so it makes perfect sense that it would implement a version of its POWER8 processor with the highest-performance interconnect possible. IBM says some of the early units will ship to high-profile customers, including Oak Ridge National Laboratory and Lawrence Livermore National Laboratory. The systems will serve as test beds in preparation for IBM's Summit and Sierra supercomputers due in 2017.
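
The 16 GB/s and 40 GB/s figures are link-level numbers; what a program actually sees is effective copy bandwidth. A simple, generic way to check that on any CUDA system (this is an illustrative sketch, not IBM's benchmark) is to time a large pinned-memory transfer with CUDA events:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Minimal sketch: measure effective host-to-GPU copy bandwidth with CUDA events.
int main() {
    const size_t bytes = 1ull << 30;  // 1 GiB test transfer
    void *hostBuf, *devBuf;
    cudaMallocHost(&hostBuf, bytes);  // pinned host memory for a fair measurement
    cudaMalloc(&devBuf, bytes);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    cudaMemcpy(devBuf, hostBuf, bytes, cudaMemcpyHostToDevice);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    double gbPerSec = (bytes / 1e9) / (ms / 1e3);
    printf("Host -> device: %.1f GB/s\n", gbPerSec);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(devBuf);
    cudaFreeHost(hostBuf);
    return 0;
}
```

At the cited rates, moving a 32 GB working set would take roughly 2 seconds over a 16 GB/s PCIe link versus about 0.8 seconds over a 40 GB/s NVLink connection, which is why the interconnect matters for data-hungry training jobs.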

IBM and Nvidia want developers to jump on the bandwagon

To help drive deployments, IBM and Nvidia are establishing a lab for developers. The IBM-Nvidia Acceleration Lab will work with client developers to get the best possible performance from the new systems. IBM has invited interested developers to contact the company directly for more information.

[Source: ExtremeTech]
