AMD's CPU-to-GPU Infinity Fabric Detailed

Don't move the data.

AMD is currently the only vendor with both x86 processors and discrete graphics cards under one roof, at least until Intel's Xe graphics roll out, giving Team Red some flexibility with its interconnect technology. This tech has been particularly useful in the world of high-performance computing (HPC), as evidenced by an AMD presentation at the Rice Oil and Gas HPC conference yesterday.

AMD initially announced at its Next Horizon event in 2018 that it would extend the Infinity Fabric to link its data center Radeon Instinct MI60 GPUs, enabling a 100 GB/s connection between GPUs, much like Nvidia's NVLink. But with its Frontier supercomputer announcement in May, AMD divulged that it would expand the approach to enable memory coherency between CPUs and GPUs.

The annual Rice Oil and Gas HPC event hasn't concluded yet, but according to a tweet from Intersect360 Research analyst Addison Snell yesterday, AMD announced that future Epyc+Radeon generations will include shared memory/cache coherency between the GPU and CPU over the Infinity Fabric, similar to what AMD enabled in its Raven Ridge Ryzen products.

We also got a glimpse of some slides presented at Rice Oil and Gas, courtesy of a tweet from Extreme Computing Research Center senior research scientist Hatem Ltaief.

AMD's charts highlight the divide in power efficiency between various compute solutions, such as semi-custom SoCs, FPGAs, GPGPUs, and general-purpose x86 compute cores, plotting FLOPS performance relative to both the power consumed and the silicon area required to deliver that performance. As we can see, general-purpose CPUs lag behind, but optimizations for vectorized code that use dedicated SIMD pathways can boost performance on both metrics. However, GPUs still hold a commanding lead in terms of both power efficiency and silicon area consumed.
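
To make the "dedicated SIMD pathways" point concrete, here is a minimal sketch, not taken from AMD's slides, contrasting a plain scalar loop with a hand-vectorized AVX version of the same operation on an x86 CPU; the function names and the eight-floats-per-iteration width are illustrative assumptions.

```cpp
// Illustrative only: scalar vs. AVX-vectorized SAXPY-style loop on an x86 CPU.
// Compile with, e.g., g++ -O2 -mavx simd_example.cpp -c
#include <immintrin.h>
#include <cstddef>

// Scalar version: one multiply-add per loop iteration.
void saxpy_scalar(float a, const float* x, float* y, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}

// Vectorized version: eight multiply-adds per iteration using 256-bit registers.
void saxpy_avx(float a, const float* x, float* y, std::size_t n) {
    const __m256 va = _mm256_set1_ps(a);
    std::size_t i = 0;
    for (; i + 8 <= n; i += 8) {
        __m256 vx = _mm256_loadu_ps(x + i);
        __m256 vy = _mm256_loadu_ps(y + i);
        _mm256_storeu_ps(y + i, _mm256_add_ps(_mm256_mul_ps(va, vx), vy));
    }
    for (; i < n; ++i)  // scalar tail for leftover elements
        y[i] = a * x[i] + y[i];
}
```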

Leveraging cache coherency, as the company does with its Ryzen APUs, enables the best of both worlds and, according to the slides, unifies the data and provides a "simple on-ramp to CPU+GPU for all codes."

AMD also provided examples of the code required to use a GPU without unified memory, illustrating how a unified memory architecture alleviates much of that coding burden.
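
AMD's slide code isn't reproduced here, but the non-unified path looks roughly like the following HIP sketch, in which the programmer must allocate a separate device buffer and copy data explicitly in each direction; the kernel and buffer names are illustrative, not AMD's.

```cpp
// Illustrative HIP sketch: explicit device allocation and copies,
// i.e., the path a discrete GPU takes without unified memory.
#include <hip/hip_runtime.h>
#include <vector>

__global__ void scale(float* data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

void scale_on_gpu(std::vector<float>& host, float factor) {
    const int n = static_cast<int>(host.size());
    float* dev = nullptr;

    hipMalloc(reinterpret_cast<void**>(&dev), n * sizeof(float));          // separate GPU buffer
    hipMemcpy(dev, host.data(), n * sizeof(float), hipMemcpyHostToDevice); // CPU -> GPU copy
    hipLaunchKernelGGL(scale, dim3((n + 255) / 256), dim3(256), 0, 0,
                       dev, factor, n);
    hipMemcpy(host.data(), dev, n * sizeof(float), hipMemcpyDeviceToHost); // GPU -> CPU copy
    hipFree(dev);
}
```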

AMD famously embraced the Heterogeneous System Architecture (HSA) to tie together Carrizo's fixed-function blocks, touting the feature in its marketing materials. Much like the approach of extending an Infinity Fabric link between the CPU and GPU, HSA provides a pool of cache-coherent shared virtual memory that eliminates data transfers between components to reduce latency and boost performance.

For instance, when a CPU completes a data processing task, the data may still require processing on the GPU. That requires the CPU to pass the data from its memory space to the GPU's memory, after which the GPU processes the data and returns it to the CPU. This complex process adds latency and incurs a performance penalty, but shared memory allows the GPU to access the same memory the CPU was using, thus reducing latency and simplifying the software stack.
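
Under a coherent shared-memory model, that round trip collapses to something closer to the sketch below, which uses HIP's managed-memory allocation as a stand-in for the coherent CPU+GPU memory pool AMD describes; the exact programming model for the future Epyc+Radeon parts hasn't been published, so this is purely illustrative.

```cpp
// Illustrative HIP sketch: a single allocation visible to both CPU and GPU,
// standing in for the coherent shared-memory model described above.
#include <hip/hip_runtime.h>

__global__ void scale(float* data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;
    float* data = nullptr;
    hipMallocManaged(reinterpret_cast<void**>(&data), n * sizeof(float)); // one buffer, no explicit copies

    for (int i = 0; i < n; ++i) data[i] = 1.0f;   // CPU writes the data in place

    hipLaunchKernelGGL(scale, dim3((n + 255) / 256), dim3(256), 0, 0,
                       data, 2.0f, n);
    hipDeviceSynchronize();                       // CPU can then read the results directly

    hipFree(data);
    return 0;
}
```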

Data transfers often consume more power than the actual computation itself, so eliminating those transfers boosts both performance and efficiency, and extending those benefits to the system level by sharing memory between discrete GPUs and CPUs gives AMD a tangible advantage over its competitors in the HPC space.

While AMD still appears to be a member of the HSA Foundation, it no longer actively promotes HSA in communications with the press. In any case, it's clear the core tenets of the open architecture live on in AMD's new proprietary implementation, which likely leans heavily on its open ROCm software ecosystem, now enjoying the fruits of DOE sponsorship.

AMD has blazed a path in this regard and secured big wins for exascale-class systems, but Intel is also working on its Ponte Vecchio architecture that will power the Aurora supercomputer at the U.S. Department of Energy's (DOE's) Argonne National Laboratory. Intel's approach leans heavily on its OneAPI programming model and also aims to tie together shared pools of memory between the CPU and GPU (lovingly named Rambo Cache). It will be interesting to learn more about the differences between the two approaches as more information trickles out.

Meanwhile, Nvidia might suffer in the supercomputer realm because it doesn't produce both CPUs and GPUs and, therefore, cannot enable similar functionality. Is this type of architecture, and the underlying unified programming models, required to hit exascale-class performance within acceptable power envelopes? That's an open question, but while both AMD and Intel have won exceedingly important contracts for the U.S. DOE's exascale-class supercomputers (the broader server ecosystem often adopts the winning HPC techniques), Nvidia hasn't announced any such wins, despite its dominant position in GPU-accelerated artificial intelligence workloads in the HPC and data center space.
...