Nvidia Preps A100 GPU with 80GB of HBM2E Memory
[Image: vkoYB5GWjcGw7ARyRKLqSb-1024-80.png.webp]

80 Giga...what?

Nvidia has quietly added a yet-unannounced version of its A100 compute GPU to its lineup: a model with 80GB of HBM2E memory in a standard full-length, full-height (FLFH) card form factor, meaning this beastly GPU drops into a PCIe slot just like a 'regular' GPU. Because Nvidia's compute GPUs like the A100 and V100 are mainly aimed at servers in cloud data centers, Nvidia prioritizes the SXM versions (which mount directly on a motherboard) over regular PCIe versions. That doesn't mean the company doesn't offer leading-edge GPUs in a regular PCIe card form factor, though.

Nvidia's A100-PCIe accelerator, based on the GA100 GPU with 6,912 CUDA cores and 80GB of HBM2E ECC memory (featuring 2TB/s of bandwidth), will have the same proficiencies as the company's A100-SXM4 accelerator with 80GB of memory, at least as far as compute capabilities (version 8.0) and virtualization/instance capabilities (up to seven instances) are concerned. There will, of course, be differences as far as power limits are concerned.
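The 2TB/s figure squares with simple HBM2E math. As a sanity check, here is a minimal sketch assuming the commonly cited 5,120-bit memory bus of the A100 and roughly 3.2 Gbps per pin for the 80GB model (both figures come from public spec sheets, not from this post):

```python
def hbm_bandwidth_gbs(bus_width_bits: float, pin_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s.

    bus width (bits) x per-pin data rate (Gbit/s) / 8 bits-per-byte.
    """
    return bus_width_bits * pin_rate_gbps / 8

# A100 80GB (assumed: 5120-bit bus, 3.2 Gbps/pin) -> ~2 TB/s
print(hbm_bandwidth_gbs(5120, 3.2))   # ~2048 GB/s
# A100 40GB (assumed: 5120-bit bus, ~2.43 Gbps/pin) -> ~1.6 TB/s
print(hbm_bandwidth_gbs(5120, 2.43))  # ~1555 GB/s
```

The same arithmetic reproduces the 1.6TB/s figure of the original 40GB module, which is where the 80GB card's advantage comes from: faster HBM2E pins on the same bus width.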

Nvidia has not officially introduced its A100-PCIe 80GB HBM2E compute card, but since it is listed in an official document spotted by VideoCardz, we can expect the company to launch it in the coming months. Because the card has not been launched yet, its actual pricing is unknown. CDW's partners sell A100 PCIe cards with 40GB of memory for $15,849 to $27,113 depending on the exact reseller, so it is safe to assume that an 80GB version will cost more than that.

Nvidia's proprietary SXM compute GPU form factor has several advantages over regular PCIe cards. Nvidia's latest A100-SXM4 modules support a thermal design power (TDP) of up to 400W (for both the 40GB and 80GB versions), since it is easier to supply that much power to such modules and to cool them (for example, using the refrigerant cooling system in the latest DGX Station A100). In contrast, Nvidia's A100 PCIe cards are rated for up to 250W, but they can be used inside rack servers as well as in high-end workstations.

Nvidia's cloud data center customers seem to prefer SXM4 modules over cards. As a result, Nvidia first launched its A100-SXM4 40GB HBM2E module (with 1.6TB/s of bandwidth) last year and followed up with a PCIe card version several months later. Likewise, the company introduced its A100-SXM4 80GB module (with faster HBM2E) last November but only started shipping it fairly recently.
...


Messages In This Thread
Nvidia Preps A100 GPU with 80GB of HBM2E Memory - by harlan4096 - 27 June 21, 08:56
