US Dept. of Energy Announces Frontier Supercomputer: Cray and AMD to Build 1.5 Exaflops Machine - harlan4096 - 10 May 19
Quote:
The history of the computing industry is one of constant progress. Processors get faster, storage gets cheaper, and memory gets denser. We see the repercussions of this advancement through all aspects of society, and that extends to the top as well, where national governments continue to invest in bigger and better supercomputers. One part technological necessity and one part technological race, the exascale era of supercomputers is about to begin, as orders for the first exaFLOPS-capable systems are now going out. It’s only fitting, then, that this morning the United States Department of Energy is announcing the contract for its fastest supercomputer yet, the Frontier system, which will be built by Cray and AMD.
Frontier is planned for delivery in 2021, and when it’s activated it will become the second and the more powerful of the US DOE’s two planned exascale systems for that year, with performance expected to reach 1.5 exaFLOPS. The ambitious system won’t come cheaply, however; with a price tag of over 500 million dollars for the system alone – and another 100 million dollars for R&D – Frontier is among the most expensive supercomputers ever ordered by the US Department of Energy.
The new supercomputer is being built as part of the US DOE’s CORAL-2 program for supercomputers, with Frontier scheduled to replace Oak Ridge National Laboratory’s current Summit supercomputer. Summit is the current reigning champion of the supercomputer world, with 200 petaFLOPS of performance, and accordingly the US DOE and Oak Ridge are aiming for a major step up with the new machine. All told, Frontier should be able to deliver over 7x the performance of Summit (roughly 7.5x, going from 200 petaFLOPS to 1.5 exaFLOPS), and is expected to be the fastest supercomputer in the world once it’s activated.
Like Summit (and Titan before it), Frontier is an open science system, meaning that it’s available to academic researchers to run simulations and experiments on. Accordingly, the lab expects the supercomputer to be used for a wide range of projects across numerous disciplines, including not only traditional modeling and simulation tasks, but also more data-driven techniques for artificial intelligence and data analytics. In fact, the latter is a bit of new ground for the lab and the system’s eventual users; just as we’ve seen in the enterprise space over the past few years, neural network-based AI is becoming an increasingly popular technique for solving problems and extracting insights from large datasets, and researchers are now looking at how to refine the techniques proven on current-generation systems and apply them to exascale-level projects.