Frontier can be used earlier than previously promised – almost the entire system is now operational. In the Linpack benchmark with double precision (FP64), Frontier achieves 1.1 exaflops of sustained computing power; its theoretical peak is almost 1.7 exaflops, and for AI workloads, thanks to the optimized matrix units, almost 6.9 exaflops. An exaflop equals a quintillion (10^18) calculations per second – a 1 followed by 18 zeros.
And that doesn’t even seem to be Frontier’s final form: on paper, the supercomputer is still missing a few thousand GPU accelerators, which could boost it to up to 2 exaflops of sustained computing power in the coming months.
Specifically, the system contains 9,408 AMD Epyc 7A53 processors – adapted 64-core chips from the Zen 3 family with their own codename Trento (instead of Milan). Each processor is paired with four GPU accelerators of the type AMD Instinct MI250X, for a total of 37,632 cards with 75,264 GPUs, or a good billion shader cores (each card carries two GPU chips with 14,080 shader cores apiece).
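The component counts above follow directly from the per-CPU and per-card figures; a quick back-of-the-envelope check, using only the numbers stated in the article:

```python
# Sanity check of Frontier's component counts as stated in the article.
cpus = 9408                 # AMD Epyc 7A53 "Trento" processors
cards_per_cpu = 4           # Instinct MI250X accelerators per CPU
gpus_per_card = 2           # each MI250X carries two GPU chips
shaders_per_gpu = 14_080    # shader cores per GPU chip

cards = cpus * cards_per_cpu        # accelerator cards in total
gpus = cards * gpus_per_card        # individual GPU chips
shaders = gpus * shaders_per_gpu    # shader cores across the machine

print(cards, gpus, f"{shaders / 1e9:.2f} billion shader cores")
# → 37632 75264 1.06 billion shader cores
```

The multiplication confirms the article's totals: 37,632 cards, 75,264 GPUs, and just over a billion shader cores.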
The 9.2 petabytes of main memory is split equally between fast HBM2e stacked memory on the GPUs and DDR4 RAM attached to the CPUs. Thanks to the Infinity Fabric link, the Epyc and Instinct chips share memory access: within each node, they work together cache-coherently in a Unified Memory Architecture (UMA).
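Dividing the stated total evenly across all nodes gives a rough idea of the per-node memory; this is a sketch assuming the 9.2 PB figure is exact, one CPU per node, and a perfectly even split:

```python
# Back-of-the-envelope per-node memory, assuming the article's 9.2 PB
# total is distributed evenly across all 9,408 nodes (one CPU each)
# and split half-and-half between HBM2e and DDR4, as stated.
total_pb = 9.2
nodes = 9408

per_node_tb = total_pb * 1000 / nodes   # PB -> TB, per node
hbm2e_tb = per_node_tb / 2              # half on the GPUs (HBM2e)
ddr4_tb = per_node_tb / 2               # half on the CPUs (DDR4)

print(f"{per_node_tb:.2f} TB per node "
      f"({hbm2e_tb:.2f} TB HBM2e + {ddr4_tb:.2f} TB DDR4)")
# → 0.98 TB per node (0.49 TB HBM2e + 0.49 TB DDR4)
```

So each node ends up with roughly a terabyte of coherently shared memory under these assumptions.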
The compute nodes are connected to each other via the Slingshot interconnect developed by the HPE division Cray.
Exascale in the West
Frontier is located at Oak Ridge National Laboratory (ORNL) and is the first exascale-class supercomputer outside China – systems built with Chinese technology have apparently exceeded 1 EFlops since 2021. Aurora, built with Intel hardware, was actually supposed to claim this title, but it keeps being delayed by Intel’s postponements.
As a further record, the supercomputers with AMD hardware are the most efficient high-performance systems: Frontier achieves just over 52 gigaflops per watt at a total power consumption of 21.1 megawatts; its mini offshoot Crusher, aka Frontier TDS, even reaches almost 63. The most efficient system with Nvidia GPUs is the “Scalable Module” of the South Korean SSC-21 in sixth place, with just under 34 gigaflops per watt. However, it still uses A100 accelerators from the previous Ampere generation rather than the new Hopper cards.
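The efficiency figure can be cross-checked from the two numbers the article gives for Frontier, the Linpack result and the total power draw:

```python
# Cross-check of the stated efficiency: sustained Linpack performance
# divided by total power consumption, using the article's figures.
linpack_eflops = 1.1    # sustained FP64 performance in exaflops
power_mw = 21.1         # total power consumption in megawatts

# 1 EFlops = 1e9 GFlops; 1 MW = 1e6 W
gflops_per_watt = linpack_eflops * 1e9 / (power_mw * 1e6)
print(f"{gflops_per_watt:.1f} GFlops/W")
# → 52.1 GFlops/W
```

That lands at "just over 52 gigaflops per watt", matching the stated record.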
Research starts this year
US research teams will gain access to Frontier later this year for nuclear simulation, disease research, and artificial intelligence development, among other fields.
In March 2022, the United States Department of Energy (DoE) admitted another delay, which led to the creation of the mini offshoot Crusher. Under the name Frontier TDS, this test system made it to number 29 in the current Top500 list of the world’s fastest supercomputers.
At that time, those responsible feared that Frontier would not be commissioned until 2023. Rumor had it that the delay was due to scaling issues with HPE’s Slingshot interconnect, which cables all the racks together – with a total of 90 kilometers of cabling, mind you.