What Is an Exaflop?

Computers are crunching more numbers than ever to crack the most sophisticated challenges of our time: how to cure diseases like COVID and cancer, mitigate climate change and more.

These and other grand challenges ushered computing into today’s exascale era, when top performance is often measured in exaflops.

So, What Is an Exaflop?

An exaflop is a measure of performance for a supercomputer that can calculate at least 10^18, or one quintillion, floating point operations per second.

In exaflop, the exa- prefix means a quintillion, which is a billion billion, or a one followed by 18 zeros. Similarly, an exabyte is a memory subsystem packing a quintillion bytes of data.

The “flop” in exaflop is an abbreviation for floating point operations. The rate at which a system executes flops each second is measured in exaflop/s.

Floating point refers to calculations in which all the numbers are expressed with decimal points.

1,000 Petaflops = an Exaflop

The prefix peta- means 10^15, or a one with 15 zeros behind it. So, an exaflop is a thousand petaflops.

The Exaflop in Historical Context

To get a sense of what a heady calculation an exaflop is, imagine a billion people, each holding a billion calculators. (Obviously, they’ve got big hands!)

If they all hit the equals sign at the same time, they’d execute one exaflop.

Indiana University, home to Big Red 200 and several other supercomputers, puts it this way: To match what an exaflop computer can do in just one second, you’d have to perform one calculation every second for 31,688,765,000 years.
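These magnitudes are easy to sanity-check with a few lines of arithmetic. The snippet below is my own illustration, not from the article; it lands within rounding distance of Indiana University’s figure, assuming a 365.25-day year.

```python
exaflop = 10**18      # one quintillion floating point operations per second
petaflop = 10**15

# An exaflop is a thousand petaflops.
print(exaflop // petaflop)  # 1000

# At one calculation per second, how long to match one exaflop-second?
seconds_per_year = 365.25 * 24 * 3600
years = exaflop / seconds_per_year
print(f"about {years:,.0f} years")
```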

A Brief History of the Exaflop

For most of supercomputing’s history, a flop was a flop, a fact that’s morphing as workloads embrace AI.

People used numbers expressed in the highest of several precision formats, called double precision, as defined by the IEEE Standard for Floating Point Arithmetic. It’s dubbed double precision, or FP64, because each number in a calculation requires 64 bits, data nuggets expressed as a zero or one. By contrast, single precision uses 32 bits.

Double precision uses those 64 bits to ensure each number is accurate to a tiny fraction. It’s like saying 1.0001 + 1.0001 = 2.0002, instead of 1 + 1 = 2.
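The difference is easy to see in a couple of lines of NumPy (my illustration; the article itself doesn’t use NumPy). FP64 has bits to spare for the small fraction, while FP16’s spacing between neighboring values near 1.0 is about 0.001, so 1.0001 rounds away entirely:

```python
import numpy as np

a64 = np.float64(1.0001)
print(a64 + a64)   # 2.0002, the small fraction survives in FP64

a16 = np.float16(1.0001)  # rounds to 1.0; FP16 can't hold the fraction
print(a16 + a16)   # 2.0
```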

The format is a good fit for what made up the bulk of the workloads at the time: simulations of everything from atoms to airplanes that need to ensure their results come close to what they represent in the real world.

So, it was natural that the LINPACK benchmark, aka HPL, which measures performance on FP64 math, became the default measurement in 1993, when the TOP500 list of the world’s most powerful supercomputers debuted.

The Big Bang of AI

A decade ago, the computing industry heard what NVIDIA CEO Jensen Huang describes as the big bang of AI.

This powerful new form of computing started showing significant results on scientific and business applications. And it takes advantage of some very different mathematical methods.

Deep learning is not about simulating real-world objects; it’s about sifting through mountains of data to find patterns that enable fresh insights.

Its math demands high throughput, so doing many, many calculations with simplified numbers (like 1.01 instead of 1.0001) is much better than doing fewer calculations with more precise ones.

That’s why AI uses lower precision formats like FP32, FP16 and FP8. Their 32-, 16- and 8-bit numbers let users do more calculations faster.
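NumPy can show the tradeoff among most of these formats. (This sketch is my own addition; FP8 has no standard NumPy type, so it stops at FP16.) Fewer bits buy speed and memory at the cost of precision and range:

```python
import numpy as np

for dtype in (np.float64, np.float32, np.float16):
    info = np.finfo(dtype)
    # bits per number, approximate decimal digits, largest finite value
    print(f"{dtype.__name__}: {info.bits} bits, "
          f"~{info.precision} decimal digits, max = {float(info.max):.3g}")
```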

Mixed Precision Evolves

For AI, using 64-bit numbers would be like taking your whole closet when going away for the weekend.

Finding the ideal lower-precision technique for AI is an active area of research.

For example, the first NVIDIA Tensor Core GPU, Volta, used mixed precision. It executed matrix multiplication in FP16, then accumulated the results in FP32 for higher accuracy.
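The benefit of accumulating in a wider format can be mimicked on a CPU with NumPy (a toy illustration of the idea, not how Tensor Cores are actually programmed). Summing 10,000 FP16 copies of 0.1 entirely in FP16 stalls once the running total gets large, because the gap between adjacent FP16 values outgrows the addend; accumulating the same FP16 inputs in FP32 stays close to the true total of 1,000:

```python
import numpy as np

x = np.full(10_000, 0.1, dtype=np.float16)  # each element is really ~0.0999756

# Accumulate entirely in FP16: the sum stops growing partway through.
acc16 = np.float16(0.0)
for v in x:
    acc16 = np.float16(acc16 + v)

# Same FP16 inputs, accumulated in FP32, as Volta did.
acc32 = np.sum(x, dtype=np.float32)

print(float(acc16))  # stalls far below the true total
print(float(acc32))  # close to 1,000
```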

Hopper Accelerates With FP8

More recently, the NVIDIA Hopper architecture debuted with a lower-precision method for training AI that’s even faster. The Hopper Transformer Engine automatically analyzes a workload, adopts FP8 whenever possible and accumulates results in FP32.

When it comes to the less compute-intensive job of inference, running AI models in production, major frameworks such as TensorFlow and PyTorch support eight-bit integer numbers for fast performance. That’s because they don’t need decimal points to do their work.
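Here’s a minimal sketch of the idea behind eight-bit integer (INT8) inference, assuming symmetric quantization; the frameworks’ actual schemes are more elaborate. The 32-bit weights are scaled onto the integer range [-127, 127], and the round-trip error stays below half a quantization step:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal(1000).astype(np.float32)  # stand-in model weights

# Symmetric INT8 quantization: map [-max|w|, max|w|] onto [-127, 127].
scale = float(np.abs(w).max()) / 127.0
w_int8 = np.round(w / scale).astype(np.int8)

# Dequantize to check how much the round trip lost.
w_restored = w_int8.astype(np.float32) * scale
max_err = float(np.abs(w - w_restored).max())
print(max_err <= scale / 2 + 1e-6)  # True: error bounded by half a step
```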

The good news is NVIDIA GPUs support all of these precision formats, so users can accelerate every workload optimally.

Last year, the IEEE P3109 committee started work on an industry standard for precision formats used in machine learning. This work could take another year or two.

Some Sims Shine at Lower Precision

While FP64 remains popular for simulations, many use lower-precision math when it delivers useful results faster.

HPC applications vary in the factors that affect their performance.

For example, researchers run LS-Dyna from Ansys, a popular simulator for car crashes, in FP32. Genomics is another field that tends to favor lower-precision math.

In addition, many traditional simulations are starting to adopt AI for at least part of their workflows. As workloads shift toward AI, supercomputers need to support lower precision to run these emerging applications efficiently.

Benchmarks Evolve With Workloads

Recognizing these changes, researchers including Jack Dongarra, the 2021 Turing Award winner and a contributor to HPL, debuted HPL-AI in 2019, a new benchmark better suited to measuring these new workloads.

“Mixed-precision techniques have become increasingly important to improve the computing efficiency of supercomputers, both for traditional simulations with iterative refinement techniques as well as for AI applications,” Dongarra said in a 2019 blog. “Just as HPL allows benchmarking of double-precision capabilities, this new approach based on HPL allows benchmarking of the mixed-precision capabilities of supercomputers at scale.”

Thomas Lippert, director of the Jülich Supercomputing Centre, agreed.

“We’re using the HPL-AI benchmark because it’s a good measure of the mixed-precision work in a growing number of our AI and scientific workloads, and it reflects accurate 64-bit floating point results, too,” he said in a blog posted last year.

Today’s Exaflop Systems

In a June report, 20 supercomputer centers around the world reported their HPL-AI results, three of them delivering more than an exaflop.

One of those systems, a supercomputer at Oak Ridge National Laboratory, also exceeded an exaflop in FP64 performance on HPL.

A sampler of the June 2022 HPL-AI results.

Two years ago, a very unconventional system was the first to hit an exaflop. The crowdsourced supercomputer assembled by the Folding@home consortium passed the milestone after it put out a call for help fighting the COVID-19 pandemic and was deluged with donated time on more than a million computers.

Exaflops in Theory and Practice

Since then, several organizations have installed supercomputers that deliver more than an exaflop in theoretical peak performance. It’s worth noting that the TOP500 list reports both Rmax (real) and Rpeak (theoretical) scores.

Rmax is simply the best performance a computer has actually demonstrated.

Rpeak is a system’s top theoretical performance if everything could run at its highest possible level, something that almost never really happens. It’s typically calculated by multiplying the number of processors in a system by their clock speed, then multiplying the result by the number of floating point operations the processors can perform per clock cycle.
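A back-of-the-envelope Rpeak calculation looks like this. All the figures below are invented for illustration and chosen to land at exactly one exaflop; real systems report far messier numbers.

```python
# Hypothetical machine, numbers chosen for a round result.
processors = 1_000_000   # processing units in the system
clock_hz = 2.0e9         # 2 GHz clock speed
flops_per_cycle = 500    # FP64 operations each unit can finish per cycle

rpeak = processors * clock_hz * flops_per_cycle
print(f"Rpeak = {rpeak / 1e18:.1f} exaflops")  # Rpeak = 1.0 exaflops
```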

So, if someone says their system can do an exaflop, consider asking whether that’s Rmax (real) or Rpeak (theoretical).

Many Metrics in the Exaflop Age

It’s another one of the many nuances of this new exascale era.

And it’s worth noting that HPL and HPL-AI are synthetic benchmarks, meaning they measure performance on math routines, not real-world applications. Other benchmarks, like MLPerf, are based on real-world workloads.

In the end, the best measure of a system’s performance, of course, is how well it runs a user’s applications. That’s a measure based not on exaflops, but on ROI.
