Amped Up: HPC Centers Ride A100 GPUs to Accelerate Science


Six supercomputing centers around the world are among the first to adopt the NVIDIA Ampere architecture. They’ll use it to carry science into the exascale era in fields from astrophysics to virus microbiology.

The high performance computing centers, scattered across the U.S. and Germany, will deploy a total of nearly 13,000 A100 GPUs.

Together these GPUs pack more than 250 petaflops of peak performance for simulations that use 64-bit floating point math. For AI inference jobs that use mixed-precision math and leverage the A100 GPU’s support for sparsity, they deliver a whopping 8.07 exaflops.
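Those aggregate figures roughly match a back-of-the-envelope calculation using NVIDIA’s published per-A100 peak rates; the sketch below treats the "nearly 13,000" GPU count as a round-number assumption, so its totals are approximate rather than the exact numbers quoted above.

```c
/* Rough sanity check of the aggregate throughput figures quoted above.
 * Per-GPU peaks are NVIDIA's published A100 specs; the GPU count is the
 * article's approximate total across the six centers (an assumption). */
#include <stdio.h>

int main(void) {
    const double num_gpus           = 13000.0;  /* "nearly 13,000" A100s       */
    const double fp64_tflops        = 19.5;     /* FP64 via Tensor Cores       */
    const double sparse_fp16_tflops = 624.0;    /* FP16 Tensor Core, sparse    */

    printf("Simulation (FP64): ~%.0f petaflops\n",
           num_gpus * fp64_tflops / 1e3);           /* ~254 PF, "more than 250" */
    printf("AI inference (sparse mixed precision): ~%.1f exaflops\n",
           num_gpus * sparse_fp16_tflops / 1e6);    /* ~8.1 EF, close to 8.07   */
    return 0;
}
```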

Researchers will harness that horsepower to drive science forward in many dimensions. They plan to simulate larger models, train and deploy deeper networks, and pioneer an emerging hybrid field of AI-assisted simulations.

Argonne deployed one of the first NVIDIA DGX A100 systems. Photo courtesy of Argonne National Laboratory.

For example, Argonne’s researchers will seek a COVID-19 vaccine by simulating a key part of the protein spike of the coronavirus, which is made up of as many as 1.5 million atoms.

The molecule “is a beast, but the A100 lets us run simulations of these subsystems so we can understand how this virus infects people,” said Arvind Ramanathan, a computational biologist at Argonne National Laboratory, which will use a cluster of 24 NVIDIA DGX A100 systems.

In related efforts, “we will see big improvements in drug discovery by screening millions and billions of compounds at a time. And we may see things we could never see before, like how two proteins bind to each other,” he said.

A100 Puts AI in the Scientific Loop

“Much of this work is hard to simulate on a computer, so we use AI to intelligently guide where and when we will sample next,” said Ramanathan.

It’s part of an emerging trend of scientists using AI to steer simulations. The GPUs will then speed up the processing of biological samples by “at least two orders of magnitude,” he added.

Across the country, the National Energy Research Scientific Computing Center (NERSC) is poised to become the biggest of the first wave of A100 users. The center in Berkeley, Calif., is working with Hewlett Packard Enterprise to deploy 6,200 of the GPUs in Perlmutter, its pre-exascale system.

“Across NERSC’s science and algorithmic areas, we have increased performance by up to 5x when comparing a single V100 GPU to a KNL CPU node on our current-generation Cori system, and we expect even greater gains with the A100 on Perlmutter,” said Sudip Dosanjh, NERSC’s director.

Exascale Computing Group Works on Simulations, AI

A team dedicated to exascale computing at NERSC has outlined nearly 30 projects for Perlmutter that use large-scale simulations, data analytics or deep learning. Some projects combine HPC with AI, such as one using reinforcement learning to control light source experiments. Another employs generative models to reproduce costly simulations at high-energy physics detectors.

Two of NERSC’s HPC applications have already prototyped use of the A100 GPU’s double-precision Tensor Cores. They’re seeing significant increases in performance over previous-generation Volta GPUs.

Software optimized for the 10,000-way parallelism Perlmutter’s GPUs provide will be ready to run on future exascale systems, Christopher Daley, an HPC performance engineer at NERSC, said in a talk at GTC Digital. NERSC supports nearly a thousand scientific applications in areas such as astrophysics, Earth science, fusion energy and genomics.

“On Perlmutter, we need compilers that support all the programming models our users need and expect: MPI, OpenMP, OpenACC, CUDA and optimized math libraries. The NVIDIA HPC SDK checks all of those boxes,” said Nicholas Wright, NERSC’s chief architect.
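For readers unfamiliar with those programming models, here is a minimal, hypothetical sketch of the directive-based OpenACC style such compilers build for A100 GPUs. It is not a NERSC application; the file name and compile line are only illustrative of the NVIDIA HPC SDK’s nvc compiler.

```c
/* saxpy.c: a toy OpenACC example (hypothetical, for illustration only).
 * A possible build line with the NVIDIA HPC SDK:
 *   nvc -acc -gpu=cc80 saxpy.c -o saxpy
 */
#include <stdio.h>
#include <stdlib.h>

void saxpy(int n, float a, const float *x, float *y) {
    /* When built with -acc, the compiler offloads this loop to the GPU. */
    #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}

int main(void) {
    const int n = 1 << 20;
    float *x = malloc(n * sizeof *x);
    float *y = malloc(n * sizeof *y);
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy(n, 3.0f, x, y);

    printf("y[0] = %.1f\n", y[0]);  /* expect 5.0 */
    free(x);
    free(y);
    return 0;
}
```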

German Effort to Map the Brain

AI will be the focus of some of the first applications for the A100 on a new 70-petaflops system designed by France’s Atos for the Jülich Supercomputing Center in western Germany.

One, called Deep Rain, aims to make fast, short-term weather predictions, complementing traditional approaches that use huge, comparatively slow simulations of the atmosphere. Another project plans to create an atlas of fibers in the human brain, assembled with deep learning from thousands of high-resolution 2D brain images.

The new A100 system at Jülich also will help researchers push the frontiers of understanding the strong forces binding quarks, the sub-atomic building blocks of matter. On the macro scale, a climate science project will model the Earth’s surface and subsurface water flow.

“Many of these applications are constrained by memory,” said Dirk Pleiter, a theoretical physicist who manages a research team in applications-oriented technology development at Jülich. “So, what is extremely attractive for us is the increased memory footprint and memory bandwidth of the A100,” he said.

The new GPU’s ability to speed up double-precision math by as much as 2.5x is another feature researchers are eager to harness. “I’m confident that when people understand the opportunities of more compute performance, they will have a strong incentive to use GPUs,” he added.

Data-Hungry System Likes Fast NVLink

Some 230 miles south of Jülich, the Karlsruhe Institute of Technology (KIT) is partnering with Lenovo to build a new 17-petaflops system that will pack 740 A100 GPUs on an NVIDIA Mellanox 200 Gbit/s InfiniBand network. It will tackle grand challenges that include:

  • Atmospheric simulations on the kilometer scale for climate science
  • Research to fight COVID-19, including support for Folding@home
  • Explorations of particle physics beyond the Higgs boson for the Large Hadron Collider
  • Research on next-generation materials that could replace lithium-ion batteries
  • AI applications in robotics, language processing and renewable energy

“We tackle data-intensive simulations and AI workflows, so we appreciate the third-generation NVLink connecting the new GPUs,” said Martin Frank, director of KIT’s supercomputing center and a professor of computational science and math.

“We also look forward to the multi-instance GPU feature that effectively gives us up to 28 GPUs per node instead of four; that will greatly benefit many of our applications,” he added. Each A100 can be partitioned into as many as seven independent GPU instances, so a four-GPU node can present up to 28.

Just outside Munich, the computing center for the Max Planck Institute is developing with Lenovo a system called Raven-GPU, powered by 768 NVIDIA A100 GPUs. It will support work in fields like astrophysics, biology, theoretical chemistry and advanced materials science. The research institute aims to have Raven-GPU installed by the end of the year and is taking requests now for help porting applications to the A100.

Indiana System Counters Cybersecurity Threats

Finally, Indiana University is building Big Red 200, a 6-petaflops system expected to become the fastest university-owned supercomputer in the U.S. It will use 256 A100 GPUs.

Launched in June, it’s among the first academic facilities to adopt the Cray Shasta technology from Hewlett Packard Enterprise that others will use in future exascale systems.

Big Red 200 will apply AI to counter cybersecurity threats. It also will tackle grand challenges in genetics to help enable personalized healthcare, as well as work in climate modeling, physics and astronomy.

Photo at top: Shyh Wang Hall at UC Berkeley will be the home of NERSC’s Perlmutter supercomputer.
