Supercomputers Speed Up Scientific Discovery at Several Centers

Two supercomputers have passed the exascale mark (a quintillion, or 10^18, operations per second) on an AI-oriented benchmark, crunching data for complex problems that range from uncovering the structure of the universe to discovering new molecules and predicting global climate. By the standard measure, though, most systems remain at the same level as earlier in the year. According to IEEE Spectrum:

“Twice a year, Top500.org publishes a ranking of raw computing power using a value called Rmax, derived from benchmark software called Linpack. By that measure, it’s been a bit of a dull year. The rankings of the top nine systems are unchanged from June, with Japan’s Supercomputer Fugaku on top at 442,010 trillion floating point operations per second. That leaves the Fujitsu-built system a bit shy of the long-sought goal of exascale computing—one million trillion 64-bit floating-point operations per second, or exaflops.”
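
To put those figures in context, here is a quick back-of-the-envelope check using only the numbers quoted above (a minimal sketch in Python; the 10^18 threshold is the standard definition of an exaflops):

```python
# Convert the quoted Rmax figure into exaflops and compare it against
# the exascale threshold of 10**18 floating-point operations per second.

EXAFLOPS = 1e18  # one exaflops = a quintillion (10^18) FLOP/s

# Fugaku's Rmax as reported by Top500: 442,010 teraflops (10^12 FLOP/s each)
fugaku_rmax = 442_010 * 1e12

print(f"Fugaku Rmax: {fugaku_rmax / EXAFLOPS:.3f} exaflops")      # 0.442
print(f"Shortfall:   {(EXAFLOPS - fugaku_rmax) / EXAFLOPS:.1%}")  # 55.8%
```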

Fugaku and its competitor, the Summit supercomputer at Oak Ridge National Laboratory, are the two that passed the exascale mark, as measured by the HPL-AI benchmark, which gauges a system’s performance using the lower-precision numbers (16 bits or fewer) common to neural network computing. “Using that yardstick, Fugaku hits 2 exaflops (no change from June 2021) and Summit reaches 1.4 (a 23 percent increase).”
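
The trade-off behind those larger numbers is easy to demonstrate. The sketch below (illustrative only, using NumPy's float16 type rather than any actual HPL-AI kernel) shows how a 16-bit value loses precision but occupies a quarter of the memory of a 64-bit one; mixed-precision runs like HPL-AI accept the rounding and then recover full 64-bit accuracy with a refinement step:

```python
import numpy as np

# A 16-bit float keeps only ~3 decimal digits, so values round off.
pi64 = np.float64(np.pi)
pi16 = np.float16(pi64)
print(float(pi64))  # 3.141592653589793
print(float(pi16))  # 3.140625 after rounding to 16 bits
print(f"relative error: {abs(float(pi16) - float(pi64)) / float(pi64):.1e}")

# The payoff: a quarter of the memory traffic of 64-bit values, and much
# higher throughput on hardware with dedicated low-precision units.
a = np.ones(1_000_000, dtype=np.float64)
b = a.astype(np.float16)
print(a.nbytes, "bytes at 64-bit vs", b.nbytes, "bytes at 16-bit")
```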

But HPL-AI isn’t an ideal test: it measures raw mixed-precision arithmetic rather than real machine-learning workloads. So this week MLCommons, an industry organization that develops benchmarks for AI systems of all sizes, released results from version 1.0 of its high-performance computing benchmark suite, MLPerf HPC.

The three neural networks trialed were (a sketch of the time-to-train scoring they share follows the list):

  • CosmoFlow, which uses the distribution of matter in telescope images to predict properties of dark energy and other mysteries of the universe.
  • DeepCAM, which detects cyclones and other extreme weather in climate data.
  • OpenCatalyst, the newest benchmark, which predicts the quantum mechanical properties of catalyst systems in order to discover and evaluate new catalyst materials for energy storage.
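
MLPerf HPC scores these workloads chiefly on time-to-train: the wall-clock time a system needs to train the model to a target quality. The toy sketch below illustrates the shape of that metric; the model, data, and loss threshold are hypothetical stand-ins chosen only so the loop runs anywhere:

```python
import time
import numpy as np

# Toy stand-in for a time-to-train measurement: train until a quality
# target is met, then report the elapsed wall-clock time. Real MLPerf HPC
# runs train networks like CosmoFlow or DeepCAM on supercomputers.

rng = np.random.default_rng(42)
X = rng.standard_normal((1_000, 8))
true_w = rng.standard_normal(8)
y = X @ true_w + 0.01 * rng.standard_normal(1_000)

TARGET_LOSS = 1e-3  # hypothetical quality target
lr = 0.1            # learning rate for plain gradient descent
w = np.zeros(8)     # parameters of a tiny linear model

start = time.perf_counter()
epochs = 0
while True:
    residual = X @ w - y
    loss = float(np.mean(residual ** 2))
    if loss <= TARGET_LOSS:
        break
    w -= lr * (2 / len(y)) * (X.T @ residual)  # gradient step
    epochs += 1

elapsed = time.perf_counter() - start
print(f"reached loss {loss:.2e} in {epochs} epochs, {elapsed:.3f} s")
```

On actual supercomputers, the interesting question becomes how that time shrinks as thousands of processors are put to work on the same job.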

The results help each center gauge its system’s strengths and weaknesses on machine-learning workloads and plan how to reach its research goals. Among the supercomputers tested were systems at Argonne National Laboratory, the Swiss National Supercomputing Centre, Fujitsu and RIKEN, Helmholtz AI, Lawrence Berkeley National Laboratory, the (U.S.) National Center for Supercomputing Applications at the University of Illinois, Nvidia, and the Texas Advanced Computing Center.

read more at spectrum.ieee.org