Technologies Driving The Future: Much like the FIFA world rankings of the top football teams or the songs skipping up and down the music charts each week, the most powerful computers around the globe are indexed in what is called the Top500 list. For two straight years, Japan’s Fugaku dominated the supercomputing charts, boasting a computing speed of 442 petaFLOPS. But a new challenger—the 1.1-exaFLOPS Frontier system at the Oak Ridge National Laboratory in the US—made its debut atop the latest rankings released in May 2022, nudging Fugaku down to the number two spot.
Beyond the top places, the rest of the Top500 has also seen plenty of shuffling with each of the list’s biannual publications. Such movement in the rankings is a testament to the breakneck pace of technological advancement in the high performance computing (HPC) sector. By providing high-speed calculations on vast amounts of data, HPC systems not only stand at the frontiers of the tech industry but also serve as enabling tools for tackling complex problems in many other fields. For example, scientists can use such technologies to uncover biomedical breakthroughs from clinical data or model the properties of novel materials more efficiently and accurately.
Given the ever-expanding value of these innovations, it comes as no surprise that researchers and industry leaders alike continue to challenge the ceiling for supercomputing—from components to clusters, minor tweaks to significant performance upgrades. As the promising potential of HPC relies on many moving parts, here are the technologies and trends laying the groundwork for even more powerful and accessible supercomputing systems.
Revolutionizing machine intelligence
With the surge of data produced daily, artificial intelligence (AI) and data analytics tools are increasingly being used to extract relevant information and build models, which can then guide decision making or optimize systems. HPC is vital for enhancing AI technologies, including machine learning (ML) and deep learning (DL) systems built on neural networks that emulate the human brain’s processing patterns.
Instead of analyzing data according to a predetermined set of rules, DL algorithms detect patterns and learn from a set of training data, later applying those learned rules to new data or even to an entirely new problem. DL performance often depends on the amount and quality of data available—making it computationally expensive and time-consuming—but supercomputers can accelerate these learning phases and scour through more data to improve the resulting model.
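As a toy illustration of that learn-from-data loop, the sketch below fits a simple linear model by gradient descent with NumPy. The data, learning rate and model here are all invented for demonstration and bear no relation to any production DL system.

```python
import numpy as np

# Illustrative only: "learn" the rule behind noisy training data
# generated from y = 2x + 1, then the fitted (w, b) generalize to
# unseen inputs drawn from the same process.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 200)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.05, 200)  # noisy examples

w, b = 0.0, 0.0   # model parameters, learned rather than hand-coded
lr = 0.5          # learning rate (chosen for this toy problem)
for _ in range(500):
    err = (w * x + b) - y
    # Gradients of the mean squared error with respect to w and b
    w -= lr * 2.0 * np.mean(err * x)
    b -= lr * 2.0 * np.mean(err)

print(round(w, 1), round(b, 1))  # close to the true rule: 2.0 and 1.0
```

Scaling this loop to billions of parameters and examples is precisely the workload that makes supercomputers valuable for DL training.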
In the medical sphere, for example, computational models simulate how intricate molecular networks interact to drive disease progression. Such discoveries can then spark novel ways to detect and treat complex disorders such as cancer and cardiometabolic conditions. To investigate therapeutic targets against SARS-CoV-2, the virus that causes COVID-19, researchers from Chulalongkorn University in Thailand conducted molecular dynamics simulations using TARA, the 250-teraFLOPS supercomputing cluster housed at the National Science and Technology Development Agency’s Supercomputer Center. Through these simulations, the team mapped the interactions between a class of inhibitors and a protein known to be important for viral replication, generating new insights into how such drugs can be better designed to bind to the protein and potentially suppress SARS-CoV-2.
The power of HPC can also be harnessed for weather predictions and climate change monitoring, with South Korea building high-resolution, high-accuracy forecast models through its National Center for Meteorological Supercomputer. The Korea Meteorological Administration refreshed its HPC resources just last year to meet the extensive computational demands of climate modeling and AI analytics, installing Lenovo ThinkSystem SD650 V2 servers built on third-gen Intel Xeon Scalable processors. Clocking in at 50 petaFLOPS, the new cluster is eight times faster and four times more energy efficient than its predecessor.
While supercomputing no doubt enables AI workloads, these smart systems can in turn be useful for optimizing HPC data centers, such as by evaluating network configurations for enhanced security. By monitoring server health, predictive algorithms can also alert users to potential equipment failures, helping reduce downtime and improve efficiency to support continuous HPC tasks.
A matrix of chips
HPC-powered AI may cover the software side of supercomputing, but the hardware is just as important. Advances in this space depend on innovations in developing processors or chips—pushing the bounds of how many operations can be completed in as short a timeframe as possible.
Perhaps the most familiar of these chips are the central processing units (CPUs), which can easily run simple models that process a relatively smaller volume of data. They typically have access to more memory and are designed to perform several smaller tasks simultaneously, making them useful for frequently repeated tasks but not for complex and lengthy work like training models.
Packing in more CPU nodes increases computing capacity, but simply adding more units to the system is neither efficient nor practical. To handle heavy ML workloads, accelerators in the form of graphics processing units (GPUs) and tensor processing units (TPUs) are critical to scaling up HPC resources—and are in fact the defining components that separate supercomputers from their lower-performing counterparts.
As the name suggests, GPUs excel at rendering graphics—no choppy videos or lagging frame rates in sight. More than that, they are built to perform calculations in the nick of time, since smoothing out those geometric figures and transitions hinges on completing successive operations as quickly as possible. This speed enables GPUs to process larger models and perform data-intensive ML tasks.
TPUs push these computing capabilities a step further by taking care of matrix calculations more commonly found in neural networks for DL models than in graphical rendering. They are integrated circuits consisting of two units, each designed to run different types of operations. The unit for matrix multiplications uses a mixed precision format, shifting between 16 bits for the calculations and 32 bits for the results.
Operations run much faster on 16 bits and use up less memory, but keeping some parts of the model on 32 bits can help reduce errors upon executing the algorithm. With such an architecture, matrix calculations can be completed on just one TPU core rather than be spread out on multiple GPU nodes—leading to a significant boost in computing speed and power without sacrificing accuracy. In the race to design better processors, chip manufacturing companies from all over the world are constantly exploring novel engineering methods and applying the latest research in materials science to elevate the performance of these critical HPC components.
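The trade-off behind mixed precision can be demonstrated without TPU hardware. The NumPy sketch below (purely illustrative, not TPU code) multiplies the same 16-bit matrices with and without 32-bit accumulation and compares each result against a 64-bit reference.

```python
import numpy as np

# Illustrative sketch of mixed precision: keep inputs in 16-bit
# floats for speed and memory savings, but carry the results in
# 32 bits to limit rounding error.
rng = np.random.default_rng(1)
a = rng.standard_normal((64, 64)).astype(np.float16)
b = rng.standard_normal((64, 64)).astype(np.float16)

# Half precision throughout: compact, but the 16-bit result is lossy
c_fp16 = a @ b

# Mixed precision: same 16-bit inputs, 32-bit computation and result
c_mixed = a.astype(np.float32) @ b.astype(np.float32)

# Compare both against a full 64-bit reference
c_ref = a.astype(np.float64) @ b.astype(np.float64)
err_fp16 = np.abs(c_fp16.astype(np.float64) - c_ref).max()
err_mixed = np.abs(c_mixed.astype(np.float64) - c_ref).max()
print(err_mixed < err_fp16)  # mixed precision tracks the reference better
```

The same principle underlies the TPU architecture described above: accept low-precision inputs where speed matters, but accumulate in higher precision where errors would otherwise compound.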
Accessing HPC resources on demand
Supercomputing systems are hardly cheap—requiring significant financial, spatial and energy resources to build and maintain, not to mention the technical know-how to use them effectively. These costs can prove a barrier to widespread HPC adoption. Although HPC infrastructure is typically installed in in-house data centers, it has also been deployed on the cloud in recent years to widen access to these innovations.
Cloud computing involves delivering tech services over the internet, ranging from analytical processes to storage space. Called HPC as a Service (HPCaaS), this distribution of supercomputing resources across cyberspace provides increased flexibility and scalability compared to on-site centers alone.
With supercomputing transitioning from academia to industry, HPCaaS can serve as an important bridge to place these powerful resources within the reach of more end users, from finance to oil and gas to automotive sectors. Through optimized scheduling strategies and allocation of resources, these systems can accommodate such diverse industry-specific workloads and encourage stronger collaborations over shared HPC capabilities.
In April this year, Japanese infocomms company Fujitsu—which jointly developed Fugaku alongside the RIKEN research institute—launched its HPCaaS portfolio with a vision to further spur technological disruption across industries. Through the cloud, commercial organizations can access the computational resources of Fujitsu’s Supercomputer PRIMEHPC FX1000 servers, which run on Arm-based A64FX processors and are supplemented by software for AI and ML workloads. These chips, which also power the Fugaku system, are not only high-end performers but are also remarkably energy efficient.
To further encourage partnerships between academia and industry, Fujitsu is again working with the RIKEN research institute to ensure compatibility between the HPCaaS portfolio and the Fugaku system, granting more users and organizations the opportunity to use the region’s most powerful supercomputer.
The HPC service’s official release in the Japanese market is slated for October this year, and an international roll-out is also planned for the near future. By then, Fujitsu will also have become the country’s first-ever HPCaaS solutions provider, rivaling the infrastructure offerings of global companies including Google Cloud and IBM.
Flexible HPC consumption models will be key to bridging the digital divide, especially in Asia where technological progress is uneven and heterogeneous. By sharing top-notch resources, cross-border collaborations and the democratization of supercomputing can bring innovative ideas to life and carve new research directions with greater agility.
To the exascale and beyond
The arrival of Frontier marks an exciting milestone for the HPC community: breaking the exascale barrier. Prior to Frontier, the world’s top supercomputers lived in the petascale when measured at 64-bit precision, with one petaFLOPS equivalent to a quadrillion (10^15) calculations per second.
These systems can execute extremely complex modeling and have advanced scientific discoveries at a swift pace. Fugaku, for example, has been used to map genetic data and predict treatment effectiveness for cancer patients, simulate the fluid dynamics of the atmosphere and oceans at higher resolutions, and develop a real-time prediction model for tsunami flooding.
Exascale computing could pave the way for even bigger breakthroughs, offering more realistic simulations and faster speeds at a quintillion (10^18) calculations per second—that’s 18 zeroes! This boost in speed can drive a diverse array of applications and fundamental research endeavors, such as understanding the complex physical and nuclear forces that shape how the universe works.
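To put those prefixes in perspective, a quick back-of-envelope calculation using the headline figures cited earlier (the job size below is a hypothetical, chosen only for illustration) shows how the jump from petascale to exascale shortens a fixed workload:

```python
# Back-of-envelope comparison of the two systems named in this article.
PETA = 10**15  # one quadrillion operations per second
EXA = 10**18   # one quintillion operations per second

fugaku = 442 * PETA    # Fugaku's 442 petaFLOPS
frontier = 1.1 * EXA   # Frontier's 1.1 exaFLOPS

work = 10**21          # hypothetical job: one sextillion operations
print(work / fugaku)   # seconds to finish on Fugaku
print(work / frontier) # seconds to finish on Frontier
print(frontier / fugaku)  # Frontier's speedup, roughly 2.5x
```

The raw speedup looks modest, but for simulations that run for weeks or months at a stretch, a 2.5x gain translates into entire additional experiments per year.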
From sustainability to advanced manufacturing, scientists can also use these HPC resources to build more exact models of the Earth’s water bodies, or dive deep into nanoparticles and the optical and chemical properties of novel materials.
The chemical space, the conceptual territory containing every possible chemical compound, is an especially exciting realm to explore. Estimates peg it at 10^180 compounds, a figure that dwarfs the number of atoms in the observable universe, and a tantalizing one relative to the 260 million substances documented so far in the American Chemical Society’s CAS Registry.
Exascale computing can equip scientists with powerful new means to search every nook and cranny of this chemical space, whether for discovering potential drug molecules, light-absorbing compounds for solar cells or nanomaterials for more efficient water filters. More compute resources can also support more distributed access and increased adoption of HPC, following in the footsteps of how petascale systems were shared within and across borders.
While Asia may not yet have an exascale supercomputer on its soil, both Fugaku and China’s Sunway have hit the exaFLOPS benchmark at 32-bit precision. With innovative minds at the forefront of the region’s tech sector, achieving the same feat at the 64-bit level is on the horizon, boding well for the future of HPC and its applications in Asia and beyond.