HPC is well suited to modeling complex physical phenomena such as the weather, the interactions of molecules in biology and industrial processes, astronomical calculations, and engineering designs – including trying to figure out the physics of nuclear fusion, which could help to secure humankind’s future energy supply. In many of these systems, the overall answers to the questions that scientists and engineers ask depend on a class of calculation called iteration: the calculation proceeds in many steps, and the input of each step depends on the results of previous steps. Because of this, it makes more sense for the processors to sit together in one place in a supercomputer rather than be distributed across a global grid, simply because of the time it would take to move the data around after each step. Calculations that are suited to grid computing, on the other hand, are termed ‘embarrassingly parallel’ because, although they require lots of computational steps, each step is effectively independent.
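The distinction can be sketched in a few lines of code. This is a toy illustration, not taken from any real simulation: the `step` function and its inputs are hypothetical stand-ins for one unit of scientific computation.

```python
def step(x):
    """One toy update step standing in for a unit of real computation."""
    return 3.5 * x * (1.0 - x)

# Iterative: each step needs the previous step's result, so the steps
# must run one after another - the data has to stay close to the
# processors, which favours a single supercomputer.
x = 0.5
for _ in range(1000):
    x = step(x)

# Embarrassingly parallel: each task is independent of the others, so
# the tasks could be farmed out to machines anywhere on a grid and the
# results gathered afterwards in any order.
inputs = [0.1, 0.2, 0.3, 0.4]
results = [step(v) for v in inputs]
```

In the first loop, no step can begin until the previous one finishes; in the second, all four calls to `step` could in principle run on four different machines at once.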
That said, supercomputers are also very good at working on embarrassingly parallel problems and can surpass grids in this area. But whereas grid computing is a collaborative effort, with many people supplying small amounts of computer power and expertise, HPC is centralised. Although it can seem expensive to governments and the public, and can require significant administration to run, there are also benefits to having centralised facilities. National governments can see what they’ve invested in, and the concentration of people needed to run such facilities can provide a ‘critical mass’ able to find innovative ways to solve difficult problems. Countries also often compete to have the fastest computer in lists like the TOP500, which can be a source of national pride.