



HPC and the future

Much current work is based on integrating the power of supercomputers into e-infrastructure networks, but there are still benefits to having a cluster of processors together in one place, acting as a single supercomputer. Easier maintenance and greater reliability, in addition to minimal latency between processors, mean that dedicated HPC systems are unlikely to disappear any time soon.

The next leap in HPC will be exascale computing. The ‘exa’ bit describes the magnitude or scale of the computations – an exaflop is 1,000,000,000,000,000,000 flops, or a million million million flops. Supercomputers today are at the petaflop level (that’s a 1 with 15 zeros after it), so getting to exascale means a thousandfold jump in speed. For this jump to be cost-effective and ecologically sound (see: ‘Is HPC green?’), it has to be done with no more than a tenfold increase in electrical power draw. The same thousandfold increase has been achieved before, in the jump from terascale (reached in the late 1990s) to petascale (reached in 2008), but exascale poses challenges in both hardware and software, and scientists generally agree that new technologies are needed to truly achieve it. One challenge is massive parallelism: exascale systems may require hundreds of millions of processing cores. That in turn means enormous volumes of data and matching storage requirements, and today’s hard disks are not an efficient way of storing data at that scale.
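To make those magnitudes concrete, here is a minimal sketch of the arithmetic in Python (the per-core memory figure is purely an illustrative assumption, not a specification of any real system):

    # Orders of magnitude for supercomputer performance
    # (a "flop" is one floating-point operation per second)
    PETAFLOP = 10**15  # a 1 with 15 zeros: roughly today's top systems
    EXAFLOP = 10**18   # a 1 with 18 zeros: the exascale target

    print(EXAFLOP // PETAFLOP)  # 1000 - a thousandfold jump in speed

    # A rough feel for the data challenge: saving one snapshot of the
    # memory of 100 million cores, assuming (hypothetically) 1 GB each:
    cores = 100_000_000
    gb_per_core = 1  # illustrative assumption only
    print(f"{cores * gb_per_core / 10**6:,.0f} PB per snapshot")  # 100 PB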

The benefits will far outweigh the costs, opening up new avenues for ensemble computing, where many similar calculations are performed effectively simultaneously in order to calculate the statistical likelihood of outcomes in complex systems (a miniature sketch follows below). And of course, whatever happens in HPC will eventually filter down to consumer systems, notably by supplying more reliable cloud-based services to keep our lives and our economies ticking along.
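As a toy illustration of the ensemble idea (the random-walk ‘simulation’ and all parameters below are invented for the example, not taken from any real workload), an ensemble run launches many copies of the same calculation with different random seeds and reads a probability off the spread of results:

    import random
    from multiprocessing import Pool

    def simulate(seed):
        # One toy 'simulation': a 10,000-step random walk, standing in
        # for a full model run (e.g. a weather forecast) started from
        # slightly perturbed initial conditions.
        rng = random.Random(seed)
        position = 0.0
        for _ in range(10_000):
            position += rng.gauss(0, 1)
        return position

    if __name__ == "__main__":
        # Run a 100-member ensemble in parallel; on a supercomputer each
        # member would itself occupy many nodes.
        with Pool() as pool:
            outcomes = pool.map(simulate, range(100))
        # The statistical likelihood of an outcome is simply its
        # frequency across the ensemble.
        share = sum(o > 100 for o in outcomes) / len(outcomes)
        print(f"Estimated probability the walk ends above +100: {share:.2f}")

Exascale machines suit this approach well because ensemble members are independent of one another: extra cores translate directly into more members, and more members into tighter probability estimates.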


