CPUs, or central processing units, have long been the powerhouses of computer architecture. Graphics, which consumed CPU resources, were once seen as a frivolous waste of computing power. In the 1970s, Alan Kay designed highly graphics-intensive systems at Xerox, which took up to 50% of the computer's time. His work influenced later designers of graphical user interfaces for personal computers, but those machines largely continued to use the CPU for display purposes.
In 1985, the Commodore Amiga became the first commercial computer to contain a dedicated GPU. This and later GPUs took pressure off the CPU. In the 1990s, games consoles and PC ‘graphics cards’ pushed the envelope with dedicated 3D graphics acceleration. These developments proved crucial for supercomputer design, because 3D-accelerated GPUs excelled at parallel computation – calculating more than one thing at once. They arrived just as computer scientists were making great strides with the message-passing interface (MPI), which is central to parallel computation in supercomputing.
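The core idea behind parallel computation – splitting one calculation across many processors and combining the partial results – can be sketched in a few lines. This example uses Python's standard multiprocessing module purely as an illustration; real supercomputers would use MPI (or GPU kernels) for the same divide-and-reduce pattern:

```python
# Sketch of parallel computation: split a large sum across worker
# processes, then combine ("reduce") the partial results.
# Illustrative only - not actual MPI or GPU code.
from multiprocessing import Pool

def partial_sum(bounds):
    """Sum the integers in the half-open range [lo, hi)."""
    lo, hi = bounds
    return sum(range(lo, hi))

if __name__ == "__main__":
    # Divide the work into four chunks, one per worker process.
    chunks = [(0, 250_000), (250_000, 500_000),
              (500_000, 750_000), (750_000, 1_000_000)]
    with Pool(processes=4) as pool:
        # Each worker computes one partial sum in parallel;
        # the results are combined at the end.
        total = sum(pool.map(partial_sum, chunks))
    print(total)  # same result as sum(range(1_000_000))
```

The same pattern scales from the four cores of a desktop to the thousands of cores in a GPU-equipped supercomputer.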
In the mid-2000s, more and more supercomputers were using banks of GPUs for their calculations, often surpassing their CPU-based counterparts in speed.
MIC – a new term introduced by Intel, standing for Many Integrated Core architecture – is a development of the multicore chips the company has been producing commercially for some time. However, rather than the two or four cores most common in laptop and desktop computers, the new designs include 50 cores. This could swing the choice of processors in supercomputers back towards CPUs, although that remains to be seen.