The history of supercomputers is intimately entwined with the wider history of all computers. Because the first computers were so expensive and required the skills and attention of dedicated technicians, one computer would be used by many people, just like most supercomputers today. Such early computers were often referred to as mainframes. At first, the time between running each program was taken up by the operator inputting the next program, often by flicking switches in sequence; during these periods, the computer was effectively idle. To make better use of such machines, computer operators began to require users to submit programs prepared 'offline', so that as soon as one job finished, another could be started. This was called batch processing. The problem with batch processing was that assigning a run time and priority to each job meant that some of the more experimental work, or work deemed less important, might have to wait – understandably frustrating for the scientists concerned. Later, users came to share processing time interactively through terminals – essentially just a keyboard and a screen.
When the emphasis shifted from these models of computer access to personal computing in the 1970s, makers of supercomputers continued to strive for increased performance. Weather simulations, nuclear research and astrophysics calculations still required big computers capable of carrying out complex computational tasks.