The central processing unit (CPU), or processor, is the portion of a computer system that carries out the instructions of a computer program, and is the primary element carrying out the computer's functions. The term has been in use in the computer industry at least since the early 1960s [1].
The form, design and implementation of CPUs have changed dramatically since the earliest examples, but their fundamental operation remains much the same.
Early CPUs were custom-designed as part of a larger, sometimes one-of-a-kind, computer. However, this costly method of designing custom CPUs for a particular application has largely given way to the development of mass-produced processors that are made for one or many purposes. This standardization trend generally began in the era of discrete transistor mainframes and minicomputers and has rapidly accelerated with the popularization of the integrated circuit (IC). The IC has allowed increasingly complex CPUs to be designed and manufactured to tolerances on the order of nanometers.
Both the miniaturization and standardization of CPUs have increased the presence of these digital devices in modern life far beyond the limited application of dedicated computing machines. Modern microprocessors appear in everything from automobiles to cell phones and children's toys.
History
EDVAC, one of the first electronic stored-program computers.
Computers such as the ENIAC had to be physically rewired in order to perform different tasks; these machines are thus often referred to as "fixed-program computers." Since the term "CPU" is generally defined as a software (computer program) execution device, the earliest devices that could rightly be called CPUs came with the advent of the stored-program computer.
The idea of a stored-program computer was already present in the design of J. Presper Eckert and John William Mauchly's ENIAC, but was initially omitted so that the machine could be finished sooner. On June 30, 1945, before ENIAC was even completed, mathematician John von Neumann distributed the paper entitled "First Draft of a Report on the EDVAC." It outlined the design of a stored-program computer that would eventually be completed in August 1949 [2].
EDVAC was designed to perform a certain number of instructions (or operations) of various types. These instructions could be combined to create useful programs for the EDVAC to run. Significantly, the programs written for EDVAC were stored in high-speed computer memory rather than specified by the physical wiring of the computer. This overcame a severe limitation of ENIAC, which was the considerable time and effort required to reconfigure the computer to perform a new task. With von Neumann's design, the program, or software, that EDVAC ran could be changed simply by changing the contents of the computer's memory [3].
While von Neumann is most often credited with the design of the stored-program computer because of his design of EDVAC, others before him, such as Konrad Zuse, had suggested and implemented similar ideas. The so-called Harvard architecture of the Harvard Mark I, which was completed before EDVAC, also utilized a stored-program design using punched paper tape rather than electronic memory. The key difference between the von Neumann and Harvard architectures is that the latter separates the storage and treatment of CPU instructions and data, while the former uses the same memory space for both. Most modern CPUs are primarily von Neumann in design, but elements of the Harvard architecture are commonly seen as well.
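The distinction can be made concrete with a toy model. The sketch below is purely illustrative (a hypothetical machine, not any historical instruction set): the von Neumann variant keeps instructions and data in one memory, while the Harvard variant keeps them in two separate memories.

```python
# Toy illustration of the two memory layouts (hypothetical machine,
# not any real historical instruction set).

# Von Neumann: a single memory holds instructions and data alike.
# Cells 0-2 hold "instructions" and cell 3 holds "data"; nothing but
# convention distinguishes them.
unified_memory = [
    ("LOAD", 3),     # load the value stored at address 3
    ("ADD", 3),      # add the value at address 3 again
    ("HALT", None),
    21,              # data lives in the same address space
]

def run_von_neumann(memory):
    acc, pc = 0, 0
    while True:
        op, arg = memory[pc]   # instructions fetched from the same memory...
        pc += 1
        if op == "LOAD":
            acc = memory[arg]  # ...as the data they operate on
        elif op == "ADD":
            acc += memory[arg]
        elif op == "HALT":
            return acc

# Harvard: instructions and data occupy two separate memories, so an
# instruction address and a data address can never collide.
program_memory = [("LOAD", 0), ("ADD", 0), ("HALT", None)]
data_memory = [21]

def run_harvard(program, data):
    acc, pc = 0, 0
    while True:
        op, arg = program[pc]  # fetched from instruction memory
        pc += 1
        if op == "LOAD":
            acc = data[arg]    # fetched from a distinct data memory
        elif op == "ADD":
            acc += data[arg]
        elif op == "HALT":
            return acc

print(run_von_neumann(unified_memory))           # 42
print(run_harvard(program_memory, data_memory))  # 42
```

Note that in the von Neumann layout, overwriting cells of the unified memory rewrites the program itself; changing the program is just changing memory contents, which is precisely the flexibility that distinguished EDVAC from the physically rewired ENIAC.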
As a digital device, a CPU is limited to a set of discrete states and requires some kind of switching elements to differentiate between and change states. Prior to the commercial development of the transistor, electrical relays and vacuum tubes (thermionic valves) were commonly used as switching elements. Although these had distinct speed advantages over earlier, purely mechanical designs, they were unreliable for various reasons. For example, building direct current sequential logic circuits out of relays requires additional hardware to cope with the problem of contact bounce. While vacuum tubes do not suffer from contact bounce, they must heat up before becoming fully operational, and they eventually cease to function due to the slow contamination of their cathodes that occurs in the course of normal operation. If a tube's vacuum seal leaks, as sometimes happens, cathode contamination is accelerated. Usually, when a tube failed, the CPU would have to be diagnosed to locate the failed component so it could be replaced. Therefore, early electronic (vacuum-tube-based) computers were generally faster but less reliable than electromechanical (relay-based) computers. Tube computers like EDVAC tended to average eight hours between failures, whereas relay computers like the (slower, but earlier) Harvard Mark I failed very rarely [1].
In the end, tube-based CPUs became dominant because the significant speed advantages they afforded generally outweighed the reliability problems. Most of these early synchronous CPUs ran at low clock rates compared to modern microelectronic designs (see below for a discussion of clock rate). Clock signal frequencies ranging from 100 kHz to 4 MHz were very common at this time, limited largely by the speed of the switching devices they were built with.
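For a sense of scale, the clock period is simply the reciprocal of the clock frequency, so the figures quoted above work out as follows (a quick back-of-the-envelope calculation; the modern figure is merely a representative order of magnitude):

```python
# Clock period is the reciprocal of clock frequency: T = 1 / f.
for label, hz in [("100 kHz", 100e3), ("4 MHz", 4e6), ("~3 GHz (modern)", 3e9)]:
    period_ns = 1e9 / hz  # convert seconds to nanoseconds
    print(f"{label:>16}: one clock cycle every {period_ns:,.1f} ns")
# 100 kHz -> 10,000 ns (10 us); 4 MHz -> 250 ns; ~3 GHz -> ~0.3 ns
```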
Discrete transistor and integrated circuit CPUs
The design complexity of CPUs increased as various technologies facilitated building smaller and more reliable electronic devices. The first such improvement came with the advent of the transistor. Transistorized CPUs during the 1950s and 1960s no longer had to be built out of bulky, unreliable, and fragile switching elements like vacuum tubes and electrical relays. With this improvement, more complex and reliable CPUs were built onto one or several printed circuit boards containing discrete (individual) components.
During this period, a method of manufacturing many transistors in a compact space gained popularity. The integrated circuit (IC) allowed a large number of transistors to be manufactured on a single semiconductor-based die, or "chip." At first, only very basic non-specialized digital circuits such as NOR gates were miniaturized into ICs. CPUs based upon these "building block" ICs are generally referred to as "small-scale integration" (SSI) devices. SSI ICs, such as the ones used in the Apollo guidance computer, usually contained transistor counts numbering in multiples of ten. To build an entire CPU out of SSI ICs required thousands of individual chips, but still consumed much less space and power than earlier discrete transistor designs. As microelectronic technology advanced, an increasing number of transistors were placed on ICs, thus decreasing the quantity of individual ICs needed for a complete CPU. MSI and LSI (medium- and large-scale integration) ICs increased transistor counts to hundreds, and then thousands.
In 1964, IBM introduced its System/360 computer architecture, which was used in a series of computers that could run the same programs at different speeds and levels of performance. This was significant at a time when most electronic computers were incompatible with one another, even those made by the same manufacturer. To facilitate this improvement, IBM utilized the concept of a microprogram (often called "microcode"), which still sees widespread usage in modern CPUs [4]. The System/360 architecture was so popular that it dominated the mainframe computer market for decades and left a legacy that is still continued by similar modern computers like the IBM zSeries. In the same year (1964), Digital Equipment Corporation (DEC) introduced another influential computer aimed at the scientific and research markets, the PDP-8. DEC would later introduce the extremely popular PDP-11 line, which was originally built with SSI ICs but was eventually implemented with LSI components once these became practical. In stark contrast with its SSI and MSI predecessors, the first LSI implementation of the PDP-11 contained a CPU composed of only four LSI integrated circuits [5].
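The microprogram concept mentioned above can be sketched roughly as a lookup from each machine instruction to a fixed sequence of simpler internal steps. The following minimal illustration is hypothetical and does not reflect the actual System/360 microcode:

```python
# A microprogram maps each machine instruction (opcode) to a fixed
# sequence of simpler internal micro-operations. Hypothetical table;
# real microcode (e.g. System/360's) is far more elaborate.
MICROCODE = {
    "ADD":   ["fetch_operand", "alu_add", "write_result"],
    "LOAD":  ["fetch_operand", "write_result"],
    "STORE": ["read_register", "write_memory"],
}

def execute(opcode):
    # The control unit steps through the micro-operations one by one;
    # changing the table changes how an instruction behaves without
    # rewiring the hardware.
    for micro_op in MICROCODE[opcode]:
        print(f"{opcode}: {micro_op}")

execute("ADD")
```

Because the same instruction set could be realized by different microcode running over cheaper or faster underlying hardware, machines across the System/360 range could present identical behavior to software at very different price and performance points.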
Transistor-based computers had several distinct advantages over their predecessors. Aside from facilitating increased reliability and lower power consumption, transistors also allowed CPUs to operate at much higher speeds because of the short switching time of a transistor in comparison to a tube or relay. Thanks to both the increased reliability and the dramatically increased speed of the switching elements (which were almost exclusively transistors by this time), CPU clock rates in the tens of megahertz were obtained during this period. Additionally, while discrete transistor and IC CPUs were in heavy usage, new high-performance designs like SIMD (Single Instruction Multiple Data) vector processors began to appear. These early experimental designs later gave rise to the era of specialized supercomputers like those made by Cray Inc.
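The SIMD idea, a single instruction applied to many data elements at once, can be glimpsed through NumPy's array operations, which dispatch one operation across a whole array. This is only an analogy at the software level, not a depiction of how a Cray vector processor was implemented:

```python
import numpy as np

# Scalar (one instruction, one datum) style: one add per loop step.
a = [1.0, 2.0, 3.0, 4.0]
b = [10.0, 20.0, 30.0, 40.0]
scalar_sum = [x + y for x, y in zip(a, b)]

# SIMD/vector style: a single operation is applied across all
# elements; the library hands the whole array to vectorized code.
va = np.array(a)
vb = np.array(b)
vector_sum = va + vb   # one "instruction", many data

print(scalar_sum)      # [11.0, 22.0, 33.0, 44.0]
print(vector_sum)      # [11. 22. 33. 44.]
```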