Throughout history, humans have relied mainly on their brains to perform calculations; in other words, they were the computers [Boyer 1989]. As civilization advanced, a variety of computing tools were invented that aided, but did not replace, manual computation.
The earliest peoples used their fingers, pebbles, or tally sticks for counting.
The Latin words digitus meaning “finger” and calculus meaning “pebble” have given us digital and calculate and indicate the ancient origins of these computing concepts.
Two early computational aids were widely used until quite recently:
- The abacus
- The slide rule
The origins of the abacus are disputed, as many different cultures are known to have used similar tools. It is known to have existed in Babylonia and in China, with its invention thought to have taken place between 1000 BCE and 500 BCE. The first abacus was almost certainly based on a flat stone covered with sand or dust. Lines were drawn in the sand and pebbles used to aid calculations. From this, a variety of abaci were developed; the most popular were based on the bi-quinary system, using a combination of two bases (base-2 and base-5) to represent decimal numbers.
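The bi-quinary idea can be sketched in a few lines of Python (an illustration, not from the original text): each decimal digit splits into a count of "fives" and a count of "ones", just as a bead-abacus column holds one five-bead and up to four one-beads.

```python
def biquinary(digit):
    """Decompose a decimal digit 0-9 into (fives, ones) bead counts,
    as on a bi-quinary abacus column."""
    assert 0 <= digit <= 9
    return digit // 5, digit % 5

# 7 is shown as one five-bead plus two one-beads
print(biquinary(7))  # (1, 2)
```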
The brain versus the computer:
Consider the actions involved in a manual calculation using pencil and paper – for example, filling out an income tax return. The purpose of the paper is information storage.
The information stored can include a list of instructions – more formally called a program, algorithm, or procedure – to be followed in carrying out the calculation, as well as the numbers or data to be used.
During the calculation intermediate results and ultimately the final results are recorded on the paper. The data processing takes place in the human brain, which serves as the (central) processor.
The brain performs two distinct functions: a control function that interprets the instructions and ensures that they are performed in the proper sequence, and an executive function that performs specific steps such as addition, subtraction, multiplication and division. A pocket calculator often serves as an aid to the brain.
A computer has several key components:
The main memory corresponds to the paper used in the manual calculation. Its purpose is to store instructions and data.
The computer’s brain is its central processing unit (CPU).
The CPU contains a program control unit (also known as an instruction unit), whose function is to fetch instructions from memory and interpret them.
An arithmetic-logic unit (ALU), part of the CPU's data processing or execution unit, carries out the instructions.
Calculating machines capable of performing the elementary operations of arithmetic (addition, subtraction, multiplication and division) appeared in the 16th century and perhaps earlier (Randell 1982; Augarten 1984).
The French philosopher Blaise Pascal (1623 - 62) invented an early and influential mechanical calculator that could add and subtract decimal numbers.
In Germany, Gottfried Leibniz (1646 - 1716) extended Pascal’s design to one that could also perform multiplication and division.
Mechanical computing devices such as these remained academic curiosities until the 19th century, when the commercial production of mechanical four function calculators began.
Various attempts were made to build general-purpose programmable computers from the same mechanical devices used in calculators. This technology posed some daunting problems, which were not satisfactorily solved until the introduction of electronic computing techniques in the mid-20th century.
Babbage’s Difference Engine
In the 19th century Charles Babbage designed the first computers to perform multistep operations automatically, that is, without a human intervening in every step [Morrison and Morrison 1961].
Again the technologies were entirely mechanical. Babbage's first computing machine, which he called the Difference Engine, was intended to compute and print mathematical tables automatically, thereby avoiding the many errors occurring in tables that are computed and typeset by hand. The Difference Engine performed only one arithmetic operation: addition. However, the method of (finite) differences embodied in the Difference Engine can calculate many complex and useful functions by means of addition alone.
Babbage constructed a small portion of his first Difference Engine in 1832, which served as a demonstration prototype. He later designed an improved version (Difference Engine No. 2), which was to handle seventh-order polynomials and have 31 decimal digits of accuracy. The difficulty of fabricating its 4000 or so high-precision mechanical parts, and the complexity of the resulting 3-ton machine, can be appreciated.
The Analytical Engine
Another reason for Babbage’s failure to complete his Difference Engine was that he conceived of a much more powerful computing machine that he called the Analytical Engine. This machine is considered to be the first general purpose programmable computer ever designed.
A mechanical computer has two serious drawbacks:
- Its computing speed is limited by the inertia of its moving parts.
- The transmission of digital information by mechanical means is quite unreliable.
In an electronic computer, on the other hand, the moving parts are electrons, which can be transmitted and processed reliably at speeds approaching that of light (300,000 km/s). Electronic devices such as the vacuum tube or electronic valve, which was developed in the early 1900s, permit the processing and storage of digital signals at speeds far exceeding those of any mechanical device.
First Generation 1944 to 1958 – Vacuum Tubes
Second Generation 1959 to 1963 – Transistor
Third Generation 1964 to 1970 – Integrated Circuit (IC)
Fourth Generation 1971 to Now –
Large Scale Integration (LSI) or
Very Large Scale Integration (VLSI)
Fifth Generation – Artificial Intelligence (AI)
The earliest attempt to construct an electronic computer using vacuum tubes appears to have been made in the late 1930s by John V. Atanasoff (1903 - 95) at Iowa State University (Randell 1982). This special-purpose machine was intended for solving linear equations, but it was never completed.
The first widely known general-purpose electronic computer was the Electronic Numerical Integrator and Calculator (ENIAC) that John W. Mauchly (1907 - 80) and J. Presper Eckert (1919 - 95) built at the University of Pennsylvania. Like Babbage’s Difference Engine, a motivation for the ENIAC was the need to construct mathematical tables automatically – this time ballistic tables for the U.S. Army.
Work on the ENIAC began in 1943 and was completed in 1946. It was an enormous machine weighing about 30 tons and containing more than 18,000 vacuum tubes. It was also substantially faster than any previous computer. While the Harvard Mark I required about 3 s to perform a 10-digit multiplication, the ENIAC required only 3 ms.
The idea of storing programs and their data in the same high-speed memory – the stored-program concept – is attributed to the ENIAC's designers, notably the Hungarian-born mathematician John von Neumann (1903-57), who was a consultant to the ENIAC project. The concept was first published in a 1945 proposal by von Neumann for a new computer, the Electronic Discrete Variable Automatic Computer (EDVAC).
In 1947 von Neumann and his colleagues began to design a new stored-program electronic computer, now referred to as the IAS computer, at the Institute for Advanced Study in Princeton. Like the EDVAC, it had the general structure shown in figure 1.2, with a CPU for executing instructions, a memory for storing active programs, a secondary memory for backup storage, and miscellaneous input-output equipment. Unlike the EDVAC, however, the IAS machine was designed to process all bits of a binary number simultaneously or in parallel. Several reports describing the IAS computer were published [Burks, Goldstine, and von Neumann 1946] and had far-reaching influence. In its overall design the IAS is quite modern, and it can be regarded as the prototype of most subsequent general-purpose computers. Because of its pervasive influence, we will examine the IAS computer in more detail later on.
In 1947 Eckert and Mauchly formed the Eckert-Mauchly Computer Corp. to manufacture computers commercially. Their first successful product was the Universal Automatic Computer (UNIVAC), delivered in 1951. IBM, which had earlier constructed the Harvard Mark I, introduced its first electronic stored-program computer, the 701, in 1953. Besides their use of vacuum tubes in the CPU, first-generation computers experimented with various technologies for main and secondary memory. The Whirlwind introduced the ferrite-core memory, in which a bit of information was stored in magnetic form on a tiny ring of magnetic material. Ferrite cores remained the principal technology for main memories until the 1970s.
The earliest computers had their instructions written in a binary code known as machine language that could be executed directly. An instruction in machine language meaning "add the contents of two memory locations" might take the form
Machine-language programs are extremely difficult for humans to write and so are very error-prone. A substantial improvement is obtained by allowing operations and operand addresses to be expressed in an easily understood symbolic form such as
ADD X1, X2
This symbolic format, referred to as assembly language, was introduced in the 1950s as computer programs grew in size and complexity.
An assembly language requires a special “system” program (an assembler) to translate it into machine language before it can be executed.
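A toy assembler can illustrate this translation step. The opcode values and the symbol table below are hypothetical, chosen only to show how an assembler maps mnemonics and symbolic names to numbers:

```python
# Hypothetical opcode and symbol tables -- not any real machine's encoding.
OPCODES = {"ADD": 0x01, "SUB": 0x02}
SYMBOLS = {"X1": 0x010, "X2": 0x011}   # symbolic names -> memory addresses

def assemble(line):
    """Translate one 'OP A, B' assembly line into numeric machine form."""
    op, operands = line.split(None, 1)
    addrs = tuple(SYMBOLS[s.strip()] for s in operands.split(","))
    return (OPCODES[op],) + addrs

print(assemble("ADD X1, X2"))  # (1, 16, 17)
```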
First-generation computers were supplied with almost no system software; often little more than an assembler was available to the user.
The IAS Computer: It is instructive to examine the design of the Princeton IAS computer. Because of the size and high cost of the CPU's electronic hardware, the designers made every effort to keep the CPU, and therefore its instruction set, small and simple. Cost also heavily influenced the design of the memory subsystem. Because fast memories were expensive, the size of the main memory (initially 1K words but expandable to 4K) was less than most users would have wished. Consequently, a larger (16K words) but cheaper secondary memory based on an electromechanical magnetic drum technology was provided for bulk storage. Essentially similar cost-performance considerations remain central to computer design today, despite vast changes over the years in the available technologies and their actual costs.
The basic unit of information in the IAS computer is a 40-bit word, which is the standard packet of information stored in a memory location or transferred in one step between the CPU and the main memory M. Each location in M can be used to store either a single 40-bit number or else a pair of 20-bit instructions. The IAS’s number format is fixed-point, meaning that it contains an implicit (understood) binary point in some fixed position. Numbers are usually treated as signed binary fractions lying between -1 and +1, but they can also be interpreted as integers. Examples of the IAS’s binary number format are:
0110100000 0000000000 0000000000 0000000000 = +.8125
1001100000 0000000000 0000000000 0000000000 = -.8125
Numbers that lie outside the range ±1 must be suitably scaled for processing by the IAS.
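As a sketch (assuming, consistent with the examples above, a sign bit followed by 39 fraction bits, with negatives in two's-complement form), the IAS number format can be modeled like this:

```python
def to_ias(x):
    """Encode a fraction -1 <= x < 1 as a 40-bit two's-complement word."""
    assert -1 <= x < 1
    return round(x * 2**39) % 2**40

def from_ias(word):
    """Decode a 40-bit word back into a signed binary fraction."""
    if word >= 2**39:        # sign bit set -> negative value
        word -= 2**40
    return word / 2**39

print(f"{to_ias(+0.8125):040b}")  # starts 0110100000...
print(from_ias(to_ias(-0.8125)))  # -0.8125
```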
An IAS instruction consists of an 8-bit opcode (operation code) OP followed by a 12-bit address A that identifies one of up to 2^12 = 4K 40-bit words stored in M. The IAS computer thus has a one-address instruction format, which we represent symbolically as OP A.
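The 20-bit instruction layout, and the packing of two instructions into one 40-bit word, can be sketched as follows (the opcode and address values used are hypothetical):

```python
def pack_instr(op, addr):
    """Combine an 8-bit opcode and a 12-bit address into a 20-bit instruction."""
    assert 0 <= op < 2**8 and 0 <= addr < 2**12
    return (op << 12) | addr

def pack_word(left, right):
    """Store a pair of 20-bit instructions in one 40-bit memory word."""
    return (left << 20) | right

word = pack_word(pack_instr(0x01, 0x123), pack_instr(0x02, 0x456))
print(f"{word:010x}")  # 0112302456 -- two 5-hex-digit instruction fields
```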
The IAS design has two other key aspects:
The CPU contains a small set of high speed storage devices called registers which serve as implicit storage locations for operands and results.
A program’s instructions are stored in M in approximately the sequence in which they are executed.
IAS and other first generation computers introduced many features that are central to later computers: the use of a CPU with a small set of registers, a separate main memory for instruction and data storage, and an instruction set with a limited range of operations and addressing capabilities. Indeed the term von Neumann computer has become synonymous with a computer of conventional design.
Computer hardware and software evolved rapidly after the introduction of the first commercial computers around 1950. The vacuum tube quickly gave way to the transistor. A transistor serves as a high-speed electronic switch for binary signals, but it is smaller, cheaper and requires much less power than a vacuum tube.
Ferrite cores became the dominant technology for main memories until superseded by all-transistor memories in the 1970s. Magnetic disks became the principal technology for secondary memory.
In second-generation computers, more registers were added to the CPU to facilitate data and address manipulation, compared with first-generation machines such as the IAS. For example, index registers make it possible to have indexed instructions, which increment or decrement a designated index I before (or after) they execute their main operation.
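A minimal sketch of indexed addressing (the post-increment convention shown here is one possibility, chosen for illustration): the effective address is the instruction's address field plus the contents of the index register.

```python
class CPU:
    def __init__(self):
        self.index = 0                    # index register I

    def load_indexed(self, memory, base):
        """LOAD base(I): fetch memory[base + I], then increment I."""
        value = memory[base + self.index]
        self.index += 1                   # post-increment steps through arrays
        return value

mem = [10, 20, 30, 40]
cpu = CPU()
print([cpu.load_indexed(mem, 0) for _ in range(3)])  # [10, 20, 30]
```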
Second-generation machines also introduced input-output processors (IOPs), which are special-purpose processing units designed exclusively to control IO operations. Hence IO data transfers can take place independently of the CPU, permitting the CPU to execute user programs while IO operations are in progress.
High-level programming languages were introduced in the mid-1950s. High-level languages are far easier to use than assembly language. A high-level language is intended to be usable on many different computers. A special program called a compiler translates user programs from the high-level language into machine language.
The first successful high-level language was FORTRAN (from FORmula TRANslation), developed by an IBM group under the direction of John Backus from 1954 to 1957. It permits only numerical operations.
The first business-application high-level language was COBOL (COmmon Business Oriented Language), developed by a group representing computer users and manufacturers in 1959 and sponsored by the US Department of Defense. It permits both textual and numerical operations.
Languages such as Basic, Pascal, Modula 2, C and, by the mid-1990s, Java later became popular high-level languages.
With the improvement of IO equipment and programming methodology that came with the second-generation machines, it became feasible to prepare a batch of jobs in advance, store them on magnetic tape and then have the computer process the jobs in one continuous sequence, placing the results on another magnetic tape. This mode of system management is termed batch processing.
Batch processing requires the use of a supervisory program called a batch monitor, which is permanently resident in main memory.
A batch monitor is a rudimentary (basic) version of an operating system. Later computers introduced multiprogramming and time-sharing systems.
The Third Generation:
Integrated circuits (ICs), which first appeared commercially in 1961, replaced the discrete transistor circuits used in the second generation.
The transistor continued as the basic switching device, but ICs allowed large numbers of transistors and associated components to be combined on a tiny piece of semiconductor material, usually silicon.
IC technology initiated a long-term trend in computer design toward smaller size, higher speed and lower hardware cost.
Structure of the IBM System/360
As shown in the figure, the various System/360 models were designed to be software compatible with one another, meaning that all models in the series shared a common instruction set.
Programs written for one model could be run without modification on any other; only the execution time, memory usage and the like would change.
Software compatibility enabled computer owners to upgrade their systems without having to rewrite large amounts of software.
The System/360 models also used a common operating system, OS/360 and the manufacturer supplied specialized software to support such widely used applications as transaction processing and database management.
The System/360 series was also remarkably long-lived. It evolved into various newer mainframe computer series introduced by IBM over the years, all of which maintained software compatibility with the original System/360; for example, the System/370 introduced in 1970, the 4300 introduced in 1979 and the System/390 introduced in 1990.
It had about 200 distinct instruction types (opcodes) with many addressing modes and data types, including fixed-point and floating-point numbers of various sizes.
It replaced the small and unstructured set of data registers (AC, MQ, etc.) found in earlier computers with a set of 16 identical general-purpose registers, all individually addressable. This is called the general-register organization.
The System/360 had separate arithmetic-logic units for processing various data types; the fixed-point ALU was used for address computations including indexing.
The 8-bit unit byte was defined as the smallest unit of information for data transmission and storage purposes.
The System/360 also made 32 bits (4 bytes) the main CPU word size, so that 32 bits and “word” have become synonymous in the context of large computers.
The CPU had two major control states: a supervisor state for use by the operating system and a user state for executing application programs.
Certain program control instructions were "privileged" in that they could be executed only when the CPU was in supervisor state. These and other special control states gave rise to the concept of a program status word (PSW), which was stored in a special CPU register, now generally referred to as a status register (SR).
The SR register encapsulated the key information used by the CPU to record exceptional conditions such as CPU-detected errors (an instruction attempting to divide by zero, for example), hardware faults detected by error-checking circuits and urgent service requests or interrupts generated by IO devices.
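Such exceptional conditions are typically recorded as individual flag bits within the SR. A sketch (the bit positions and flag names here are hypothetical):

```python
# Hypothetical status-register flag bits
DIVIDE_BY_ZERO = 1 << 0     # CPU-detected arithmetic error
HARDWARE_FAULT = 1 << 1     # error-checking circuits tripped
IO_INTERRUPT   = 1 << 2     # service request from an IO device

sr = 0
sr |= DIVIDE_BY_ZERO                   # record a condition
sr |= IO_INTERRUPT

pending_io = bool(sr & IO_INTERRUPT)   # test a single flag
sr &= ~DIVIDE_BY_ZERO                  # clear a flag after handling it
print(f"{sr:03b}")  # 100
```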
The architecture includes the computer’s instruction set, data formats and addressing modes as well as the general design of its CPU, main memory and IO subsystems.
The architecture therefore defines a conceptual model of a computer at a particular level of abstraction.
A computer's implementation, on the other hand, refers to the logical and physical design techniques used to realize the architecture in any specific instance.
The term computer organization also refers to the logical aspects of the implementation but the boundary between the terms architecture and organization is vague.
Fourth Generation – Very Large Scale Integration (VLSI)
VLSI allows manufacturers to fabricate a CPU, main memory, or even all the electronic circuits of a computer on a single IC that can be mass-produced at very low cost.
An IC is an electronic circuit composed mainly of transistors that is manufactured in a tiny rectangle or chip of semiconductor material. The IC is mounted into a protective plastic or ceramic package, which provides electrical connection points called pins or leads that allow the IC to be connected to other ICs, to input-output devices like a keypad or screen, or to a power supply.
A multichip module is a package containing several IC chips attached to a substrate that provides mechanical support, as well as electrical connections between the chips. Packaged ICs are often mounted on a printed circuit board that serves to support and interconnect the ICs.
A contemporary computer consists of a set of ICs, a set of IO devices and a power supply. The number of ICs can range from one to several thousand, depending on the computer's size and the types of ICs used.
IC density: An integrated circuit is roughly characterized by its density, defined as the number of transistors contained in the chip. The first commercial ICs appeared in 1961; they contained fewer than 100 transistors and employed small-scale integration (SSI). The terms medium-scale, large-scale and very-large-scale integration (MSI, LSI and VLSI, respectively) are applied to ICs containing hundreds, thousands and millions of transistors, respectively.
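The density classes can be summarized in a small lookup. The boundary values below are approximate, since "hundreds/thousands/millions" is only a rough characterization:

```python
def density_class(transistors):
    """Roughly classify an IC by transistor count (approximate boundaries)."""
    if transistors < 100:
        return "SSI"    # small-scale integration
    elif transistors < 1_000:
        return "MSI"    # medium-scale integration
    elif transistors < 1_000_000:
        return "LSI"    # large-scale integration
    else:
        return "VLSI"   # very-large-scale integration

print(density_class(50), density_class(5_000_000))  # SSI VLSI
```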
Two of the densest chip types are:
- The dynamic random-access memory (DRAM), a basic component of main memories.
- The single-chip CPU or microprocessor.
IC families: There are two important IC technology families: bipolar and unipolar. Unipolar is normally referred to as MOS (metal-oxide-semiconductor) after its physical structure. Both bipolar and MOS circuits have transistors as their basic elements.
The Fifth Generation ???:
The fifth generation is thought to be the era of intelligent computers.
This may seem like a joke today, but it is very close to the future!!
Modified from the lecture given by Dr. Md. Fokhray Hossain