Introduction to the microprocessor
Thursday, 12 April 2012 | Posted by Crystal
History
The microprocessor is the result of solid-state technology and advancing computer technology coming together in the early 1970s. With the low cost of a solid-state device and the flexibility of a computer, the microprocessor is a product that performs both control and processing functions.
A brief history
The microprocessor grew out of two major technologies: the digital computer and solid-state circuits. These two technologies came together in the early 1970s, allowing engineers to produce the microprocessor.
The digital computer is a set of digital circuits controlled by a program that makes it do the job you want done. The program tells the digital computer how to move and process data. It does this by using the digital computer's calculating logic, memory circuits, and I/O devices. The way the digital computer's logic circuits are put together to build the calculating logic, memory circuits, and I/O devices is called its architecture.
The microprocessor is like the digital computer because both do computations under program control.
Figure 1-1 shows the major events in the two technologies as they developed over the five decades following World War II.
During World War II, scientists developed computers for military use. In the latter half of the 1940s, digital computers were developed to do scientific and business work. Electronic circuit technology also advanced during World War II: radar work increased the understanding of fast digital circuits called pulse circuits. After the war, scientists made great progress in solid-state physics. Scientists at Bell Laboratories invented the transistor, a solid-state device, in 1948.
In the early 1950s, the first general-purpose digital computers appeared. Vacuum tubes were used as the active electronic components. They were used to build basic logic circuits such as gates and flip-flops. Vacuum tubes also formed part of the machines built to communicate with the computer, the I/O (input/output) devices. The first digital computers were huge, and because the vacuum tubes ran hot, the machines required air-conditioning. Vacuum tubes made the early computers expensive to run and maintain. Solid-state circuit technology also made great strides during the 1950s. Knowledge of semiconductors increased. The use of silicon lowered costs, because silicon is much more plentiful than germanium, which had been the chief material for making early semiconductors. Mass-production methods made transistors common and inexpensive.
In the late 1950s, the designers of digital computers jumped at the chance to replace vacuum tubes with transistors.
In the early 1960s, the art of building solid-state computers developed in two directions. The first direction, taken by IBM, was building huge solid-state computers. These machines still required large, air-conditioned rooms and were very complicated, but they could process large amounts of data. These large data-processing systems were used for commercial and scientific applications.
The big computer was still very expensive. In order to pay for itself, it had to be run 24 hours a day, 7 days a week. The second direction of development was building small computers. These minicomputers were not as powerful as their larger relatives, but they were not as expensive either, and they still performed many useful functions. By the early 1960s, the semiconductor industry had found a way to put a number of transistors on one silicon wafer. The transistors are connected together with small metal traces; once connected, they become a circuit that performs a function, such as a gate, flip-flop, register, or adder. This new technology created basic semiconductor building blocks. A building block or circuit module made this way is called an integrated circuit (IC).
By the mid-1960s, IC makers were pushing to develop low-cost manufacturing techniques. The use of ICs let minicomputers become more and more powerful for their size. The desk-sized minicomputer of the 1960s became as powerful as the room-sized computer of the late 1950s. A $10,000, drawer-sized minicomputer was now as powerful as the older $100,000 models.
In the late 1960s and early 1970s, large-scale integration (LSI) became common. Large-scale integration made it possible to put more and more digital circuits into a single IC.
By the mid-1970s, LSI had reduced the calculator to a single IC, and by the 1980s, very large-scale integration (VLSI) gave us ICs with over 100,000 transistors. Once the calculator had been reduced to a single circuit, the next natural step was to reduce the architecture of an entire computer to a single IC. The microprocessor was the resulting circuit. The microprocessor made possible the manufacture of powerful calculators and many other products. A microprocessor could be programmed to carry out a single task, and products like microwave ovens, telephone dialers, and automatic temperature-control systems became commonplace.
The early microprocessors processed digital data 4 bits (4 binary digits) at a time. These microprocessors were slow and did not compare to minicomputers, but new generations of microprocessors came quickly. The 4-bit microprocessors grew into 8-bit microprocessors, then into 16-bit microprocessors, and then into 32-bit microprocessors. During the early 1980s, complete 8-bit microprocessor systems (microprocessors with memory and communications ability on a single chip) were developed. These microcontrollers, or single-chip microcomputers, have become popular as the basis of controllers for keyboards, VCRs, TVs, microwave ovens, smart telephones, and a host of other industrial and consumer electronic devices.
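The jump from 4-bit to 32-bit words is easy to appreciate in terms of the range of values a processor can handle in a single operation. The short C sketch below is an illustration added here, not part of the original article; it prints the largest unsigned value each word size can hold, with the 4-bit case computed by hand since C has no 4-bit integer type.

```c
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void) {
    /* Largest unsigned value each word size can hold in one register. */
    unsigned max4  = (1u << 4) - 1;  /* no 4-bit type in C: 2^4 - 1 = 15 */
    uint8_t  max8  = UINT8_MAX;      /* 2^8  - 1 = 255 */
    uint16_t max16 = UINT16_MAX;     /* 2^16 - 1 = 65535 */
    uint32_t max32 = UINT32_MAX;     /* 2^32 - 1 = 4294967295 */

    printf("4-bit word:  0..%u\n",          max4);
    printf("8-bit word:  0..%" PRIu8 "\n",  max8);
    printf("16-bit word: 0..%" PRIu16 "\n", max16);
    printf("32-bit word: 0..%" PRIu32 "\n", max32);
    return 0;
}
```

Since n bits can represent 2^n values, doubling the word width squares the number of values a processor can handle at once.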