The importance of information technology is now well recognized: it has become essential for the survival of the common man as well as big business houses. A computer is one of the major components of information technology. Today, computer technology is used in varied areas, from weather forecasting to horoscope making, from the elegant graphics shown during the telecast of a cricket match to the data management needs of a multinational bank. Everywhere we are witnessing the ease with which tasks are performed through computerization.
WHAT IS A COMPUTER?
The term computer is defined in the dictionary as "An automatic electronic apparatus for making calculations or controlling operations that are expressible in numeric or logical terms".
This definition describes the computer as an electronic apparatus although the first computers were mechanical and electromechanical. This definition also points towards the two major areas of computer application viz. data processing and computer-assisted control operations. Another major point raised here is that computers can perform operations which can be expressed only in logical or numeric terms.
ADVANTAGES OF COMPUTERS
A computer has three advantages, namely:
Speed: A computer can perform work much faster than a human being. For simple operations such as adding, subtracting, copying and moving numbers or letters, a computer requires only a few microseconds (in the case of small machines) or 80 nanoseconds or less (in the case of larger ones).
Accuracy: In addition to being very fast, computers are accurate. They can perform hundreds of thousands of operations every second, and their circuits can run error-free for days at a stretch. A computer performs every calculation with the same accuracy. The errors that do occur are mainly due to human mistakes or inaccurate data rather than to technological weaknesses.
Diligence: If a human being is asked to perform a particular task repetitively, he soon gets bored and his productivity falls. The same task, performed any number of times by a computer, will give the same result every time, without any loss of efficiency.
COMPUTING CONCEPTS
DATA Vs. INFORMATION
The word data is the plural of datum, which means fact. So data is facts, the raw material of information. But data is information only in a limited sense. As used in data processing, information is data arranged in a particular order and form, useful to the people who receive it. In other words, information is relevant knowledge produced as an output of data processing operations and acquired by people to enhance understanding and to achieve specific purposes.
In computing three major activities are performed. They are inputting the data, processing that data and getting the output.
Input operations
A computer can accept data from a wide range of input devices, such as keyboards, mice and light pens, thus making human-machine communication possible.
Processing of data
The data which has been fed in needs to be processed in order to produce useful and relevant information. A computer can perform calculations on numbers, and it is equally able to manipulate the letters and other symbols used in words and sentences, as well as perform logical operations, quite easily.
An example
We have input two data items A and B and we want to compare them. There can be only three possible outcomes either A<B or A=B or A>B. The computer is able to perform this simple comparison and then depending on the result, follows a predetermined course of action to complete its work.
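This three-way comparison and the branching that follows it can be sketched in Python (the values and the returned labels are invented for illustration):

```python
# Compare two data items A and B; exactly one of the three outcomes holds.
def compare(a, b):
    if a < b:
        return "A<B"
    elif a == b:
        return "A=B"
    else:
        return "A>B"

# Depending on the result, the program follows a predetermined course of action.
print(compare(3, 7))  # A<B
print(compare(5, 5))  # A=B
```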
Output
After processing is completed the computer supplies the processed data to a wide range of output devices such as display screens, printers etc. This is the final stage of the computing process, which makes relevant information available to the user out of the raw data which he had supplied to the computer.
In short, this whole process is known as the Input-Process-Output (I-P-O) cycle.
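The I-P-O cycle can be illustrated with a minimal Python sketch (the marks data is invented for illustration):

```python
# Input -> Process -> Output, the basic cycle of computing.
raw_marks = [65, 72, 58, 90]            # Input: raw data supplied by the user

def process(marks):
    # Process: turn raw data into useful information (here, an average).
    return sum(marks) / len(marks)

average = process(raw_marks)            # Process
print(f"Average marks: {average}")      # Output: relevant information
```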
EVOLUTION OF COMPUTERS
The ancestors of modern age computers were the mechanical and electromechanical devices. This ancestry can be traced as far back as the 17th century when the first machine capable of performing the four basic mathematical operations of addition, subtraction, multiplication and division, appeared.
The very first attempt towards automatic computing was made by Blaise Pascal. He invented a device called the Pascaline, which consisted of many gears and chains and was used to perform repeated addition and subtraction.
This was followed by an innovation by Charles Babbage, the grandfather of modern day computers. He designed the:
Difference Engine
Analytical Engine
DIFFERENCE ENGINE
It was based on the mathematical principle of finite differences and was used to solve polynomial and trigonometric functions, performing calculations on large numbers using a formula.
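The method of finite differences reduces polynomial evaluation to repeated addition, which is all a mechanical engine needs. A Python sketch for p(x) = x^2, whose second differences are constant:

```python
# Tabulate squares 0^2, 1^2, ..., n^2 using additions only, as the
# Difference Engine did for polynomials via finite differences.
def tabulate_squares(n):
    values = [0]       # p(0)
    first_diff = 1     # p(1) - p(0)
    second_diff = 2    # constant for a degree-2 polynomial
    for _ in range(n):
        values.append(values[-1] + first_diff)  # one addition per value
        first_diff += second_diff               # update the running difference
    return values

print(tabulate_squares(5))  # [0, 1, 4, 9, 16, 25]
```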
ANALYTICAL ENGINE
It was a general-purpose computing device, which could be used for performing any mathematical operation automatically.
The basic drawbacks of these mechanical and electromechanical computers were:
Friction/inertia of the moving components limited their speed.
Data movement using gears and levers was quite difficult and unreliable.
The next step was to devise switching and storing mechanisms with no moving parts. The electronic switching technique of the triode vacuum tube provided this, and hence the first electronic computers were born.
FIRST GENERATION COMPUTERS
The first truly general-purpose computer was designed to meet the requirements of World War II. The Electronic Numerical Integrator and Calculator (ENIAC) was designed in 1945 at the University of Pennsylvania to calculate figures for the thousands of gunnery tables required by the U.S. Army for accuracy in artillery fire. The ENIAC ushered in the era of what is known as first generation computers.
It could perform 5000 additions or 500 multiplications per second. It was, however, a giant machine, occupying a number of rooms, requiring a great amount of electricity and emitting excessive heat.
ENIAC used vacuum tube technology and was based on decimal arithmetic rather than binary arithmetic.
It needed to be programmed manually by setting switches and plugging or unplugging. Hence, to pass a set of instructions to the computer was both cumbersome and time-consuming.
SECOND GENERATION COMPUTERS
The second generation computers were based on transistor technology. A transistor is a two-state device made from silicon. It is cheaper, smaller and dissipates less heat than a vacuum tube, but can be utilized in a similar way.
Computer generations are basically differentiated by a fundamental hardware technology. Each new generation of computers is characterized by greater speed, larger memory capacity and smaller size than the previous generation. Thus the second generation computers were more advanced in terms of Arithmetic Logic Unit (ALU) and control unit than their counterparts of the first generation.
One of the main computer series during this time was the IBM 700 series.
THIRD GENERATION COMPUTERS
The third generation computers were ushered in at the start of the microelectronic era with the invention of Integrated Circuits (ICs). In an integrated circuit, components such as transistors, resistors and conductors are fabricated on semiconductor material such as silicon. Thus a desired circuit can be fabricated in a tiny piece of silicon. Hundreds or even thousands of transistors could be fabricated on a single wafer of silicon.
The advantages of having densely packed integrated circuits are:
Low cost
Greater operating speed
Better portability
Reliability
Some examples of third generation computers are the IBM System/360 family and the DEC PDP-8 systems.
LATER GENERATION COMPUTERS
With the advent of Very Large Scale Integration (VLSI), a milestone in IC technology in which thousands of transistors can be integrated on a single chip, we saw the emergence of machines with much more powerful Central Processing Units (CPUs) and very large memories. These machines had VLSI chips as their brains, which led to the development of very small but very powerful machines. A whole computer circuit was soon available on a single chip the size of a thumb. This made machines inexpensive, and suddenly it became possible for everyone to own a computer.
The VLSI technology is still evolving and increasingly powerful microprocessors and more storage space is now being put in a single chip. The contemporary computers are characteristic of this generation.
CLASSES OF COMPUTERS
Computers are classified into three main groups. These are:
Microcomputers
Minicomputers
Mainframes
With ongoing developments in technology these distinctions are becoming blurred. Despite this, it is important to classify them in order to differentiate the key elements and architecture among the different classes.
MICROCOMPUTERS
A microcomputer's CPU is a microprocessor, which originated in the late seventies. The first microcomputers were built around 8-bit microprocessor chips. This means that the chip can retrieve instructions/data from storage and manipulate and process 8 bits of data at a time. In other words, it has a built-in 8-bit data transfer path. Examples of 8-bit microprocessor chips are the Zilog Z80, the Intel 8080 and the Motorola 6809.
An improvement on 8-bit chip technology was the series of 16-bit chips, namely the 8086 and 8088, introduced by Intel Corporation, each an advancement over its predecessor.
MINICOMPUTERS
In the beginning, minicomputers were 8-bit and 12-bit machines but by the seventies almost all minicomputers were 16-bit machines. The 16-bit minicomputer had the advantage of a large instruction set and address field, and efficient storage and handling of text in comparison to lower bit machines. With the advancements in technology, speed, memory size and other characteristics developed and minicomputers were then used for various standalone or dedicated applications. The minicomputer has since then been used as multi-user system, which may be accessed by various users at the same time.
MAINFRAME COMPUTERS
Mainframe computers are generally 32-bit machines or higher. They are suited to big organizations managing high volume applications. A few popular mainframe series are MEDHA, Sperry, DEC, ICL, etc. Mainframes are also used as central host computers in distributed systems. The libraries of application programs developed for mainframe computers are much larger than those of micro or minicomputers because of their evolution over several decades in the family of computing.
SUPERCOMPUTERS
At the upper end of the state-of-the-art mainframe machines are the supercomputers. These are the fastest machines in terms of processing speed and use multiprocessing techniques, meaning that a number of processors are used to solve a problem. The range of supercomputers includes CRAY (CRAY Y-MP, CRAY-2), ETA (ETA-10, ETA-20) and the IBM 3090, etc. India has also developed a supercomputer, PARAM, built by C-DAC.
PHYSICAL COMPONENTS AND PERIPHERAL DEVICES
INPUT DEVICES
These are used for transferring user commands or choices to the computer:
Keyboard
Mouse
Light pen
Scanner
Voice/Speech input
Source data automation
Digitizers
Keyboard
The keyboard is one of the most common input devices for computers. The layout of the keyboard is like that of the traditional QWERTY typewriter, although some extra command and function keys are provided.
Mouse
The keyboard provides the facility to input data and commands to the computer in text form. In case we need to point to some area in the display to select an option or move across on the screen to select subsequent options we need pointing devices and one such pointing device is the mouse. The mouse is a handy device, which can be moved on a smooth surface to simulate the movement of the cursor on the display screen. The user can move the mouse, stop it at a point where the pointer is to be located and with the help of buttons make a selection of choices.
Light pen
This is a pen-shaped pointing device that allows natural movement on the screen. The pen contains light receptors and is activated by pressing it against the display screen. The receptor detects the display's scanning beam, which helps in locating the pen's position. Suitable system software is provided to initiate the necessary action when we locate an area on the display surface with the help of a light pen.
Scanners
Scanners facilitate the capture of information and its storage in graphic format for display back on the screen. A scanner consists of two components:
The first one to illuminate the page so that the optical image can be captured.
The second is to convert the optical image into digital format for storage by the computer.
Voice/Speech input
Voice recognition, along with several other techniques, has come into the limelight recently. These systems convert voice signals into the appropriate words and derive the correct meaning of the words, thus eliminating the need for keying in data and making it possible for a casual user to use the computer very easily. Limited success has been achieved in this area, and devices are available commercially to recognise and interpret human voices within a limited scope of operation.
Source data automation
Some of the most common equipment used for source data automation, which captures data as a by-product of a business activity and thereby completely eliminates manual input of data, includes Magnetic Ink Character Recognition (MICR), Optical Mark Recognition (OMR) and Optical Bar Code Readers (OBCR).
Digitizers
A digitizer is an input device that converts graphic and pictorial data to digital form (binary form) which can be directly fed into the computer and stored there. There are two types of digitizers: rectangular coordinates or flatbed digitizers and image scan digitizers.
In case of a flatbed digitizer, the drawing to be digitized is spread and fixed over a rectangular flatbed table. A mechanism is now moved over the surface of the drawing, scans the drawing and produces signals related to the x and y coordinates of the table.
Image scan digitizers can scan and reproduce entire drawings and photographs automatically. They are costlier and more powerful than the flatbed digitizers and are capable of digitizing not only the shape and size of the drawings but also varying intensities on gray-to-black scale at different points of the drawings.
Thus, flatbed digitizers are mainly used to digitize simple drawings, graphs, charts etc., and image-scan digitizers are used to digitize more complex pictures and photographs.
OUTPUT DEVICES
Output can normally be produced in two ways: either on a display unit/device or on paper.
Display devices
One of the most important computer peripherals is the Visual Display Unit (VDU). Conventional computer display terminals are known as alphanumeric terminals because they are used to read text information displayed on the screen. VDUs now also support graphic displays, which are made up of a series of dots called pixels, whose patterns produce the image. Each dot on the screen is defined as a separate unit that can be directly addressed, so there is much greater flexibility in drawing pictures.
Printers
Printers are used for producing output on paper. The three most commonly used printers are:
Dot-matrix printers: These printers are mostly used in personal computing systems and are relatively cheap compared with other technologies. They use impact technology: a print head containing a bank of wires moves at high speed against an inked ribbon and the paper. Characters are produced in matrix format, and the speed ranges from 40 Characters Per Second (CPS) to about 1000 CPS. A disadvantage is the low print quality and the noise produced by this type of printer.
Inkjet printers: These print by spraying a controlled stream of tiny ink droplets accurately onto the paper to form either dot-matrix or solid characters. They are non-impact and hence relatively silent, high quality printers. The typical speed ranges from 50 CPS to more than 300 CPS, and the technology works well for colour printing and elaborate graphics.
Laser printers: This is high quality, high speed and high volume technology which works in a non-impact fashion on plain paper or pre-printed forms. Printing is achieved by deflecting laser beams onto the photosensitive surface of a drum and the latent image attracts the toner to the image areas. The toner is then electrostatically transferred to the paper and fixed into a permanent image. Speeds can range from 10 pages a minute to about 200 pages per minute. The technology is quite expensive but is becoming popular due to quality, speed and noiseless operations.
Impact Printers vs. Non-Impact Printers
Impact printers use variations of the standard typewriter's printing mechanism, in which a hammer strikes the paper through an inked ribbon. Non-impact printers use chemical, heat or electrical signals to form symbols on the paper.
Fully formed characters vs. Dot-matrix printers
Fully formed characters are constructed from solid lines and curves, like the characters of a typewriter, whereas a dot-matrix character is made up of a carefully arranged sequence of dots packed very close to each other.
Serial vs. line vs. page printers
This indicates the amount of information a printer can output within a single cycle of operation. Serial printing is done character by character, whereas line printing forms an entire line and prints it in one go. A page printer outputs a whole page of characters and images simultaneously during one cycle.
Plotters
A plotter is an output device used to produce hard copies of graphs and designs. Plotters are basically of two types: drum and flatbed. In the case of a drum plotter, the paper on which the design has to be made is placed over a drum that rotates back and forth to produce vertical motion. The mechanism also consists of one or more penholders mounted horizontally across the drum. The pen(s) clamped in the holder can move to produce horizontal motion. Under the control of the computer, the drum and the pen move simultaneously to produce the designs and graphs.
A flatbed plotter plots on papers that are spread and fixed over a rectangular flatbed table. In this type the paper does not move and the pen holding mechanism is designed to provide all the motion. Here also provision is there to mount more than one pen in the pen(s) holding mechanism.
Plotters are normally very slow in motion because of the excessive mechanical movement required during plotting, resulting in a great mismatch between the speed of a CPU and that of the plotter. Due to this reason, in most cases, output is first transferred by the CPU onto a tape and the plotter is then activated to plot the design taking information from the tape. However, in case of a computer system dedicated to design work, the CPU may send output directly to a plotter.
LANGUAGES
A language is a system of communication. With natural languages such as Hindi, English, etc., we communicate our ideas and emotions to each other. Similarly, a computer language is a means of communication between people and the computer. With the help of a computer language, a programmer tells a computer what he wants it to do. Just as all natural languages follow rules of grammar and have a defined vocabulary, all computer languages follow their own grammar, known as syntax, and use a defined set of symbols.
Programming languages have progressed from machine-oriented languages that use strings of binary 0s and 1s to problem-oriented languages that use common mathematical and/or English terms.
All computer languages can broadly be classified into three categories:
Machine language
Assembly language
High level language
MACHINE LANGUAGE
This is a sequence of instructions written in the form of binary numbers, consisting of 1s and 0s, to which the computer responds directly. An instruction prepared in any machine language has at least two parts. The first part is the command or operation code, which tells the computer what function is to be performed; every computer has an operation code for each of its functions. The second part of the instruction is the operand, which tells the computer where to find or store the data that has to be manipulated.
Machine language programs are fast in execution, since the computer can execute them directly. But it is difficult to understand and develop a program in machine language. Machine language is known as the first generation language.
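The two-part instruction format can be illustrated with a toy decoder in Python; the 4-bit opcode/4-bit operand split and the opcode names are invented for illustration and do not belong to any real instruction set:

```python
# Decode a toy 8-bit instruction: first 4 bits = operation code,
# last 4 bits = operand (a storage address).
OPCODES = {"0001": "LOAD", "0010": "ADD", "0011": "STORE"}

def decode(instruction):
    opcode, operand = instruction[:4], instruction[4:]
    return OPCODES[opcode], int(operand, 2)

print(decode("00010001"))  # ('LOAD', 1): operate on the data at address 1
```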
ASSEMBLY LANGUAGE
When we employ symbols (letters, digits or special characters) for the operation part, the address part and the other parts of the instruction code, the representation is called an assembly language program. This is considered a second generation language. Machine and assembly languages are called low level languages, since the coding for a problem is at the individual instruction level. Each machine has its own assembly language, which depends on the internal architecture of the processor. An assembler is a translator that takes its input in the form of an assembly language program and produces machine language code as its output:

Assembly language program --> Assembler --> Object code in machine language
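A minimal assembler can be sketched in Python; the mnemonics and binary encodings here are invented for illustration and do not belong to any real assembly language:

```python
# Translate symbolic mnemonics into toy 8-bit machine code:
# a 4-bit opcode followed by a 4-bit operand address.
OPCODE_TABLE = {"LOAD": "0001", "ADD": "0010", "STORE": "0011"}

def assemble(line):
    mnemonic, address = line.split()
    return OPCODE_TABLE[mnemonic] + format(int(address), "04b")

program = ["LOAD 1", "ADD 2", "STORE 3"]
print([assemble(line) for line in program])
# ['00010001', '00100010', '00110011']
```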
HIGH LEVEL LANGUAGES
COBOL, FORTRAN and BASIC are examples of high level languages. The time and cost of creating assembly language programs were quite high. Moreover, assembly and machine languages were difficult for the programmer to use and remember. This led to the advent of high level languages.
Compiler
High level source programs must be translated into a form the machine can understand. This is done by software called a compiler, which takes the source code as input and produces as output the machine language code for the machine on which it is to be executed. During translation, the compiler reads the source program statement by statement and checks for syntax errors. If there are any errors, the system generates a printout of the errors detected.
Interpreter
There is another type of software which also does translation, called an interpreter. An interpreter translates the program line by line. Each time the program is executed, every line is checked for syntax errors and then converted into its equivalent machine code. The execution time of an interpreted program is therefore greater than that of a compiled one.
FOURTH GENERATION LANGUAGES (4GLs)
Most third generation languages are procedural languages, which means that the programmer must specify the steps, i.e. the procedure, that the computer has to follow in a program. In contrast, most fourth generation languages are non-procedural languages: the programmer does not have to give the details of the procedure in the program, but simply specifies what is required. Major fourth generation languages are used to get information from files and databases. These languages contain a query language, which is used to answer queries or questions with data from a database. To produce complex reports, some languages provide a report generator facility, in which the programmer specifies the headings, detailed data and totals needed in a report; the report generator then produces the desired report using the specified data and headings.
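The procedural versus non-procedural contrast can be sketched in Python; the employee records are invented sample data, and the list comprehension stands in for a 4GL-style query that states only what is required:

```python
employees = [
    {"name": "Asha", "dept": "Sales", "salary": 30000},
    {"name": "Ravi", "dept": "Accounts", "salary": 45000},
    {"name": "Meena", "dept": "Sales", "salary": 52000},
]

# Procedural (3GL style): spell out every step of the procedure.
result = []
for e in employees:
    if e["dept"] == "Sales" and e["salary"] > 40000:
        result.append(e["name"])

# Query-like (4GL spirit): state only what is required.
query = [e["name"] for e in employees
         if e["dept"] == "Sales" and e["salary"] > 40000]

print(query)  # ['Meena']
```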
TYPES OF SOFTWARE
A computer cannot do anything on its own; it must be instructed to do a desired job. Hence it is necessary to specify a sequence of instructions that a computer must perform to solve a problem. Such a sequence of instructions, written in a language that can be understood by a computer, is known as a computer program. The term software refers to the set of computer programs, procedures and associated documents whose objective is to enhance the capabilities of the hardware machine.
Computer software is normally classified into two broad categories:
Application software
System software
APPLICATION SOFTWARE
Application software packages are designed for specific computer applications, such as payroll processing, inventory control, etc. There are two main categories of application software:
Business software
Scientific application software
These software packages can also be categorized as pre-written software packages e.g. Tally, an accounting package, and customized application software e.g. a payroll package developed according to the specifications given by a company.
SYSTEM SOFTWARE
System software is a set of one or more programs designed to control the operations of a computer system. These programs do not solve specific problems. They are general programs written to assist humans in the use of the computers by performing tasks, such as controlling all the operations required to move data into and out of a computer and all the steps in executing an application program. An example of system software is an operating system, which consists of many other programs for controlling input/output devices, memory, processors etc.
FIRMWARE
Computer software in conventional systems is supplied on storage media like floppies, tapes, disks, etc. However, with the advancements in technology and the reduction in hardware cost, software today is also being made available by many computer manufacturers on Read Only Memory (ROM) chips. These ROM chips can be easily plugged into the computer system, and they form a part of the hardware. Such programs (software) made available on hardware are known as firmware. Initially, only system software was supplied in the form of firmware, but today even application programs are being supplied in firmware form. Dedicated applications are also programmed in this fashion and made available as firmware. Because of the rapid progress in memory technology, firmware is frequently a cost-effective alternative to wired electronic circuits, and its use in computer design will increase. It is expected that in the near future, firmware will make the cost-effective production of smart machines of all types possible.
FILE SYSTEM
A file is a collection of data under a single filename. Every operating system has its own file storage system. The most common storage system is the hierarchical file storage system, which is followed by most of the operating systems including UNIX, Windows-NT etc. In this system, directories are at the top of the hierarchy, which can contain files and subdirectories. The files are of two types:
System files
Data files
A system file usually contains code, such as the system configuration or a software program. A data file contains data in the form of records, which can be further divided into fields and field values. Files are generally stored in a hierarchical structure.
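The hierarchical arrangement of directories, subdirectories and files can be modelled with nested dictionaries in Python; the directory and file names are invented for illustration:

```python
# Directories map names to subdirectories (dicts) or files (strings).
fs = {
    "home": {
        "user": {
            "notes.txt": "a data file",
            "projects": {"report.txt": "quarterly figures"},
        }
    }
}

def lookup(tree, path):
    # Walk the hierarchy one path component at a time.
    node = tree
    for part in path.split("/"):
        node = node[part]
    return node

print(lookup(fs, "home/user/projects/report.txt"))  # quarterly figures
```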
INTRODUCTION
In the previous session we had a general discussion about the impact of computers on our lives, the evolution of computers, the various types of computers available and the various input/output devices. In this session we are going to cover the internal architecture of a computer and its memory management, along with the need for and use of an Operating System (OS).
INTERNAL ARCHITECTURE AND KEY PROCESSING
All computer systems perform the following five basic operations:
Input: The process of entering data into the computer system.
Storage: Saving data and instructions so that they are available for initial processing as and when required.
Process: Performing arithmetic and/or logical operations on data in order to convert them into useful information.
Output: The process of producing useful information for the user, such as a printed report or visual display.
Controlling: Directing the manner and sequence in which all of the above operations are performed.
The internal architecture of computers differs from one model to another. However, the basic organization remains the same for all the systems. This consists of five basic units namely input unit, output unit, storage unit, ALU and control unit.
Input unit
The task of entering data and instructions into the computer so that any kind of computing can be performed on it is done by the input unit. The input unit converts these instructions and data into computer acceptable form. Finally, it supplies the converted instructions and data to the computer system for further processing.
Output unit
The job of the output unit is just the reverse of the input unit. It supplies the information and results of computation to the user. It accepts the results performed by the computer, which are in coded form, converts these coded results to human understandable form and finally supplies the converted results to the outside world.
Storage unit
The data and instructions supplied by the input unit need to be stored before the actual processing can start. Similarly, the results produced by the computer system also need to be stored before they are passed to the output unit. The intermediate results produced as part of the ongoing computing process must also be preserved. The storage unit, or main memory, is designed for this work.
Arithmetic Logic Unit (ALU)
The ALU of a computer is the place where the actual execution of instructions takes place. All calculations and comparisons happen inside the ALU. Instructions and data are transferred from the main memory to the ALU, where processing takes place; intermediate results can be transferred back to the main memory as and when the need arises. ALUs are designed to perform the four basic arithmetic operations (add, subtract, multiply and divide) and logical operations or comparisons such as less than, equal to, or greater than.
Control unit
The control unit of a computer system performs the task of controlling the other units, e.g. it tells the ALU what is to be done with the input data. By selecting, interpreting and seeing to the execution of the program instructions, the control unit is able to maintain order and direct the operation of the entire system. It manages and coordinates the entire computer system: it obtains instructions from the program stored in the main memory, interprets them and issues signals that cause the other units of the system to execute them.
INSTRUCTION EXECUTION
The basic function performed by a computer system is the execution of a program. The program, which is to be executed, is a set of instructions that are stored in memory. The Central Processing Unit (CPU) executes the instructions of the program to complete the task.
The simplest model of instruction processing is a two-step process: the CPU reads (fetches) instructions from the memory one at a time and executes the operation specified by each instruction. The processing needed for a single instruction (fetch and execution) is referred to as an instruction cycle. The instruction cycle includes a fetch cycle, decode cycle, execute cycle and write cycle. The fetched instruction is in binary form and is loaded into the Instruction Register (IR) in the CPU. The CPU interprets the instruction and performs the required action.
In general, these actions can be divided into the following categories:
Data transfer
Data processing
Sequence control
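The fetch-decode-execute loop described above can be sketched as a toy CPU in Python; the two-field instruction format and operation names are invented for illustration, not a real processor's instruction set:

```python
# Run a program of (operation, operand) pairs with one accumulator.
def run(program):
    acc, pc = 0, 0                  # accumulator and program counter
    while pc < len(program):
        op, operand = program[pc]   # fetch and decode the next instruction
        pc += 1                     # sequence control: advance the counter
        if op == "LOAD":            # execute: data transfer into the CPU
            acc = operand
        elif op == "ADD":           # execute: data processing
            acc += operand
        elif op == "HALT":
            break
    return acc

print(run([("LOAD", 5), ("ADD", 3), ("HALT", 0)]))  # 8
```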
INTERRUPTS
The term interrupt refers to any exceptional event that causes the CPU to temporarily transfer control from the currently executing program to a different program which services the exceptional event. An interrupt may be generated by a number of sources, internal or external to the CPU. Interrupts are a useful mechanism for improving processing efficiency.
MEMORY MANAGEMENT
Till now we have discussed how instructions and data are input into a computer system and how they are processed. We have also learnt that to execute an instruction we need to store the data in the main memory. Now we are going to discuss how this memory is managed, for which we must first understand the binary number system.
Binary number system
A computer works by electrical impulses. Hence the binary number system, which uses only two digits, 0 and 1, is a convenient way to represent information inside a computer. Usually the symbol 1 is used to represent the presence of an electric pulse and the symbol 0 to represent its absence.
Information in a computer consists of data and instructions, which are made up of a large number of characters, namely, the decimal digits 0 to 9, the letters A to Z, arithmetic operators like (+), (-) etc., relational operators like (<), (>), (=) etc. and other special characters like (,), (.) etc. Computers use eight binary digits (bits) to represent information internally. This allows up to 2^8 = 256 different characters to be represented uniquely. A collection of eight bits is called a byte.
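The eight-bit representation can be illustrated in Python, using the ASCII codes as the example encoding:

```python
# Illustration of 8-bit character codes: with 8 bits, 2**8 = 256 distinct
# bit patterns are available, enough for digits, letters and special
# characters. ASCII is used here as the example encoding.

assert 2 ** 8 == 256

for ch in "A9+":
    code = ord(ch)                # numeric code of the character
    bits = format(code, "08b")    # the same code as eight binary digits
    print(ch, code, bits)         # e.g. A 65 01000001
```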
MEMORY MEASUREMENT
Information is stored in memory cells, also called memory locations; each location typically holds one byte. Each memory location has a unique address. The total amount of memory is measured in kilobytes (KB) or megabytes (MB). One KB stands for 2^10, that is 1024, bytes. One MB stands for 2^10 KB, which is a little over one million bytes.
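These powers of two can be checked directly:

```python
# Memory units as powers of two: 1 KB = 2**10 bytes, 1 MB = 2**10 KB.
KB = 2 ** 10        # 1024 bytes
MB = 2 ** 10 * KB   # 1,048,576 bytes -- "a little over one million"

print(KB)           # 1024
print(MB)           # 1048576
```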
A computer is provided with two types of memories. The first one caters to the immediate needs of processing known as the primary memory and the second one to store various programs for long-term use known as secondary memory.
Primary memory
Primary memory consists of semiconductor memory chips and is used to store the data and programs currently in use. Each storage element of memory is directly (randomly) accessible and can be examined and modified without affecting other cells. Hence, primary memory is also called Random Access Memory (RAM). RAM is called temporary memory because the cells lose their contents when the electricity supply to the computer is stopped.
Another part of main memory is ROM. It is not possible to write data onto a ROM when it is on-line to the computer; it can only be read. ROMs are non-volatile and need not be reloaded from a secondary storage device. ROMs can be written only at the time of manufacture. ROMs help the computer to start: they contain programs, permanently embossed onto them, that are used in the start-up process. A similar memory is the Programmable ROM (PROM). PROMs are also non-volatile and can be programmed only once, by a special write device, hence the name Programmable ROM. The writing process in a PROM can be performed electrically by the supplier or the customer.
<tablecwidths="3258,3131"><trow><cell>Read Only Memory (ROM)
</cell><cell>Random Access Memory (RAM)
</cell></trow><trow><cell>ROM is read only memory, i.e. computer can only read from this memory, nothing can be written onto it.
</cell><cell>RAM is random access memory, i.e. the computer can perform both the functions of reading and writing.
</cell></trow><trow><cell>ROM contains bootable instructions which cannot be changed.
</cell><cell>RAM stores the instructions and the data given by the user.
</cell></trow><trow><cell>ROM is permanent and is an example of firmware.
</cell><cell>It is volatile in nature, i.e. whatever is stored in RAM is lost when its electricity supply is stopped.
</cell></trow></table>Table 2.1: Difference between ROM and RAM
Secondary memory
It is desirable that the operating speed of the primary storage of a computer be as fast as possible, because most of the data to and from the processing unit passes via the main memory. For this reason storage devices with very fast access times, such as semiconductor memories, are generally chosen for main memory. The cost of these devices is quite high, so the storage capacity of the main memory of a computer system is limited. Often it is necessary to store many millions, sometimes billions, of bytes of data.
Unfortunately, the storage capacity of the primary storage of today's computers is not sufficient to store the large volume of data handled by most data processing centers. As a result, additional memory, called Auxiliary Memory or Secondary Storage, is used with computer systems. This section of computer memory is also referred to as backup storage because it is used to store large volumes of data on a permanent basis, which can be transferred in part to primary storage as and when required for processing. Data is stored in the same binary codes as in main storage and is made available to main storage when needed. A wide range of secondary storage devices is used for this.
Typical hardware devices used for secondary storage are magnetic tapes and magnetic disks.
Magnetic disk
A magnetic disk is a thin, flexible, circular platter made of a vinyl material and coated on both sides with magnetic material. Because they are flexible, these disks are called floppy disks. They come in several sizes, such as 5.25 inches and 3.5 inches, and their capacity ranges from 360 KB to 1.44 MB per disk.
With the help of these disks information can be carried from one place to another. But you can only store a limited amount of information in these disks. To use these disks you need disk drives. The type of disk to be used in a system depends upon the type of disk drive in that system.
We also have magnetic storage disks made of rigid material like aluminum, which are called hard disks. Hard disks have a much larger storage capacity in comparison to the smaller floppy disks ranging from 640 MB to 3.5 GB.
Data is read from and written to the disks with the help of a conducting coil called the head. The head remains stationary while the disk rotates below it for reading or writing operations. The data is stored in concentric circles called tracks; the width of a track is equal to the width of the head. Data is transferred to and from the disk in blocks. A block is a section of disk data and is normally equal to a sector. A track is divided into several sectors, which may be of either fixed or variable length. To identify a sector there is normally a starting point on a track, or a starting and ending point for each sector.
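The track-and-sector layout described above can be sketched with simple arithmetic, assuming fixed-length sectors; the geometry figures used here (9 sectors per track, 512 bytes per sector) are illustrative only, not those of a real drive:

```python
# A sketch of fixed-length sector addressing, assuming every track holds
# the same number of sectors and every sector the same number of bytes.
SECTORS_PER_TRACK = 9
BYTES_PER_SECTOR = 512

def block_number(track, sector):
    """Linear block number of a (track, sector) pair, both 0-based."""
    return track * SECTORS_PER_TRACK + sector

def byte_offset(track, sector):
    """Byte position of the start of that sector on the disk."""
    return block_number(track, sector) * BYTES_PER_SECTOR

print(block_number(2, 3))   # 21
print(byte_offset(2, 3))    # 10752
```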
Magnetic tape
Magnetic tapes are mounted on reels, cartridges or cassettes to store large volumes of backup data. They are cheaper and, since they are removable from the drive, they provide virtually unlimited storage capacity. Since the recording is sequential, like that of the tape recorder used in audio systems, information retrieval is also sequential and not random, as it is with magnetic disks. Hence data retrieval is slower in comparison.
Optical memories
Optical memories are alternative mass storage devices with huge storage capacities. The advent of the compact digital audio disc, a non-erasable optical disk, paved the way for a new low-cost storage technology. In optical storage devices the information is written using a laser beam, which is why they can store large amounts of information. Examples of optical memories are CD-ROM, WORM, erasable optical disks etc.
CD-ROMs are very good for distributing large amounts of information or data to a large number of users. The main advantages of CD-ROMs are:
Large data storage capacity
Inexpensive and fast mass replication
These are removable disks, thus, are suitable for archival storage.
The disadvantages are:
It is read-only, therefore, cannot be updated
Access time is longer than that of a magnetic disk.
Operating system (OS)
An operating system is an essential component of a computer system. Its primary objective is to make the computer system convenient to use and to utilize the hardware resources efficiently. An OS is a large collection of software which manages the resources of the computer system, such as the memory, processor, file system and input/output devices. It also enforces the job priority system: it determines and maintains the order in which jobs are executed, interprets commands and instructions, establishes data security and integrity, and maintains the internal time clock and logs system usage for all users.
Hence it facilitates easy communication between the computer system and the user. The working of the operating system is depicted in the figure given below.
An operating system can be broadly classified into two categories: single-user OS and multi-user OS.
A single-user OS is one that caters to the needs of one computer. Such machines are known as standalone machines. A standalone machine does not know the world outside it. Hence the operating system's work here is to manage the memory, file system and input/output devices. Some examples of single-user OSs are MS-DOS, Windows-98 etc.
A multi-user OS can cater to the needs of a number of computers at a time. A multi-user OS is needed when we have a network of computers, so a multi-user operating system's area of work widens: it has to look after the needs of several systems at a time, allocating processing time to every system as and when the need arises. All these tasks are performed with the help of technologies like multi-tasking and time-sharing, which will be covered in later modules. Some examples of multi-user operating systems are UNIX, Windows-NT etc.
BOOTING PROCESS
When we switch on the computer, it goes through a start-up process before it is ready for use. This process is known as booting in computer jargon. The booting process involves hardware checks, memory checks and the loading of operating system files into the computer's internal memory. The first step of booting is called POST, or Power On Self Test, which checks the computer's peripheral devices and memory. The next step involves ROM, which loads the operating system files into the internal memory of the system. ROM searches for the operating system files in the floppy disk drive, if the system has one; otherwise it looks for those files on the hard disk and loads them.
INTRODUCTION
In the last session, we discussed the internal architecture and the working of computers. We also discussed the use of an operating system. In this session we are going to focus on MS-DOS, a single-user operating system. MS-DOS stands for Microsoft Disk Operating System. Microsoft, the world's biggest software company, developed DOS in 1981 to meet the need for an operating system that could cater to a single user.
In DOS we have two types of commands:
Internal commands
External commands
With the help of these commands we can interact with the computer, storing data and retrieving it as and when the need arises. The file commands work on the files in which the data is actually stored. These files can be arranged in a place called a directory. To work on directories we use directory commands.
File storage system
DOS, like many other operating systems, follows a hierarchical file storage system. This means that data is stored in files, and files are placed in directories and subdirectories. Every OS has a prime work area where it operates; this area is known as the root directory.
A directory structure is similar to a filing cabinet in an office. Information is stored in files and all the files belonging to a particular department are kept in drawers and are catalogued. When some information is required from a particular department, the file can be easily located and retrieved.
In DOS the drawer of the filing cabinet is the directory, containing related information which is stored in different files. The files are catalogued in DOS in a similar fashion as in the filing cabinet, so that the files can be searched easily. The searching and arranging of directories is done with the help of directory commands.
A directory can contain directories or files. A directory contained within a directory is known as a subdirectory, and the directory which contains a subdirectory is known as its parent directory. In the above example, Dir 1 and Dir 2 are subdirectories of Root, and Root is the parent directory of Dir 1 and Dir 2.
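The hierarchical structure described above can be sketched as nested dictionaries, with files as leaf entries; the directory and file names used here (DIR1, DIR2, SUB and so on) are illustrative only:

```python
# A sketch of hierarchical file storage: directories as nested
# dictionaries, files as string-valued entries.

root = {
    "DIR1": {"LETTER.DOC": "file"},
    "DIR2": {"SUB": {"DATA.DBF": "file"}},
}

def find(tree, name, path=""):
    """Return the full path of the first entry called `name`, else None."""
    for entry, child in tree.items():
        here = path + "\\" + entry
        if entry == name:
            return here
        if isinstance(child, dict):       # entry is a subdirectory: recurse
            hit = find(child, name, here)
            if hit:
                return hit
    return None

print(find(root, "DATA.DBF"))   # \DIR2\SUB\DATA.DBF
```

Locating a file is a walk down the tree from the root, which is exactly what a path such as \DIR2\SUB\DATA.DBF records.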
FILE NAMING CONVENTIONS
Each file in DOS has a unique name by which it is recognized. A file name in DOS has two parts, the primary name and the secondary name, or the extension, which are separated by a period (.). The primary name can be up to eight characters and the secondary name can be up to three characters and is optional.
The primary name usually depicts the contents of the file and the secondary name usually depicts the type of the file.
In some software packages the extension is given automatically, e.g. a file created in MS Word has .DOC as its extension. A file name may contain letters (A-Z), digits (0-9) and some special characters (_, &, %, $, @, -, etc.). There are certain special characters that cannot be used in a file name, such as the asterisk (*), question mark (?), plus sign (+) etc.
The following are examples of some valid file names:
LETTERS.DOC
COMMAND.COM
EXPENSES.XLS
PRESENT.PPT
DATA.DBF
The following are examples of some invalid file names:
Introduction.doc
personal.document
question.?
1+2.dig
star*.stud
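The naming rules above can be expressed as a short check. The allowed character set below is an assumed simplification of the full DOS rules, not an exhaustive list:

```python
import re

# A sketch of DOS 8.3 file-name validation: up to 8 characters for the
# primary name and an optional extension of up to 3 characters, drawn
# from letters, digits and a few special characters (an assumed,
# simplified set).

NAME_CHARS = r"[A-Za-z0-9_&%$@#!(){}~^-]"

def is_valid(filename):
    pattern = rf"{NAME_CHARS}{{1,8}}(\.{NAME_CHARS}{{1,3}})?"
    return re.fullmatch(pattern, filename) is not None

print(is_valid("LETTERS.DOC"))       # True
print(is_valid("Introduction.doc"))  # False: primary name has 12 characters
print(is_valid("question.?"))        # False: '?' is not allowed
```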
PROMPT
When you switch on the computer system it goes through a complex start-up process called booting, as discussed in the last session. After the booting process is over, the prompt appears on the screen. This is the place from where the user can issue commands to the computer. The prompt shows the name of the drive from which the system booted, followed by the path of the currently active directory. The prompt could look something like this:
C:\OFFICIAL\CORRES>_
The blinking underscore ( _ ) is known as the cursor; it marks the place where the user types commands.
COMMANDS
To manage the directory structure and to move and copy information from one place to another, DOS has many commands. It also provides certain general commands like DATE, TIME and CLS.
The DATE command displays and changes the date on a system. The system date is stored inside the system.
Syntax: DATE [mm-dd-yy]
Example: DATE 12-20-98
<tablecwidths="2940"><trow><cell>C:>Date
Current date is Wed 01-13-1999
Enter new date (mm-dd-yy)_
</cell></trow></table>In a computer system the date is stored in the American format: the first two digits give the month, the next two the day of the month, and the last two the year.
The TIME command displays and changes the system time. The system time is also stored inside the system.
<tablecwidths="2940"><trow><cell>C:>Time
Current time is 12:20:00.69p
Enter new time:
</cell></trow></table>Syntax: TIME [hh[:mm[:ss[.xx]]]]
Example: TIME 11:40, TIME 12, TIME 10:20:59.79
The output of earlier commands can remain on the screen, and this sometimes looks untidy. To clear the screen of these leftovers you can use the CLS command. It clears the screen and places the prompt at the top left-hand corner.
Syntax: CLS
<tablecwidths="1530"><trow><cell>C:>
</cell></trow></table>Dir
One directory can store many files and subdirectories. If the user forgets where he stored a file, or wants to search for a particular file or subdirectory, he can use the DIR command. The DIR command gives the directory listing of the specified directory; if no directory is specified, it lists the current directory. DIR displays the file name, the size of the file, the date and time of last modification or creation, the space occupied in the specified directory and the unoccupied space.
Syntax: DIR [drivename][path][filename]
Example: DIR C:, DIR A:\Glooks
DIR/p/w
When the user gives the DIR command, the directory listing simply scrolls off the screen. If the listing is too long to fit on one screen, the user cannot see the part that has scrolled off. To avoid this the user can use the DIR/p option, which displays the listing one screen at a time. The DIR command has another option, DIR/w, which displays the directory listing width-wise; with this option only the file names are displayed.
Syntax: DIR [drivename][path][filename] /p /w
Example: DIR A:\looks /p, DIR C: /w/p
The CD command is used to change the current directory. If you do not specify the directory name then this command displays the name of the currently active directory.
Syntax: CD [drivename][path]
Example: CD, CD A:\Gorgeous
The MD command is used to create a new directory.
Syntax: MD [drivename][path]<directory name>
Example: MD mydir, MD A:\official\expenses
The RD command is used to remove the specified directory. There are some prerequisites for using this command: the directory to be removed must be empty and it must not be the current directory.
Syntax: RD [drivename][path]<directory name>
Example: RD mydir, RD C:\dustbin
The COPY command is used to make a duplicate of a file or a group of files. This command takes two parameters: the first is the path and name of the file to be duplicated, and the second is the path of the directory where the duplicate copy is to be placed.
Syntax: COPY [drivename][path]<filename> [drivename][path]
Example: COPY C:\work\project.prg A:
The DEL command is used to delete or erase a file or a group of files.
Syntax: DEL [drivename][path]<filename>
Example: DEL A:\goodlook\hero.doc, DEL letters.doc
To change the name of a file the user can use the REN command. This command also takes two parameters: the first is the source path and name of the file to be renamed, and the second is the new name to be given to the file.
Syntax: REN [drivename][path]<old filename> <new filename>
Example: REN letters.doc notes.doc
A batch file is a file, containing a number of DOS commands which are executed sequentially. We need to type the primary name of the file at the prompt to execute it. All the batch files should have .BAT as their extension. Batch files are used when there is a need to execute a particular command regularly. To create a batch file COPY CON command is used. This is followed by the name of the file which should have .BAT as its extension.
After this the user can enter, one by one, the commands to be stored in the file. When finished, the user presses the Ctrl+Z keys or the F6 function key and then the Enter key to save the file. The batch file can now be executed by just typing the primary name of the file at the prompt.
Example: COPY CON Test.bat
Copy C:\work\project.prg a:
Autoexec.bat File
The Autoexec.bat file has a special status in DOS. This file is automatically executed as soon as the system is booted. DOS searches for this file in the root directory and if it is found then all the commands stored in this file are automatically executed. The Autoexec.bat file usually contains commands to set up the environment of the system such as specifying the path and changing the prompt, etc.