- Input / Output devices are required for users to communicate with the computer.
- These input/output devices are also known as peripherals since they surround the CPU and memory of a computer system.
A computer is a machine that works on the principle of the IPO (Input-Process-Output) cycle: data and instructions are entered, they are processed and stored, and finally the result is given out.
- Data and instructions given to the computer are called input.
- A computer accepts input, examines it and calculates a result; this is called processing.
- The result given by the computer after processing is called output.
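The IPO cycle described above can be sketched as a short program (a hypothetical illustration; the function name and data here are invented for this example, not part of any real system):

```python
# A minimal illustration of the IPO (Input-Process-Output) cycle.
def ipo_cycle(data):
    # Input: data and instructions are entered (here, a list of numbers).
    # Process: the computer examines the data and calculates a result.
    result = sum(data) / len(data)
    # Output: the result is given out.
    return result

average = ipo_cycle([10, 20, 30])  # input stage
print(average)                     # output stage: prints 20.0
```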
- Devices used to provide data and instructions to the computer are called input devices.
- An input device is any hardware device that sends data to the computer.
Some important input devices are:
- Keyboard
- Mouse
- Trackball
- Joystick
- Light Pen
- Touch Pad
- Touch Screen
- Image Scanner
- Optical Character Recognition (OCR)
- Optical Mark Reader (OMR)
- Bar Code Reader
- Magnetic-Ink Character Recognition (MICR)
- Microphone
- Webcam
Keyboard
- The computer keyboard is used to enter text information into the computer, as when you type the contents of a report.
- The most popular keyboard used today is the QWERTY keyboard, which normally has 101 keys.
- The keyboard can also be used to type commands directing the computer to perform certain actions.
- Commands are typically chosen from an on-screen menu using a mouse, but there are often keyboard shortcuts for giving these same commands.
- Laptop computers, which don’t have room for large keyboards, often include a “fn” key so that other keys can perform double duty (such as having a numeric keypad function embedded within the main keyboard keys).
- Most keyboards attach to the PC via a PS/2 connector or, on newer systems, a USB port.
Note :- The QWERTY layout was originally designed for manual typewriters by Christopher Sholes back in 1872.
Mouse
In computing, a mouse is a pointing device that functions by detecting two-dimensional motion relative to its supporting surface. A mouse is a small device used to point to and select items on your computer screen. The mouse's motion typically translates into the motion of a pointer on a display, which allows for fine control of a graphical user interface (GUI).
Note :- In 1963, the computer mouse was invented by Douglas Engelbart of the Stanford Research Institute. The Xerox Alto, developed at Xerox PARC in 1973, was the first computer to come standard with a mouse, and it was also among the first computers to use a graphical user interface. Some people wrongly assume Xerox invented the mouse, but Engelbart and his team at Stanford had filed the patent application in 1967.
Trackball
- The trackball is sort of like an upside-down mouse, with the ball located on top.
- You use your fingers to roll the trackball, and internal rollers (similar to what’s inside a mouse) sense the motion which is transmitted to the computer.
- Trackballs have the advantage over mice in that the body of the trackball remains stationary on your desk, so you don’t need as much room to use the trackball.
- The trackball is an ideal device for CAD/CAM (Computer Aided Design / Computer Aided Manufacturing) applications, because a designer can move the graphics cursor with finger movements alone, without moving the device itself. This suits the working style of designers and makes it easier for them to work on large drawings.
Joystick
A joystick is an input device consisting of a stick that pivots on a base and reports its angle or direction to the device it is controlling. A joystick, also known as the control column, is the principal control device in the cockpit of many civilian and military aircraft, either as a centre stick or side-stick. It often has supplementary switches to control various aspects of the aircraft’s flight.
Light Pen
- A light pen, also called a selector pen, is a computer input device in the form of a light-sensitive wand used in conjunction with a computer’s display.
- It allows the user to point to displayed objects or draw on the screen in a similar way to a touch screen but with greater positional accuracy.
- A light pen uses a photoelectric (light-sensitive) cell and an optical lens mounted in a pen-shaped case.
- The light-sensitive cell and lens assembly is arranged so that any light in its field of view is focused onto the cell.
- Users of Computer Aided Design (CAD) applications commonly use light pens to draw directly on the screen.
Touch Pad
Most laptop computers today have a touch pad pointing device. You can move the on-screen cursor by sliding your finger along the surface of the touch pad. The buttons are located below the pad, but most touch pads allow you to perform “mouse clicks” by tapping on the pad itself.
Note :- According to “Smart Computing Encyclopedia”, the touchpad was invented by George E. Gerpheide in 1988. Apple Computer was the first to license and use the touch pad in its Powerbook laptops in 1994.
Touch Screen
- A touch screen is an electronic visual display that can detect the presence and location of a touch within the display area.
- The term generally refers to touching the display of the device with a finger or hand. Touch screens can also sense other passive objects, such as a stylus. Touch screens are common in devices such as game consoles, all-in-one computers, tablet computers, and smart phones.
- The touch screen has two main attributes. First, it enables one to interact directly with what is displayed, rather than indirectly with a pointer controlled by a mouse or touchpad. Secondly, it lets one do so without requiring any intermediate device that would need to be held in the hand (other than a stylus, which is optional for most modern touch screens).
- Such displays can be attached to computers, or to networks as terminals.
- They also play an important role in the design of digital appliances such as personal digital assistants (PDAs), satellite navigation devices, mobile phones, ATMs and video games.
- The first touch screen was a capacitive touch screen, invented by E.A. Johnson at the Royal Radar Establishment, Malvern, UK, around 1965–1967.
- In 1971, a “touch sensor” was developed by Doctor Sam Hurst.
- In 1974, the first true touch screen incorporating a transparent surface came on the scene developed by Sam Hurst and Elographics.
- In 1983, the computer manufacturing company Hewlett-Packard introduced the HP-150, a personal computer with touch screen technology.
- The nineties introduced smart phones and handhelds with touch screen technology. In 1993, Apple released the Newton PDA, equipped with handwriting recognition; and IBM released the first smart phone called Simon, which featured a calendar, note pad, and fax function, and a touch screen interface that allowed users to dial phone numbers.
Image Scanner
In computing, an image scanner (often abbreviated to just “scanner”) is a device that optically scans images, printed text, handwriting or an object, and converts it to a digital image.
- A flatbed scanner is usually composed of a glass pane (or platen), under which there is a bright light (often xenon or cold cathode fluorescent).
- To scan a document, the user places it face down on the glass plate. When activated, a bright light below the glass moves horizontally from one end to the other.
- Hand scanners come in two forms: document and 3D scanners. Hand held document scanners are manual devices that are dragged across the surface of the image to be scanned.
- Scanning documents in this manner requires a steady hand, as an uneven scanning rate produces distorted images; a small indicator light on the scanner shows when the motion is too fast.
Optical Character Recognition (OCR)
- Optical character recognition, usually abbreviated to OCR, is the mechanical or electronic conversion of scanned images of handwritten, typewritten or printed text into machine-encoded text.
- OCR is widely used as a form of data entry from some sort of original paper data source, whether documents, sales receipts, mail, or any number of printed records.
- It is a common method of digitizing printed texts so that they can be electronically searched, stored more compactly, displayed on-line, and used in machine processes such as machine translation, text-to-speech and text mining. OCR is a field of research in pattern recognition, artificial intelligence and computer vision.
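The core idea of OCR, matching a scanned glyph against known character shapes, can be sketched with a toy template matcher (the 3x5 bitmaps, scores and sample characters below are invented for illustration; real OCR engines are far more sophisticated):

```python
# Toy OCR by template matching: each character is stored as a 3x5 bitmap
# of "1" (ink) and "0" (blank) pixels. These templates are made up for
# this example and are not a real OCR font.
TEMPLATES = {
    "I": ["111", "010", "010", "010", "111"],
    "L": ["100", "100", "100", "100", "111"],
    "T": ["111", "010", "010", "010", "010"],
}

def match_score(glyph, template):
    # Count how many pixels of the scanned glyph agree with the template.
    return sum(g == t for grow, trow in zip(glyph, template)
                      for g, t in zip(grow, trow))

def recognize(glyph):
    # Pick the character whose template agrees on the most pixels.
    return max(TEMPLATES, key=lambda ch: match_score(glyph, TEMPLATES[ch]))

scanned = ["111", "010", "010", "010", "111"]  # a clean letter "I"
print(recognize(scanned))  # I
```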
Optical Mark Recognition (OMR)
- Optical mark recognition (also called optical mark reading and OMR) is the process of capturing human-marked (for example – Pen or Pencil mark) data from document forms such as surveys and tests.
- The method used by an OMR device for recognition of marks on a document involves focusing a light beam on the document and detecting the reflected light pattern from the marks (for example – Pen or Pencil mark).
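The reflected-light principle above can be sketched as follows (the pixel values and threshold are invented for illustration: low values stand for a dark pencil mark that absorbs the light beam, high values for blank paper that reflects it):

```python
# Toy OMR: decide which answer bubbles are marked by measuring how much
# light each bubble region reflects. Values near 0 are dark pencil marks;
# values near 255 reflect almost all of the light beam.
def marked_choices(bubbles, threshold=128):
    # A bubble counts as marked when its average reflected light is low,
    # because a pencil mark absorbs the light beam instead of reflecting it.
    return [choice for choice, pixels in bubbles.items()
            if sum(pixels) / len(pixels) < threshold]

# Hypothetical scan of one survey question with four bubbles A-D.
question = {
    "A": [250, 248, 252, 251],   # blank: reflects almost all light
    "B": [30, 22, 41, 35],       # pencil mark: absorbs the light
    "C": [247, 253, 249, 250],
    "D": [245, 251, 246, 252],
}
print(marked_choices(question))  # ['B']
```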
Bar Code Reader
- A barcode reader (or barcode scanner) is an electronic device for reading printed barcodes.
- Like a flatbed scanner, it consists of a light source, a lens and a light sensor translating optical impulses into electrical ones. Additionally, nearly all barcode readers contain decoder circuitry analyzing the barcode’s image data provided by the sensor and sending the barcode’s content to the scanner’s output port.
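The decoder stage can be sketched as a toy width classifier (this is not a real symbology such as EAN or Code 39; the widths and bit mapping are invented for illustration):

```python
# Toy barcode decoder stage: the light sensor yields a run-length sequence
# of bar/space widths, and the decoder classifies each element as narrow
# (bit 0) or wide (bit 1) before packing the bits into a number.
def decode_widths(widths):
    narrow = min(widths)  # assume at least one narrow element was scanned
    # An element roughly twice the narrow width is treated as wide.
    return [1 if w >= 1.5 * narrow else 0 for w in widths]

def bits_to_number(bits):
    # Pack the bits into an integer, most significant bit first.
    value = 0
    for b in bits:
        value = (value << 1) | b
    return value

scan = [1, 2, 1, 1, 2, 2, 1, 2]  # measured element widths from the sensor
bits = decode_widths(scan)
print(bits)                  # [0, 1, 0, 0, 1, 1, 0, 1]
print(bits_to_number(bits))  # 77
```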
Magnetic Ink Character Recognition (MICR)
- Magnetic Ink Character Recognition, or MICR, is a character recognition technology used primarily by the banking industry to facilitate the processing of cheques; it makes up the routing number and account number printed at the bottom of a cheque.
- The technology allows computers to read information (such as account numbers) off printed documents. Unlike barcodes or similar technologies, however, MICR codes can be easily read by humans.
- MICR characters are printed in special typefaces with a magnetic ink or toner, usually containing iron oxide. As a machine decodes the MICR text, it first magnetizes the characters in the plane of the paper. Then the characters are passed over a MICR read head, a device similar to the playback head of a tape recorder. As each character passes over the head it produces a unique waveform that can be easily identified by the system.
Microphone
- A microphone, often paired with speech-recognition software, is a speech input device.
- A microphone can be attached to a computer to record sound (usually through a sound card input or circuitry built into the motherboard).
- To operate it, the user speaks into a microphone connected to the computer, which also needs a sound card (or equivalent on-board audio circuitry). The sound is digitized, that is, turned into numbers that represent the original analog sound waves, and stored in the computer for later processing and playback.
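The digitization step can be sketched in a few lines (the 8000 Hz sample rate, 8-bit depth and 440 Hz test tone are illustrative choices for this sketch, not a claim about any particular sound card):

```python
# Sketch of how a sound card digitizes a microphone signal: the analog
# wave is sampled at regular time intervals and each sample is quantized
# to an integer that the computer can store.
import math

def digitize(frequency_hz, duration_s, sample_rate=8000, bit_depth=8):
    max_level = 2 ** (bit_depth - 1) - 1  # 127 for 8-bit signed samples
    n_samples = int(sample_rate * duration_s)
    samples = []
    for n in range(n_samples):
        t = n / sample_rate                                # sample time
        analog = math.sin(2 * math.pi * frequency_hz * t)  # analog wave
        samples.append(round(analog * max_level))          # quantize
    return samples

tone = digitize(440, 0.01)  # 10 ms of a 440 Hz tone
print(len(tone))            # 80 samples at 8000 Hz
```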
Webcam
A webcam is a video camera that feeds its images in real time to a computer or computer network, often via USB, Ethernet, or Wi-Fi.
Its most popular use is the establishment of video links, permitting computers to act as videophones or videoconference stations. This common use as a video camera for the World Wide Web gave the webcam its name. Other popular uses include security surveillance, computer vision, video broadcasting and recording social videos.
- The first webcam, developed in 1991, was pointed at the Trojan Room coffee pot in the Cambridge University Computer Science Department. The camera was finally switched off on August 22, 2001.
- The first commercial webcam, the black-and-white QuickCam, entered the marketplace in 1994, created by the U.S. computer company Connectix (which sold its product line to Logitech in 1998).