Sunday, March 23, 2008

Headphone

Headphones

Headphones (also known as earphones, earbuds, stereophones, headsets, handsfree phones, or by the slang terms cans or face plugs) are a pair of small loudspeakers, or less commonly a single speaker, with a way of holding them close to a user's ears and a means of connecting them to a stereophonic, monophonic or binaural audio-frequency signal source such as an audio amplifier, radio or CD player. In the context of telecommunication, the term headset describes a combination of headphone and microphone used for two-way communication, for example with a telephone.


History

The telephone earpiece was common around the turn of the 20th century, and from it the headphone developed. Sensitive headphones were the only way to listen to audio signals before amplifiers were developed.[1]



Brandes radio headphones, circa 1920

Very sensitive headphones such as those manufactured by Brandes around 1919 were commonly used for early radio work. These early headphones used moving iron drivers, either single-ended or balanced armature. The requirement for high sensitivity meant no damping was used, so the sound quality was crude. They were also far less comfortable than modern types, usually having no padding and often an excessive clamping force on the head. Impedance varied, but 1,000 to 2,000 ohms was common, which suited both triodes and crystal sets.

When used with early powered radios, the headphone was normally connected to the positive high voltage battery terminal, and the other battery terminal was securely earthed. The use of bare electrical connections meant some users could be shocked if they touched the bare headphone connections while adjusting an uncomfortable headset.

In this pre-war era, headphones were called telephones, or sometimes simply phones, rather than headphones.

Headsets

Headphones that include a microphone are more commonly known as headsets, and are more often targeted at communication than recreational listening.[2] Communication headsets come in three main types: wired telephone, computer, and mobile telephone. Telephone headsets are usually connected to a fixed-line (PSTN) telephone terminal, replacing its handset, so users can talk on the phone while they work on other things; they are very common in call centers and offices, but are sometimes used at home. Computer headsets come in two types: standard 3.5 mm plugs for connecting to the sound card of a PC, or USB. Both are very commonly used for VoIP communication, but USB headsets tend to have better sound quality. Gamers might choose a headset in order to both hear the game sounds and talk to their fellow gamers, and computer users who employ speech recognition software often use one. Mobile phone headsets come in two types, wired and wireless. Wired mobile phone headsets, sometimes called "mobile handsfree", are basically a pair of earphones with a microphone module on the cable. Wireless mobile handsfree also comes in different types; the most common nowadays is the Bluetooth headset worn on the ear. Wireless and wired intercom units used by the coaching staff of professional sports and for broadcast control and event management often employ a headset attachment.


Electrostatic

Electrostatic loudspeaker diagram

Electrostatic drivers consist of a thin, electrically charged diaphragm, typically a coated PET film membrane, suspended between two perforated metal plates (electrodes). The electrical sound signal is applied to the electrodes, creating an electrical field; depending on the polarity of this field, the diaphragm is drawn towards one of the plates. Air is forced through the perforations; combined with a continuously changing electrical signal driving the membrane, a sound wave is generated. Electrostatic headphones are usually more expensive than moving-coil ones, and are comparatively uncommon. In addition, a special amplifier is required to amplify the signal enough to deflect the membrane, which often requires electrical potentials in the range of 100 to 1,000 volts.

Due to the extremely thin and light diaphragm membrane, often only a few micrometers thick, and the complete absence of moving metalwork, the frequency response of electrostatic headphones usually extends well above the audible limit of approximately 20 kHz. The high-frequency response means that the low midband distortion level is maintained to the top of the audible frequency band, which is generally not the case with moving-coil drivers. Also, the frequency-response peakiness regularly seen in the high-frequency region with moving-coil drivers is absent. The result is significantly better sound quality, if designed properly.

Electrostatic headphones are powered by anything from 100 V to over 1 kV, and operate in proximity to a user's head. The usual method of making this safe is to limit the possible fault current to a low and safe value with resistors.
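As a rough illustration of that safety approach, the resistor value follows from Ohm's law, R = V / I. A minimal sketch in Python; the 600 V bias and 1 mA fault-current limit are illustrative assumptions, not values from any particular headphone:

    # Sizing a series current-limiting resistor for an electrostatic
    # headphone supply via Ohm's law. The bias voltage and safe fault
    # current below are illustrative assumptions.
    bias_voltage = 600.0    # volts (assumed drive/bias potential)
    safe_current = 0.001    # amperes (assumed maximum fault current)

    min_resistance = bias_voltage / safe_current   # R = V / I
    print(f"Minimum series resistance: {min_resistance / 1e6:.1f} megohm")
    # -> Minimum series resistance: 0.6 megohm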

Uninterruptible Power Supply


Uninterruptible power supply


An uninterruptible power supply (UPS), also known as a continuous power supply (CPS) or a battery backup, is a device which maintains a continuous supply of electric power to connected equipment by supplying power from a separate source when utility power is not available. It differs from an auxiliary power supply or standby generator, which does not provide instant protection from a momentary power interruption; a UPS can, however, supply uninterrupted power for 1 to 20 minutes until a generator can be turned on. Integrated systems that have UPS and standby generator components are often referred to as emergency power systems.

There are three distinct UPS types:

  • off-line: remains idle until a power failure occurs, and then switches from utility power to its own power source, almost instantaneously.
  • line-interactive: similar to the off-line type, but corrects modest undervoltages and overvoltages with a multi-tap autotransformer (described below) rather than switching to battery.
  • on-line: continuously powers the protected load from its energy reserves stored in a lead-acid battery or flywheel, while simultaneously replenishing the reserves from the AC power. It also provides protection against all common power problems, and for this reason it is also known as a power conditioner or a line conditioner.

While not limited to safeguarding any particular type of equipment, a UPS is typically used to protect computers, telecommunication equipment or other electrical equipment where an unexpected power disruption could cause injuries, fatalities, serious business disruption or data loss. UPS units come in sizes ranging from units which will back up a single computer without monitor (around 200 VA) to units which will power entire data centers or buildings (several megawatts).

Historically, UPSs were expensive and were most likely to be used on expensive computer systems and in areas where the power supply is interrupted frequently. As prices have fallen, UPS units have become an essential piece of equipment for data centers and business computers, and are also used for personal computers, entertainment systems and more.

Offline / standby

Offline / standby UPS. Typical protection time: 0 - 20 minutes. Capacity expansion: usually not available.

The offline / standby UPS offers only the most basic features, providing surge protection and battery backup. It usually offers no battery capacity monitoring or self-test capability, making it the least reliable type of UPS, since it could fail at any moment without warning. These are also the least expensive, selling for as little as US$75. A standby UPS may be worse than using nothing at all, because it gives the user a false sense of security, assuring protection that may not work when needed the most.

With this type of UPS, a user's equipment is normally connected directly to incoming utility power, with the same voltage transient clamping devices used in a common surge-protected plug strip connected across the power line. When the incoming utility voltage falls below a predetermined level, the UPS turns on its internal DC-AC inverter circuitry, which is powered from an internal storage battery, and then mechanically switches the connected equipment onto its DC-AC inverter output. The switch-over time is stated by most manufacturers as being less than 4 milliseconds, but can typically be as long as 25 milliseconds, depending on how long it takes the standby UPS to detect the lost utility voltage.
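A minimal sketch of that detect-and-transfer loop in Python, assuming the UPS polls the input voltage at a fixed interval; the threshold, sample period, and voltage trace are illustrative assumptions, not manufacturer values:

    # Standby-UPS transfer logic: poll utility voltage and switch the load
    # to the inverter when it sags below a cutoff. The polling interval is
    # part of why the transfer takes a few milliseconds.
    SAMPLE_MS = 2        # assumed polling interval, milliseconds
    LOW_VOLTS = 100.0    # assumed cutoff for 120 V nominal utility power

    def run_ups(voltage_samples):
        on_battery = False
        for t, volts in enumerate(voltage_samples):
            if volts < LOW_VOLTS and not on_battery:
                on_battery = True    # start inverter, throw transfer relay
                print(f"Transfer to battery at ~{t * SAMPLE_MS} ms")
            elif volts >= LOW_VOLTS and on_battery:
                on_battery = False   # utility restored, transfer back
                print(f"Return to utility at ~{t * SAMPLE_MS} ms")

    # Simulated trace: normal power, a dropout, then recovery.
    run_ups([120.0, 119.5, 0.0, 0.0, 0.0, 118.0, 120.0])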

Line-interactive

Line-interactive UPS. Typical protection time: 5 - 30 minutes. Capacity expansion: several hours.

The Line-Interactive UPS is similar in operation to a Standby UPS, but with the addition of a multi-tap variable-voltage autotransformer. This is a special type of electrical transformer that can add or subtract powered coils of wire, thereby increasing or decreasing the magnetic field and the output voltage of the transformer.

This type of UPS is able to tolerate continuous undervoltage brownouts and overvoltage surges without consuming the limited reserve battery power. It instead compensates by auto-selecting different power taps on the autotransformer. Changing the autotransformer tap can cause a very brief output power disruption, so the UPS may chirp for a moment, as it briefly switches to battery before changing the selected power tap.

Autotransformers can be engineered to cover a wide range of varying input voltages, but this also increases the number of taps and the size, weight, complexity, and expense of the UPS. It is common for the autotransformer to only cover a range from about 90 V to 140 V for 120 V power, and then switch to battery if the voltage goes much higher or lower than that range.

In low-voltage conditions the UPS will use more current than normal, so it may need a higher-current circuit than a normal device. For example, to power a 1,000-watt device at 120 volts, the UPS draws 8.33 amps. If a brownout occurs and the voltage drops to 100 volts, the UPS draws 10 amps to compensate. This also works in reverse: in an overvoltage condition, the UPS needs less current.
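Those figures follow directly from I = P / V; a quick check in Python:

    # Current draw in the brownout example above, from I = P / V.
    power_watts = 1000.0
    for volts in (120.0, 100.0):
        amps = power_watts / volts
        print(f"{power_watts:.0f} W at {volts:.0f} V -> {amps:.2f} A")
    # 1000 W at 120 V -> 8.33 A
    # 1000 W at 100 V -> 10.00 A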

Scanner


Image scanner

In computing, a scanner is a device that optically scans images, printed text, handwriting, or an object, and converts it to a digital image. Common examples found in offices are variations of the desktop (or flatbed) scanner where the document is placed on a glass window for scanning. Hand-held scanners, where the device is moved by hand, have evolved from text scanning "wands" to 3D scanners used for industrial design, reverse engineering, test and measurement, orthotics, gaming and other applications. Mechanically driven scanners that move the document are typically used for large-format documents, where a flatbed design would be impractical.

Modern scanners typically use a charge-coupled device (CCD) or a Contact Image Sensor (CIS) as the image sensor, whereas older drum scanners use a photomultiplier tube as the image sensor. A rotary scanner, used for high-speed document scanning, is another type of drum scanner, using a CCD array instead of a photomultiplier. Other types of scanners are planetary scanners, which take photographs of books and documents, and 3D scanners, for producing three-dimensional models of objects.

Another category of scanner is digital camera scanners, which are based on the concept of reprographic cameras. Due to increasing resolution and new features such as anti-shake, digital cameras have become an attractive alternative to regular scanners. While still having disadvantages compared to traditional scanners (such as distortion, reflections, shadows, and low contrast), digital cameras offer advantages such as speed, portability, and the gentle digitizing of thick documents without damaging the book spine. New scanning technologies are combining 3D scanners with digital cameras to create full-color, photo-realistic 3D models of objects.

Types

Drum

Drum scanners capture image information with photomultiplier tubes (PMT), rather than the charge-coupled device (CCD) arrays found in flatbed scanners and inexpensive film scanners. Reflective and transmissive originals are mounted on an acrylic cylinder, the scanner drum, which rotates at high speed while it passes the object being scanned in front of precision optics that deliver image information to the PMTs. Most modern color drum scanners use 3 matched PMTs, which read red, blue, and green light respectively. Light from the original artwork is split into separate red, blue, and green beams in the optical bench of the scanner.

The drum scanner gets its name from the large glass drum on which the original artwork is mounted for scanning: they usually take 11"x17" documents, but maximum size varies by manufacturer. One of the unique features of drum scanners is the ability to control sample area and aperture size independently. The sample size is the area that the scanner encoder reads to create an individual pixel. The aperture is the actual opening that allows light into the optical bench of the scanner. The ability to control aperture and sample size separately is particularly useful for smoothing film grain when scanning black-and-white and color negative originals.

While drum scanners are capable of scanning both reflective and transmissive artwork, a good-quality flatbed scanner can produce excellent scans from reflective artwork. As a result, drum scanners are rarely used to scan prints now that high quality inexpensive flatbed scanners are readily available. Film, however, is where drum scanners continue to be the tool of choice for high-end applications. Because film can be wet-mounted to the scanner drum and because of the exceptional sensitivity of the PMTs, drum scanners are capable of capturing very subtle details in film originals.

Only a few companies continue to manufacture drum scanners. While prices of both new and used units have come down over the last decade, they still require a considerable monetary investment when compared to CCD flatbed and film scanners. However, drum scanners remain in demand due to their capacity to produce scans that are superior in resolution, color gradation, and value structure. Also, since drum scanners are capable of resolutions up to 12,000 PPI, their use is generally recommended when a scanned image is going to be enlarged.
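To see what 12,000 PPI means in practice, pixel dimensions scale linearly with resolution. A back-of-the-envelope sketch in Python; the 35 mm frame dimensions are an illustrative assumption:

    # Pixel dimensions and raw file size when scanning a 35 mm film frame
    # (assumed to be 1.42 x 0.94 inches) at typical drum-scanner
    # resolutions, at 3 bytes (24-bit RGB) per pixel.
    width_in, height_in = 1.42, 0.94   # assumed original size, inches

    for ppi in (2000, 4000, 12000):
        w_px, h_px = round(width_in * ppi), round(height_in * ppi)
        raw_mb = w_px * h_px * 3 / 1e6
        print(f"{ppi:6d} PPI: {w_px} x {h_px} px, ~{raw_mb:,.0f} MB raw")
    # 12000 PPI: 17040 x 11280 px, ~577 MB raw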

In most graphic-arts operations, very-high-quality flatbed scanners have replaced drum scanners, being both less expensive and faster. However, drum scanners continue to be used in high-end applications, such as museum-quality archiving of photographs and print production of high-quality books and magazine advertisements. In addition, due to the greater availability of pre-owned units many fine-art photographers are acquiring drum scanners, which has created a new niche market for the machines.

Pen Drive

USB flash drive

A USB flash drive is a NAND-type flash memory data storage device integrated with a USB (universal serial bus) interface. USB flash drives are typically removable and rewritable, much shorter than a floppy disk (1 to 4 inches or 2.5 to 10 cm), and weigh less than 2 ounces (60 g). Storage capacities typically range from 64 MB to 64 GB,[1] with steady improvements in size and price per gigabyte. Some allow 1 million write or erase cycles[2][3] and have 10-year data retention,[4] connected by USB 1.1 or USB 2.0. USB memory card readers are also available; rather than being built in, the memory is a removable flash memory card housed in what is otherwise a regular USB flash drive, as described below.

USB flash drives offer potential advantages over other portable storage devices, particularly the floppy disk. They are more compact, faster, hold much more data, have a more durable design, and are more reliable because they have no moving parts. Additionally, it has become increasingly common for computers to ship without floppy disk drives. USB ports, on the other hand, appear on almost every current mainstream PC and laptop. These types of drives use the USB mass storage standard, supported natively by modern operating systems such as Windows, Mac OS X, Linux, and other Unix-like systems. USB drives with USB 2.0 support can also be faster than an optical disc drive, while storing a larger amount of data in a much smaller space.

Nothing actually moves in a flash drive: it is called a drive because it is designed to read and write data using the same system commands as a mechanical disk drive, appearing to the computer operating system and user interface as just another drive.[3]


A flash drive consists of a small printed circuit board protected inside a plastic, metal, or rubberised case, robust enough to be carried with no additional protection, in a pocket or on a key chain for example. The USB connector is protected by a removable cap or by retracting into the body of the drive, although it is not liable to be damaged if exposed. Most flash drives use a standard type-A USB connection allowing them to be plugged into a port on a personal computer.

To access the drive it must be connected to a USB port, which powers the drive and allows it to send and receive data. Some flash drives, especially high-speed drives, may require more power than the limited amount provided by a bus-powered USB hub, such as those built into some computer keyboards or monitors. These drives will not work properly unless plugged directly into a host controller (i.e., the ports found on the computer itself) or a self-powered hub.

Design and implementation

One end of the device is fitted with a single male type-A USB connector. Inside the plastic casing is a small printed circuit board. Mounted on this board is some simple power circuitry and a small number of surface-mounted integrated circuits (ICs). Typically, one of these ICs provides the interface to the USB port, another drives the onboard memory, and a third is the flash memory itself.

Drives typically use the USB mass storage device class to communicate with the host.
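Because the drive speaks the mass storage device class, the host operating system simply sees another block device. A minimal sketch of reading its first sector, assuming a Linux host where the drive happens to enumerate as /dev/sdb (a hypothetical path) and the script has permission to read it:

    # Read the first 512-byte sector of a flash drive exposed as a block
    # device. Assumes Linux and the hypothetical device node /dev/sdb;
    # double-check the path, since this reads whatever disk it names.
    DEVICE = "/dev/sdb"   # assumed device node for the flash drive

    with open(DEVICE, "rb") as dev:
        sector = dev.read(512)   # usually the master boot record

    # A classic MBR ends with the boot signature 0x55 0xAA.
    print("Boot signature present:", sector[510:512] == b"\x55\xaa")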


Internals of a typical USB flash drive

1 USB connector
2 USB mass storage controller device
3 Test points
4 Flash memory chip
5 Crystal oscillator
6 LED
7 Write-protect switch
8 Space for second flash memory chip

Essential components

There are typically four parts to a flash drive:

  • Male type-A USB connector — provides an interface to the host computer.
  • USB mass storage controller — implements the USB mass storage device class, presenting the flash memory to the host as a block device. The controller contains a small microcontroller with a small amount of on-chip ROM and RAM.
  • NAND flash memory chip — stores data. NAND flash is typically also used in digital cameras.
  • Crystal oscillator — produces the device's main 12 MHz clock signal and controls the device's data output through a phase-locked loop.

Additional components

The typical device may also include:

  • Jumpers and test pins — for testing during the flash drive's manufacture or for loading code into the microcontroller.
  • LEDs — indicate data transfers or data reads and writes.
  • Write-protect switches — indicate whether the device should be in "write-protection" mode.
  • Unpopulated space — provides space to include a second memory chip. Having this second space allows the manufacturer to develop only one printed circuit board that can be used for more than one storage size device, to meet the needs of the market.
  • USB connector cover or cap — reduces the risk of damage and prevents the ingress of fluff or other contaminants, and improves overall device appearance. Some flash drives do not feature a cap, but instead have retractable USB connectors. Other flash drives have a "swivel" cap that is permanently connected to the drive itself and eliminates the chance of losing the cap.
  • Transport aid — the cap or the main body often contains a hole suitable for connection to a key chain or lanyard.

Modem

Modem

A modem (from modulator-demodulator) is a device that modulates an analog carrier signal to encode digital information, and also demodulates such a carrier signal to decode the transmitted information. The goal is to produce a signal that can be transmitted easily and decoded to reproduce the original digital data. Modems can be used over any means of transmitting analog signals, from light-emitting diodes to radio.

Increasing speeds (V.21, V.22, V.22bis)

A 2400 bit/s modem for a laptop.

The 300 bit/s modems used frequency-shift keying to send data. In this system the stream of 1s and 0s in computer data is translated into sounds which can be easily sent on the phone lines. In the Bell 103 system the originating modem sends 0s by playing a 1070 Hz tone, and 1s at 1270 Hz, with the answering modem putting its 0s on 2025 Hz and 1s on 2225 Hz. These frequencies were chosen carefully: they are in the range that suffers minimum distortion on the phone system, and they are not harmonics of each other.
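A minimal sketch of that originate-side frequency-shift keying in Python; the tone pair and 300 bit/s rate are from the text, while the 8 kHz sample rate is an illustrative assumption:

    import math

    # Bell 103-style FSK, originate side: each bit becomes a burst of
    # 1070 Hz (0) or 1270 Hz (1), with continuous phase across bits.
    SAMPLE_RATE = 8000               # samples per second (assumed)
    BIT_RATE = 300                   # bits per second, per the standard
    TONES = {0: 1070.0, 1: 1270.0}   # originate-side frequencies, Hz

    def fsk_modulate(bits):
        samples_per_bit = SAMPLE_RATE // BIT_RATE
        out, phase = [], 0.0
        for bit in bits:
            step = 2 * math.pi * TONES[bit] / SAMPLE_RATE
            for _ in range(samples_per_bit):
                out.append(math.sin(phase))
                phase += step          # keep phase continuous
        return out

    waveform = fsk_modulate([1, 0, 1, 1, 0])
    print(f"{len(waveform)} samples for 5 bits")   # -> 130 samples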

In the 1200 bit/s and faster systems, phase-shift keying was used. In this system the two tones for any one side of the connection are sent at similar frequencies as in the 300 bit/s systems, but slightly out of phase. By comparing the phase of the two signals, 1s and 0s could be recovered: for instance, signals 90 degrees out of phase represented the two digits "1,0", while 180 degrees represented "1,1". In this way each cycle of the signal represents two digits instead of one. 1200 bit/s modems were, in effect, 600 symbols per second modems (600 baud modems) with 2 bits per symbol.
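That last step in miniature; the full four-phase mapping below extends the two cases named above with an assumed Gray-coded assignment:

    # Dibit-to-phase mapping for a 4-phase PSK modem. Only the 90- and
    # 180-degree entries come from the text; the rest are an assumed
    # Gray-coded completion.
    PHASE_TO_DIBIT = {0: (0, 0), 90: (1, 0), 180: (1, 1), 270: (0, 1)}

    baud, bits_per_symbol = 600, 2   # four phases carry 2 bits each
    print("bit rate:", baud * bits_per_symbol, "bit/s")   # -> 1200 bit/s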

Voiceband modems generally remained at 300 and 1200 bit/s (V.21 and V.22) into the mid 1980s. A V.22bis 2400-bit/s system similar in concept to the 1200-bit/s Bell 212 signalling was introduced in the U.S., and a slightly different one in Europe. By the late 1980s, most modems could support all of these standards and 2400-bit/s operation was becoming common.

For more information on baud rates versus bit rates, see the companion article List of device bandwidths.

Broadband

ADSL modems, a more recent development, are not limited to the telephone's "voiceband" audio frequencies. Some ADSL modems use discrete multitone modulation (DMT), a coded form of orthogonal frequency-division modulation.

Cable modems use a range of frequencies originally intended to carry RF television channels. Multiple cable modems attached to a single cable can use the same frequency band, using a low-level media access protocol to allow them to work together within the same channel. Typically, 'up' and 'down' signals are kept separate using frequency division multiple access.

New types of broadband modems are beginning to appear, such as two-way satellite and power line modems.

Broadband modems should still be classed as modems, since they use complex waveforms to carry digital data. They are more advanced devices than traditional dial-up modems as they are capable of modulating/demodulating hundreds of channels simultaneously.

Many broadband modems include the functions of a router (with Ethernet and WiFi ports) and other features such as DHCP, NAT and firewall features.

When broadband technology was introduced, networking and routers were unfamiliar to consumers. However, many people knew what a modem was as most internet access was through dial-up. Due to this familiarity, companies started selling broadband modems using the familiar term "modem" rather than vaguer ones like "adapter" or "transceiver".

Many broadband modems must be configured in bridge mode before they can use a router.

Processor

Microprocessor

A microprocessor incorporates most or all of the functions of a central processing unit (CPU) on a single integrated circuit (IC).[1] The first microprocessors emerged in the early 1970s and were used for electronic calculators, using BCD arithmetic on 4-bit words. Other embedded uses of 4- and 8-bit microprocessors, such as terminals, printers, and various kinds of automation, followed rather quickly. Affordable 8-bit microprocessors with 16-bit addressing also led to the first general-purpose microcomputers in the mid-1970s.

Computer processors were for a long period constructed out of small and medium-scale ICs containing the equivalent of a few to a few hundred transistors. The integration of the whole CPU onto a single VLSI chip therefore greatly reduced the cost of processing capacity. From their humble beginnings, continued increases in microprocessor capacity have rendered other forms of computers almost completely obsolete (see history of computing hardware), with one or more microprocessors serving as the processing element in everything from the smallest embedded systems and handheld devices to the largest mainframes and supercomputers.

Since the early 1970s, the increase in capacity of microprocessors has been known to generally follow Moore's Law, which suggests that the complexity of an integrated circuit, with respect to minimum component cost, doubles every 18 months. In the late 1990s, heat generation (TDP), due to current leakage and other factors, emerged as a leading developmental constraint[2].
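Taken at face value, an 18-month doubling compounds dramatically. A quick sketch in Python; the 1971 starting point of roughly 2,300 transistors (the Intel 4004) is an assumption for illustration:

    # Transistor counts under an assumed strict 18-month doubling,
    # starting from the Intel 4004's ~2,300 transistors in 1971.
    START_YEAR, START_COUNT = 1971, 2300

    for year in range(1971, 2008, 6):
        months = (year - START_YEAR) * 12
        count = START_COUNT * 2 ** (months / 18)
        print(f"{year}: ~{count:,.0f} transistors")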

Notable 8-bit designs

Intel's 4004 was followed in 1972 by the 8008, the world's first 8-bit microprocessor. These processors were the precursors to the very successful Intel 8080 (1974), Zilog Z80 (1976), and derivative Intel 8-bit processors. The competing Motorola 6800 was released in August 1974. Its architecture was cloned and improved in the MOS Technology 6502 in 1975, rivaling the Z80 in popularity during the 1980s.

Both the Z80 and 6502 concentrated on low overall cost, by combining small packaging, simple computer bus requirements, and including circuitry that normally must be provided in a separate chip (example: the Z80 included a memory controller). It was these features that allowed the home computer "revolution" to accelerate sharply in the early 1980s, eventually delivering such inexpensive machines as the Sinclair ZX-81, which sold for US$99.

The Western Design Center, Inc. (WDC) introduced the CMOS 65C02 in 1982 and licensed the design to several firms. It became the core of the Apple IIc and IIe personal computers, medical implantable-grade pacemakers and defibrillators, and automotive, industrial and consumer devices. WDC pioneered the licensing of microprocessor technology, which was later followed by ARM and other microprocessor intellectual property (IP) providers in the 1990s.

Motorola trumped the entire 8-bit market by introducing the MC6809 in 1978, arguably one of the most powerful, orthogonal, and clean 8-bit microprocessor designs ever fielded, and also one of the most complex hardwired logic designs that ever made it into production for any microprocessor. Microcoding replaced hardwired logic at about this time for all designs more powerful than the MC6809, because the design requirements were getting too complex for hardwired logic.

Another early 8-bit microprocessor was the Signetics 2650, which enjoyed a brief surge of interest due to its innovative and powerful instruction set architecture.

A seminal microprocessor in the world of spaceflight was RCA's RCA 1802 (aka CDP1802, RCA COSMAC), introduced in 1976, which was used in NASA's Voyager and Viking space probes of the 1970s, and onboard the Galileo probe to Jupiter (launched 1989, arrived 1995). The RCA COSMAC was the first microprocessor to implement CMOS technology. The CDP1802 was used because it could run at very low power, and because its production process (silicon on sapphire) ensured much better protection against cosmic radiation and electrostatic discharge than that of any other processor of the era. Thus, the 1802 is said to be the first radiation-hardened microprocessor.

The RCA 1802 had what is called a static design, meaning that the clock frequency could be made arbitrarily low, even to 0 Hz, a total stop condition. This let the Voyager/Viking/Galileo spacecraft use minimum electric power for long uneventful stretches of a voyage. Timers and/or sensors would awaken/speed up the processor in time for important tasks, such as navigation updates, attitude control, data acquisition, and radio communication.

Multicore designs

A different approach to improving a computer's performance is to add extra processors, as in symmetric multiprocessing designs which have been popular in servers and workstations since the early 1990s. Keeping up with Moore's Law is becoming increasingly challenging as chip-making technologies approach the physical limits of the technology.

In response, the microprocessor manufacturers look for other ways to improve performance, in order to hold on to the momentum of constant upgrades in the market.

A multi-core processor is simply a single chip containing more than one microprocessor core, effectively multiplying the potential performance by the number of cores (as long as the operating system and software are designed to take advantage of more than one processor; see the sketch below). Some components, such as the bus interface and second-level cache, may be shared between cores. Because the cores are physically very close, they can interface at much faster clock speeds than discrete multiprocessor systems, improving overall system performance.
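The parenthetical caveat is essentially Amdahl's law: the serial fraction of a workload caps the speedup extra cores can deliver. A quick sketch; the 10% serial fraction is an assumed example:

    # Amdahl's law: speedup = 1 / (serial + (1 - serial) / cores).
    serial = 0.10   # assumed fraction of work that cannot be parallelized

    for cores in (1, 2, 4, 8):
        speedup = 1 / (serial + (1 - serial) / cores)
        print(f"{cores} cores: {speedup:.2f}x")
    # 2 cores: 1.82x, 4 cores: 3.08x, 8 cores: 4.71x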

In 2005, the first mass-market dual-core processors were announced and as of 2007 dual-core processors are widely used in servers, workstations and PCs while quad-core processors are now available for high-end applications in both the home and professional environments.

Sun Microsystems has released the Niagara and Niagara 2 chips, both of which feature an eight-core design. The Niagara 2 supports more threads and operates at 1.6 GHz.

High-end Intel Xeon processors on the LGA771 socket are DP (dual processor) capable, as is the newer Intel Core 2 Extreme QX9775, which is used in Apple's Mac Pro and on the Intel Skulltrail motherboard.

Printer

Printer (computing)

In computing, a printer is a peripheral which produces a hard copy (permanent human-readable text and/or graphics) of documents stored in electronic form, usually on physical print media such as paper or transparencies. Many printers are primarily used as local peripherals, and are attached by a printer cable or, in most newer printers, a USB cable to a computer which serves as a document source. Some printers, commonly known as network printers, have built-in network interfaces (typically wireless or Ethernet), and can serve as a hardcopy device for any user on the network. Individual printers are often designed to support both local and network connected users at the same time.

In addition, a few modern printers can directly interface to electronic media such as memory sticks or memory cards, or to image capture devices such as digital cameras and scanners; some printers are combined with a scanner and/or fax machine in a single unit. Printers that include non-printing features are sometimes called multifunction printers (MFP), multi-function devices (MFD), or all-in-one (AIO) printers.

A printer which is combined with a scanner can function as a kind of photocopier if so designed. Most MFPs include printing, scanning, and copying among their features. Printers are designed for low-volume, short-turnaround print jobs, requiring virtually no setup time to achieve a hard copy of a given document. However, printers are generally slow devices (30 pages per minute is considered fast, and many consumer printers are far slower than that), and the cost per page is relatively high. In contrast, the printing press, which serves much the same function, is designed and optimized for high-volume print jobs such as newspaper print runs; printing presses are capable of hundreds of pages per minute or more, and have an incremental cost per page which is a fraction of that of printers.

The printing press remains the machine of choice for high-volume, professional publishing. However, as printers have improved in quality and performance, many jobs which used to be done by professional print shops are now done by users on local printers; see desktop publishing. The world's first computer printer was a 19th century mechanically driven apparatus invented by Charles Babbage for his Difference Engine.

Printing technology

Printers are routinely classified by the underlying print technology they employ; numerous such technologies have been developed over the years.

The choice of print engine has a substantial effect on what jobs a printer is suitable for, as different technologies differ in image and text quality, print speed, cost, and noise; in addition, some technologies are inappropriate for certain types of physical media (such as carbon paper or transparencies).

Keyboard


Keyboard (computing)

In computing, a keyboard is an input device, partially modelled after the typewriter keyboard, which uses an arrangement of buttons, or keys, that act as electronic switches. A keyboard typically has characters engraved or printed on the keys, and each press of a key typically corresponds to a single written symbol. However, producing some symbols requires pressing and holding several keys simultaneously or in sequence. While most keyboard keys produce letters, numbers or signs (characters), other keys or simultaneous key presses can produce actions or computer commands.

In normal usage, the keyboard is used to type text or numbers into a word processor, text editor, or other program. In a modern computer the interpretation of keypresses is generally left to the software. A computer keyboard distinguishes each physical key from every other and reports all keypresses to the controlling software. Keyboards are also used for computer gaming, either with regular keyboards or special gaming keyboards which can expedite frequently used keystroke combinations. A keyboard is also used to give commands to the operating system of a computer, such as the Control-Alt-Delete combination, which brings up a task window or shuts down the machine.



Standard keyboards

Standard keyboards such as the 104-key Windows keyboards include alphabetic characters, punctuation symbols, numbers, and a variety of function keys. The internationally common 102/105-key keyboards have a smaller left Shift key and an additional key with some more symbols between that and the letter to its right (usually Z or Y).[1]

Keyboards with extra keys, such as multimedia keyboards, have special keys for accessing music, the web, and other oft-used programs, as well as a mute button, volume buttons or knob, and a standby (sleep) button. Gaming keyboards have extra function keys which can be programmed with keystroke macros; for example, Ctrl+Shift+Y could be a keystroke that is frequently used in a certain computer game. Shortcuts marked on color-coded keys are used for some software applications and for specialized uses including word processing, video editing, graphic design, and audio editing.

Multimedia keyboards have special keys for accessing music, websites, and computer programs.

Smaller keyboards have been introduced for laptops, PDAs, cellphones, or users who have a limited workspace. The size of a standard keyboard is dictated by the practical consideration that the keys must be large enough to be easily pressed by fingers. To reduce the size of the keyboard, the numeric keypad to the right of the alphabetic keyboard can be removed, or the size of the keys can be reduced, which makes it harder to enter text. Another way to reduce the size of the keyboard is to reduce the number of keys and use a chording keyer, i.e. pressing several keys simultaneously; for example, the GKOS keyboard has been designed for small wireless devices. Other two-handed alternatives more akin to a game controller, such as the AlphaGrip, are also used to input data and text. Yet another approach is to use smaller buttons and pack them closer together; such keyboards, often called "thumbboards" (for thumbing), are used in some personal digital assistants such as the Treo and BlackBerry and some Ultra-Mobile PCs such as the OQO.

Keyboards on laptops such as this Sony VAIO usually have a shorter travel distance for the keystroke and a reduced set of keys.

Numeric keyboards contain only numbers, mathematical symbols for addition, subtraction, multiplication, and division, a decimal point, and several function keys (e.g. End, Delete, etc.). They are often used to facilitate data entry with laptops or compact keyboards that lack a numeric keypad.

Non-standard or special-use types

A keyset or chorded keyboard (also called a chord keyboard or chording keyboard) is a computer input device that allows the user to enter characters or commands formed by pressing several keys together, like playing a "chord" on a piano. The large number of combinations available from a small number of keys allows text or commands to be entered with one hand, leaving the other hand free to do something else. A secondary advantage is that it can be built into a device (such as a pocket-sized computer) that is too small to contain a normal sized keyboard. A chorded keyboard designed to be used while held in the hand is called a keyer.

The Microwriter MW4 (circa 1980) uses a chording keyboard in which several key presses are needed for each letter.

Virtual keyboards, such as the I-Tech Virtual Laser Keyboard, project an image of a full-size keyboard onto a surface. Sensors in the projection unit identify which key is being "pressed" and relay the signals to a computer or personal digital assistant. There is also a software virtual keyboard, the On-Screen Keyboard, for use on Windows.

Touchscreens, such as those on the iPhone and the OLPC laptop, can be used as keyboards. (The OLPC initiative's second computer will be effectively two tablet touchscreens hinged together like a book. It can be used as a convertible tablet PC where the keyboard is one half-screen (one side of the book) which turns into a touchscreen virtual keyboard.)

A foldable keyboard.

Foldable keyboards are made of soft plastic which can be rolled or folded over for travel. When in use, the keyboard can conform to uneven surfaces, and it is more resistant to liquids than a standard keyboard.


Technology

Key switches

"Dome-switch" keyboards (sometimes incorrectly referred to as a membrane keyboards) are the most common type in use in the 2000s. When a key is pressed, it pushes down on a rubber dome sitting beneath the key. A conductive contact on the underside of the dome touches (and hence connects) a pair of conductive lines on the circuit below. This bridges the gap between them and allows electric current to flow (the open circuit is closed). A scanning signal is emitted by the chip along the pairs of lines in the matrix circuit which connects to all the keys. When the signal in one pair becomes different, the chip generates a "make code" corresponding to the key connected to that pair of lines. Keycaps are also required for most types of keyboards; while modern keycaps are typically surface-marked, they can also be 2-shot molded, or engraved, or they can be made of transparent material with printed paper inserts

Keys on older IBM keyboards were made with a "buckling spring" mechanism, in which a coil spring under the key buckles under pressure from the user's finger, pressing a rubber dome whose inside is coated with conductive graphite, which connects two leads below, completing a circuit. This produces a clicking sound and gives physical feedback to the typist, indicating that the key has been depressed.[2][3] When a key is pressed and the circuit is completed, the code generated is sent to the computer either via a keyboard cable (using on-off electrical pulses to represent bits) or over a wireless connection.

Mouse


Mouse (computing)


In computing, a mouse (plural mice, mouse devices, or mouses) is a pointing device that functions by detecting two-dimensional motion relative to its supporting surface. Physically, a mouse consists of a small case, held under one of the user's hands, with one or more buttons. It sometimes features other elements, such as "wheels", which allow the user to perform various system-dependent operations, or extra buttons or features can add more control or dimensional input. The mouse's motion typically translates into the motion of a pointer on a display, which allows for fine control of a Graphical User Interface.

The name mouse, which originated at the Stanford Research Institute, derives from the resemblance of early models (which had a cord attached to the rear part of the device, suggesting the idea of a tail) to the common mouse.[1]

The first marketed integrated mouse — shipped as a part of a computer and intended for personal computer navigation — came with the Xerox 8010 Star Information System in 1981.

Modern optical mice


Modern surface-independent optical mice work by using an optoelectronic sensor to take successive pictures of the surface on which the mouse operates. As computing power grew cheaper, it became possible to embed more powerful special-purpose image-processing chips in the mouse itself. This advance enabled the mouse to detect relative motion on a wide variety of surfaces, translating the movement of the mouse into the movement of the pointer and eliminating the need for a special mouse-pad. This paved the way for widespread adoption of optical mice. Optical mice illuminate the surface that they track over using an LED or a laser diode. Changes between one frame and the next are processed by the image-processing part of the chip and translated into movement on the two axes using an optical flow estimation algorithm. For example, the Avago Technologies ADNS-2610 optical mouse sensor processes 1512 frames per second: each frame is a rectangular array of 18×18 pixels, and each pixel can sense 64 different levels of gray.[23]
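A minimal sketch of that frame-to-frame motion estimate. Real sensors use dedicated optical-flow hardware; the brute-force block matching and tiny synthetic frames below are illustrative stand-ins:

    # Estimate (dx, dy) motion between two sensor frames by finding the
    # shift that minimizes mean squared difference over the overlap.
    def estimate_motion(prev, curr, max_shift=2):
        h, w = len(prev), len(prev[0])
        best, best_err = (0, 0), float("inf")
        for dy in range(-max_shift, max_shift + 1):
            for dx in range(-max_shift, max_shift + 1):
                err = n = 0
                for y in range(h):
                    for x in range(w):
                        sy, sx = y + dy, x + dx
                        if 0 <= sy < h and 0 <= sx < w:
                            err += (prev[y][x] - curr[sy][sx]) ** 2
                            n += 1
                if err / n < best_err:
                    best_err, best = err / n, (dx, dy)
        return best

    prev = [[0, 0, 0, 0, 0],
            [0, 9, 9, 0, 0],
            [0, 9, 9, 0, 0],
            [0, 0, 0, 0, 0],
            [0, 0, 0, 0, 0]]
    curr = [[0, 0, 0, 0, 0],   # same texture shifted one pixel
            [0, 0, 0, 0, 0],   # right and one pixel down
            [0, 0, 9, 9, 0],
            [0, 0, 9, 9, 0],
            [0, 0, 0, 0, 0]]
    print(estimate_motion(prev, curr))   # -> (1, 1)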

Infrared optical mice

Some newer optical mice, including some from Logitech's LX series, use an infrared sensor instead of a light-emitting diode. This saves power and can be more accurate.

Laser mice

The laser mouse uses an infrared laser diode instead of an LED to illuminate the surface beneath its sensor. As early as 1998, Sun Microsystems provided a laser mouse with their Sun SPARCstation servers and workstations.[24] However, laser mice did not enter the mainstream market until 2004, when Logitech, in partnership with Agilent Technologies, introduced its MX 1000 laser mouse.[25] This mouse uses a small infrared laser instead of an LED, which significantly increases the resolution of the image taken by the mouse. The laser gives around 20 times greater sensitivity to the surface features used for navigation compared with conventional optical mice, via interference effects. While the laser slightly increases sensitivity and resolution, the main advantage comes from power usage.

3D mice

Also known as flying mice, bats, or wands, these devices generally function through ultrasound. Probably the best known example would be 3DConnexion/Logitech's SpaceMouse from the early 1990s.

http://www.chinawholesalegift.com/pic/Electrical-Gifts/Computer-Hardware-Accessories/Mouse/Optical-Mouse/Fabulous-mini-optical-mouse-21342874624.jpg

In the late 1990s Kantek introduced the 3D RingMouse. This wireless mouse was worn on a ring around a finger, which enabled the thumb to access three buttons. The mouse was tracked in three dimensions by a base station.[30] Despite a certain appeal, it was finally discontinued because it did not provide sufficient resolution.

A recent consumer 3D pointing device is the Wii Remote. While primarily a motion-sensing device (that is, it can determine its orientation and direction of movement), the Wii Remote can also detect its spatial position by comparing the distance and position of the lights from the IR emitter using its integrated IR camera (since the Nunchuk lacks a camera, it can only report its current heading and orientation). The obvious drawback of this approach is that it can only produce spatial coordinates while its camera can see the sensor bar.

In February 2008, at the Game Developers Conference (GDC), a company called Motion4U introduced a 3D mouse add-on called "OptiBurst" for Autodesk's Maya application. The mouse allows users to work in true 3D with 6 degrees of freedom. The primary advantage of this system is speed of development with organic (natural) movement.


Apple Desktop Bus

Apple Macintosh Plus mice, 1986.

In 1986 Apple first implemented the Apple Desktop Bus, allowing the daisy-chaining of up to 16 devices, including arbitrarily many mice and other devices, on the same bus with no configuration whatsoever. Featuring only a single data pin, the bus used a purely polled approach to computer/mouse communications and survived as the standard on mainstream models (including a number of non-Apple workstations) until 1998, when the iMac began the industry-wide switch to USB. Beginning with the "Bronze Keyboard" PowerBook G3 in May 1999, Apple dropped the external ADB port in favor of USB, but retained an internal ADB connection in the PowerBook G4 for communication with its built-in keyboard and trackpad until early 2005.

Tactile mice

In 2000, Logitech introduced the "tactile mouse", which contained a small actuator that made the mouse vibrate. Such a mouse can augment user-interfaces with haptic feedback, such as giving feedback when crossing a window boundary. To surf by touch requires the user to be able to feel depth or hardness; this ability was realized with the first electrorheological tactile mice[36] but never marketed.

Applications of mice in user-interfaces

Computer users usually utilize a mouse to control the motion of a cursor in two dimensions in a graphical user interface. Clicking or hovering can select files, programs or actions from a list of names, or (in graphical interfaces) through pictures called "icons" and other elements. For example, a text file might be represented by a picture of a paper notebook, and clicking while the pointer hovers over this icon might cause a text editing program to open the file in a window. (See also point-and-click.)

Users can also employ mice gesturally, meaning that a stylized motion of the mouse cursor itself, called a "gesture", can issue a command or map to a specific action. For example, in a drawing program, moving the mouse in a rapid "x" motion over a shape might delete the shape.

Gestural interfaces occur more rarely than plain pointing-and-clicking, and people often find them more difficult to use because they require finer motor control from the user. However, a few gestural conventions have become widespread, including the drag-and-drop gesture, in which:

  1. The user presses the mouse button while the mouse cursor hovers over an interface object
  2. The user moves the cursor to a different location while holding the button down
  3. The user releases the mouse button

For example, a user might drag-and-drop a picture representing a file onto a picture of a trash-can, thus instructing the system to delete the file.

Other uses of the mouse's input occur commonly in special application-domains. In interactive three-dimensional graphics, the mouse's motion often translates directly into changes in the virtual camera's orientation. For example, in the first-person shooter genre of games (see below), players usually employ the mouse to control the direction in which the virtual player's "head" faces: moving the mouse up will cause the player to look up, revealing the view above the player's head.

When mice have more than one button, software may assign different functions to each button. Often, the primary (leftmost in a right-handed configuration) button on the mouse will select items, and the secondary (rightmost in a right-handed) button will bring up a menu of alternative actions applicable to that item. For example, on platforms with more than one button, the Mozilla web browser will follow a link in response to a primary button click, will bring up a contextual menu of alternative actions for that link in response to a secondary-button click, and will often open the link in a new tab or window in response to a click with the tertiary (middle) mouse button.
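A minimal sketch of such per-button dispatch; the button names and actions are illustrative, not any particular toolkit's API:

    # Dispatch mouse-button events to actions, as in the browser example
    # above. Button names and actions are illustrative assumptions.
    ACTIONS = {
        "primary":   lambda link: f"follow {link}",
        "secondary": lambda link: f"context menu for {link}",
        "middle":    lambda link: f"open {link} in a new tab",
    }

    def on_click(button, target):
        return ACTIONS.get(button, lambda _: "ignored")(target)

    print(on_click("middle", "http://example.com"))
    # -> open http://example.com in a new tab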

Hard disk


Hard disk drive

A hard disk drive (HDD), commonly referred to as a hard drive, hard disk, or fixed disk drive,[1] is a non-volatile storage device which stores digitally encoded data on rapidly rotating platters with magnetic surfaces. Strictly speaking, "drive" refers to a device distinct from its medium, such as a tape drive and its tape, or a floppy disk drive and its floppy disk. Early HDDs had removable media; however, an HDD today is typically a sealed unit (except for a filtered vent hole to equalize air pressure) with fixed media.[2]

Originally, the term "hard" was temporary slang, substituting "hard" for "rigid", before these drives had an established and universally agreed-upon name. An HDD is a rigid-disk drive, although it is rarely referred to as such. By way of comparison, a floppy drive (more formally, a diskette drive) has a disc that is flexible. At one time, IBM's internal company term for an HDD was "file".

HDDs (introduced in 1956 as data storage for an IBM accounting computer[3]) were originally developed for use with general purpose computers; see History of hard disk drives.

In the 21st century, applications for HDDs have expanded to include digital video recorders, digital audio players, personal digital assistants, digital cameras and video game consoles. In 2005 the first mobile phones to include HDDs were introduced by Samsung and Nokia.


The need for large-scale, reliable storage, independent of a particular device, led to the introduction of configurations such as RAID arrays, network attached storage (NAS) systems and storage area network (SAN) systems that provide efficient and reliable access to large volumes of data. Note that although not immediately recognizable as a computer, all the aforementioned applications are actually embedded computing devices of some sort.

Capacity and access speed

PC hard disk drive capacity (in GB). The vertical axis is logarithmic, so the fit line corresponds to exponential growth.

Using rigid disks and sealing the unit allows much tighter tolerances than in a floppy disk drive. Consequently, hard disk drives can store much more data than floppy disk drives and can access and transmit it faster. As of January 2008:

  • A typical desktop HDD might store between 120 and 1000 GB of data (based on US market data[5]), rotate at 5,400 to 10,000 rpm and have a media transfer rate of 1 Gbit/s or higher. (1 GB = 10^9 B; 1 Gbit/s = 10^9 bit/s)
  • As of July 2008, the highest capacity HDDs are 1.5 TB[6].
  • The fastest "enterprise" HDDs spin at 10,000 or 15,000 rpm, and can achieve sequential media transfer speeds above 1.6 Gbit/s[7] and sustained transfer rates up to 125 MB/s.[7] Drives running at 10,000 or 15,000 rpm use smaller platters because of air drag and therefore generally have lower capacity than the highest-capacity desktop drives. Gamers may choose 10,000 rpm drives for gaming PCs because the fast transfer rate substantially decreases game load times (and startup times); the sketch after this list shows what such rates mean in practice.
  • Mobile, i.e., laptop HDDs, which are physically smaller than their desktop and enterprise counterparts, tend to be slower and have less capacity. A typical mobile HDD spins at 5,400 rpm, with 7,200 rpm models available for a slight price premium. Because of the smaller disks, mobile HDDs generally have lower capacity than the highest capacity desktop drives.
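
To make those rates concrete, here is the full-drive sequential read time they imply, sketched in Python. The 125 MB/s figure is the enterprise rate quoted above; the 80 MB/s desktop figure is an illustrative assumption:

    # Full sequential read time at a given sustained rate. Capacities use
    # decimal units (1 GB = 10^9 bytes), as elsewhere in this section.
    drives = [
        ("1.5 TB at 80 MB/s (assumed desktop rate)", 1500e9, 80e6),
        ("1.5 TB at 125 MB/s (enterprise figure)",   1500e9, 125e6),
    ]

    for label, capacity, rate in drives:
        hours = capacity / rate / 3600
        print(f"{label}: ~{hours:.1f} hours")
    # ~5.2 hours and ~3.3 hours respectively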