Sunday, March 23, 2008

mouse


Mouse (computing)


In computing, a mouse (plural mice, mouse devices, or mouses) is a pointing device that functions by detecting two-dimensional motion relative to its supporting surface. Physically, a mouse consists of a small case, held under one of the user's hands, with one or more buttons. It sometimes features other elements, such as wheels, which allow the user to perform various system-dependent operations, and extra buttons or features that add further control or dimensional input. The mouse's motion typically translates into the motion of a pointer on a display, allowing fine control of a graphical user interface.

The name mouse, which originated at the Stanford Research Institute, derives from the resemblance of early models (which had a cord attached to the rear of the device, suggesting a tail) to the common mouse.[1]

The first marketed integrated mouse — shipped as a part of a computer and intended for personal computer navigation — came with the Xerox 8010 Star Information System in 1981.

Modern optical mice


Modern surface-independent optical mice work by using an optoelectronic sensor to take successive pictures of the surface on which the mouse operates. As computing power grew cheaper, it became possible to embed more powerful special-purpose image-processing chips in the mouse itself. This advance enabled the mouse to detect relative motion on a wide variety of surfaces, translating its movement into the movement of the pointer and eliminating the need for a special mouse pad, and it paved the way for the widespread adoption of optical mice. Optical mice illuminate the surface that they track over using an LED or a laser diode. Changes between one frame and the next are processed by the image-processing part of the chip and translated into movement on the two axes using an optical flow estimation algorithm. For example, the Avago Technologies ADNS-2610 optical mouse sensor processes 1512 frames per second: each frame consists of a rectangular array of 18×18 pixels, and each pixel can sense 64 different levels of gray.[23]
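The frame-comparison idea described above can be illustrated with a minimal block-matching sketch: given two successive sensor frames, find the small (dx, dy) shift that best aligns them. This is an illustrative simplification, not the ADNS-2610's actual algorithm; the frame size and search range are assumptions.

```python
# Hedged sketch: estimate mouse motion by finding the shift that best
# aligns two successive sensor frames (a block-matching flavor of
# optical flow). Search range and frame size are illustrative.

def estimate_shift(prev, curr, max_shift=2):
    """Return the (dx, dy) shift that minimizes the mean squared
    pixel difference between curr and prev over their overlap."""
    size = len(prev)
    best, best_err = (0, 0), float("inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            err = n = 0
            for y in range(size):
                for x in range(size):
                    sy, sx = y + dy, x + dx
                    if 0 <= sy < size and 0 <= sx < size:
                        d = curr[sy][sx] - prev[y][x]
                        err += d * d
                        n += 1
            err /= n
            if err < best_err:
                best, best_err = (dx, dy), err
    return best

# Toy example: an 8x8 "surface" pattern, then the same pattern
# shifted one pixel to the right (64 gray levels, as in the text).
prev = [[(x * 7 + y * 13) % 64 for x in range(8)] for y in range(8)]
curr = [[prev[y][x - 1] if x > 0 else 0 for x in range(8)] for y in range(8)]
print(estimate_shift(prev, curr))  # -> (1, 0)
```

A real sensor runs this kind of comparison continuously (1512 times per second in the ADNS-2610 example) and accumulates the shifts into pointer motion.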

Infrared optical mice

Some newer optical mice, including some in Logitech's LX series, use an infrared sensor instead of a light-emitting diode. This saves power and can be more accurate.

Laser mice

The laser mouse uses an infrared laser diode instead of an LED to illuminate the surface beneath its sensor. As early as 1998, Sun Microsystems provided a laser mouse with their Sun SPARCstation servers and workstations.[24] However, laser mice did not enter the mainstream market until 2004, when Logitech, in partnership with Agilent Technologies, introduced its MX 1000 laser mouse.[25] The laser significantly increases the resolution of the image taken by the mouse, yielding around 20 times greater sensitivity to the surface features used for navigation than conventional optical mice, via interference effects. While the laser slightly improves sensitivity and resolution, the main advantage comes from power usage.

3D mice

Also known as flying mice, bats, or wands, these devices generally function through ultrasound. Probably the best-known example is 3Dconnexion/Logitech's SpaceMouse from the early 1990s.


In the late 1990s Kantek introduced the 3D RingMouse. This wireless mouse was worn on a ring around a finger, which enabled the thumb to access three buttons. The mouse was tracked in three dimensions by a base station.[30] Despite a certain appeal, it was eventually discontinued because it did not provide sufficient resolution.

A recent consumer 3D pointing device is the Wii Remote. While primarily a motion-sensing device (that is, it can determine its orientation and direction of movement), the Wii Remote can also detect its spatial position by comparing the distance and position of the lights from the sensor bar's IR emitters using its integrated IR camera (since the Nunchuk lacks a camera, it can only report its current heading and orientation). The obvious drawback to this approach is that the remote can only produce spatial coordinates while its camera can see the sensor bar.
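The distance-sensing idea above can be sketched with a pinhole-camera model: two IR lights a known physical distance apart appear closer together in the image the farther away they are. The sensor-bar width and focal length below are illustrative assumptions, not Nintendo's actual calibration values.

```python
# Hedged sketch of Wii-Remote-style distance sensing: the apparent
# pixel separation of two IR points shrinks in proportion to the
# distance of the sensor bar. Constants are assumptions.

SENSOR_BAR_WIDTH_MM = 200.0   # assumed spacing between the IR clusters
FOCAL_LENGTH_PX = 1300.0      # assumed camera focal length, in pixels

def distance_to_bar(p1, p2):
    """Estimate distance (mm) to the bar from the pixel coordinates
    of its two IR points, using the pinhole relation
    distance = focal_length * real_width / pixel_width."""
    sep_px = ((p1[0] - p2[0]) ** 2 + (p1[1] - p2[1]) ** 2) ** 0.5
    return FOCAL_LENGTH_PX * SENSOR_BAR_WIDTH_MM / sep_px

# A nearer bar yields a wider pixel separation than a farther one:
near = distance_to_bar((400, 384), (624, 384))   # 224 px apart
far = distance_to_bar((479, 384), (544, 384))    # 65 px apart
print(near < far)  # -> True
```

Horizontal position can be recovered similarly from the midpoint of the two blobs; once the camera loses sight of the bar, no such estimate is possible, which is the drawback noted above.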

In February 2008, at the Game Developers Conference (GDC), a company called Motion4U introduced a 3D mouse add-on called "OptiBurst" for Autodesk's Maya application. The mouse allows users to work in true 3D with six degrees of freedom. The primary advantage of this system is speed of development with organic (natural) movement.


Apple Desktop Bus

Apple Macintosh Plus mice, 1986.

In 1986 Apple first implemented the Apple Desktop Bus, which allowed up to 16 devices, including arbitrarily many mice and other devices, to be daisy-chained on the same bus with no configuration whatsoever. Featuring only a single data pin, the bus used a purely polled approach to computer/mouse communications. It survived as the standard on mainstream models (including a number of non-Apple workstations) until 1998, when the iMac began the industry-wide switch to USB. Beginning with the "Bronze Keyboard" PowerBook G3 in May 1999, Apple dropped the external ADB port in favor of USB, but retained an internal ADB connection in the PowerBook G4 for communication with its built-in keyboard and trackpad until early 2005.
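The polled, single-data-line model described above can be sketched as a host that addresses one device at a time, with only the addressed device replying. This is a simplified illustration of the polling idea, not the actual ADB wire protocol; the event tuples are invented for the example (the addresses follow ADB's conventional device classes, 2 for keyboards and 3 for mice).

```python
# Hedged sketch of a polled shared bus: the host visits each device
# address in turn; a device answers only when polled, and only if it
# has an event queued. Greatly simplified relative to real ADB.

class AdbDevice:
    def __init__(self, address):
        self.address = address
        self.pending = []          # events waiting to be reported

    def talk(self):
        """Answer the host's poll with one queued event, if any."""
        return self.pending.pop(0) if self.pending else None

def host_poll(devices):
    """One polling pass over the bus, collecting (address, event) pairs."""
    events = []
    for dev in devices:
        reply = dev.talk()
        if reply is not None:
            events.append((dev.address, reply))
    return events

keyboard = AdbDevice(address=2)    # ADB's conventional keyboard address
mouse = AdbDevice(address=3)       # ADB's conventional mouse address
mouse.pending.append(("move", 5, -2))
print(host_poll([keyboard, mouse]))  # -> [(3, ('move', 5, -2))]
```

Because the host initiates every transfer, devices never contend for the single data line, which is what lets many devices share one unconfigured daisy chain.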

Tactile mice

In 2000, Logitech introduced the "tactile mouse", which contained a small actuator that made the mouse vibrate. Such a mouse can augment user interfaces with haptic feedback, such as a signal when the pointer crosses a window boundary. Surfing by touch requires the user to be able to feel depth or hardness; this ability was realized with the first electrorheological tactile mice[36] but never marketed.

Applications of mice in user-interfaces

Computer users usually employ a mouse to control the motion of a cursor in two dimensions in a graphical user interface. Clicking or hovering can select files, programs, or actions from a list of names, or (in graphical interfaces) through pictures called "icons" and other elements. For example, a text file might be represented by a picture of a paper notebook, and clicking while the pointer hovers over this icon might cause a text-editing program to open the file in a window. (See also point-and-click.)

Users can also employ mice gesturally, meaning that a stylized motion of the mouse cursor itself, called a "gesture", can issue a command or map to a specific action. For example, in a drawing program, moving the mouse in a rapid "x" motion over a shape might delete the shape.
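One common way such gestures are recognized is to reduce the pointer's path to a string of dominant stroke directions and look that string up in a table of known gestures. The following sketch is illustrative; the direction encoding, the gesture table, and the particular "RUL" encoding chosen for an "x"-like motion are all assumptions.

```python
# Hedged sketch of mouse-gesture recognition: collapse the cursor
# path into cardinal stroke directions, then match against a table.

def encode_strokes(points):
    """Turn a path of (x, y) points into a string of stroke
    directions: R, L, D, U (screen y grows downward)."""
    strokes = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dx, dy = x1 - x0, y1 - y0
        if abs(dx) >= abs(dy):
            d = "R" if dx > 0 else "L"
        else:
            d = "D" if dy > 0 else "U"
        if not strokes or strokes[-1] != d:   # collapse repeats
            strokes.append(d)
    return "".join(strokes)

GESTURES = {
    "RUL": "delete",        # one plausible encoding of an "x"-like motion
    "D": "scroll-down",
}

# Down-right stroke, back up, then down-left: roughly an "x" shape.
path = [(0, 0), (10, 10), (10, 0), (0, 10)]
print(GESTURES.get(encode_strokes(path), "no gesture"))  # -> delete
```

Real gesture engines add tolerance for wobble and timing, which is part of why, as noted below, gestural interfaces demand finer motor control than plain clicking.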

Gestural interfaces are rarer than plain pointing and clicking, and people often find them more difficult to use because they require finer motor control. However, a few gestural conventions have become widespread, including the drag-and-drop gesture, in which:

  1. The user presses the mouse button while the mouse cursor hovers over an interface object
  2. The user moves the cursor to a different location while holding the button down
  3. The user releases the mouse button

For example, a user might drag-and-drop a picture representing a file onto a picture of a trash-can, thus instructing the system to delete the file.
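The three steps above amount to a tiny state machine: press starts a potential drag, motion with the button held moves the payload, and release drops it on whatever lies under the cursor. The event names and the trash-can target below are illustrative, following the file-deletion example.

```python
# Hedged sketch of drag-and-drop as a state machine over the three
# steps in the text: press, move (button held), release.

class DragAndDrop:
    def __init__(self):
        self.dragging = None              # object under drag, if any

    def press(self, obj):                 # step 1: button down over an object
        self.dragging = obj

    def move(self, pos):                  # step 2: cursor moves, button held
        if self.dragging is not None:
            self.dragging["pos"] = pos

    def release(self, target):            # step 3: button up over a target
        obj, self.dragging = self.dragging, None
        if obj is not None and target == "trash-can":
            return "delete " + obj["name"]
        return "no-op"

dnd = DragAndDrop()
dnd.press({"name": "report.txt", "pos": (10, 10)})
dnd.move((300, 400))
print(dnd.release("trash-can"))   # -> delete report.txt
```

Releasing anywhere other than a valid drop target simply ends the drag with no effect, which is why the gesture feels forgiving despite spanning three separate events.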

Other uses of the mouse's input occur commonly in special application domains. In interactive three-dimensional graphics, the mouse's motion often translates directly into changes in the virtual camera's orientation. For example, in the first-person shooter genre of games (see below), players usually employ the mouse to control the direction in which the virtual player's "head" faces: moving the mouse up will cause the player to look up, revealing the view above the player's head.
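The mouse-look mapping just described can be sketched as follows: horizontal mouse motion turns the camera (yaw), vertical motion tilts it (pitch), and the pitch is clamped so the player cannot tilt past straight up or down. The sensitivity constant and clamp limits are assumptions for illustration.

```python
# Hedged sketch of first-person "mouse look": mouse deltas become
# camera yaw/pitch. Sensitivity and clamp values are illustrative.

SENSITIVITY = 0.1   # degrees of rotation per count of mouse movement

class Camera:
    def __init__(self):
        self.yaw = 0.0    # degrees, left/right heading
        self.pitch = 0.0  # degrees, up/down tilt

    def apply_mouse(self, dx, dy):
        self.yaw = (self.yaw + dx * SENSITIVITY) % 360.0
        # Moving the mouse up (negative dy in screen coordinates)
        # makes the player look up, as described in the text.
        self.pitch = max(-89.0, min(89.0, self.pitch - dy * SENSITIVITY))

cam = Camera()
cam.apply_mouse(200, -100)    # sweep right and push the mouse forward
print(cam.yaw, cam.pitch)     # -> 20.0 10.0
```

The clamp on pitch (and the wraparound on yaw) is the standard way such games keep the camera's "head" motion plausible no matter how far the mouse travels.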

When mice have more than one button, software may assign different functions to each button. Often, the primary (leftmost in a right-handed configuration) button selects items, and the secondary (rightmost in a right-handed configuration) button brings up a menu of alternative actions applicable to that item. For example, on platforms with more than one button, the Mozilla web browser follows a link in response to a primary-button click, brings up a contextual menu of alternative actions for that link in response to a secondary-button click, and often opens the link in a new tab or window in response to a click with the tertiary (middle) mouse button.
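Per-button behavior like this is typically implemented as a simple dispatch table from button identity to action. The sketch below follows the browser example above but is otherwise illustrative; the action strings are invented.

```python
# Hedged sketch of per-button dispatch: each button of a multi-button
# mouse maps to a different action on the item under the pointer.

ACTIONS = {
    "primary": lambda link: "follow " + link,
    "secondary": lambda link: "context menu for " + link,
    "middle": lambda link: "open " + link + " in new tab",
}

def on_click(button, link):
    """Look up the handler for the clicked button; ignore unknowns."""
    handler = ACTIONS.get(button)
    return handler(link) if handler else "ignored"

print(on_click("primary", "http://example.org"))  # -> follow http://example.org
print(on_click("middle", "http://example.org"))   # -> open http://example.org in new tab
```

Keeping the mapping in a table rather than hard-coding it is also what lets systems offer left-handed configurations: swapping primary and secondary is just a change of keys.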
