What is UI
In information technology, the user interface (UI) is everything designed into an information device with which a person may interact. This can include display screens, keyboards, a mouse and the appearance of a desktop. It is also the way through which a user interacts with an application or a website: the means by which the user and a computer system interact, in particular through the use of input devices and software.
An interface is a set of commands or menus through which a user communicates with a program.
Examples of UI:
1. The computer mouse
Before the mouse, if you wanted to talk to a computer, you had to enter commands through a keyboard.
All that changed in 1964, when engineer and inventor Douglas Engelbart of SRI International pieced together a wooden shell, a circuit board, a couple of metal wheels and some cord to make
interacting with a computer as simple as a point and a click.
2. The remote control
As long ago as 1898, Nikola Tesla demonstrated the world's first radio-controlled boat, presenting a method for controlling vehicles from a distance.
While Tesla accurately predicted his tele-automation would be used for war, he didn't predict the role the remote control would play in our lives — nor the unbelievable clunkiness of the average TV remote.
3. The search engine
Sir Tim Berners-Lee used to index the World Wide Web — by hand. Of course, as it grew to include millions of links, it became clear users would need a better way.
But while early search engines were embedded in crowded portals full of news stories and links, Google stripped their search page of everything but the search bar and a couple of buttons.
Their user interface helped make universal search the primary function of the web.
4. The ATM
“On Sept. 2nd our bank will open at 9:00 and never close again.” – Bank ad announcing the first ATM in 1969.
ATMs gave customers an interface to confirm their identity, interact with the bank's records and then withdraw their own cash. They gave banks the ability to serve their customers out-of-hours
– a huge breakthrough in self-service retail.
5. Electronic Toll Collection (ETC)
Slow-downs, long lines, finding the right change, making sure the driver has the right receipt – paying and collecting highway tolls is, at the most basic level, an interface problem.
ETC (the use of transponders in cars to pay tolls electronically as a car passes tolling booths) dramatically improves the flow of traffic and reduces fuel use by limiting the need to stop.
6. Predictive text
The smaller the phone, the harder it is to type. This was true of chunky old Nokias and it's true of sexy, new iPhones.
Predictive text systems like T9 allowed us to spend less time fumbling and more time communicating (a minimal T9-style sketch follows this list). Without them, it's hard to imagine mobile computing gaining the kind of traction it has.
7. The speedometer and the iPod wheel
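To make the predictive-text idea above concrete, here is a minimal T9-style sketch in Python. The keypad mapping is the classic phone layout; the tiny word list and the absence of any frequency ranking are simplifications for the example.

# Minimal T9-style predictive text sketch (illustrative only; the word list
# and lack of ranking are simplifications).
from collections import defaultdict

KEYPAD = {
    "2": "abc", "3": "def", "4": "ghi", "5": "jkl",
    "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz",
}
LETTER_TO_DIGIT = {ch: d for d, letters in KEYPAD.items() for ch in letters}

def to_keys(word):
    """Translate a word into the digit sequence a user would press."""
    return "".join(LETTER_TO_DIGIT[ch] for ch in word.lower())

def build_index(words):
    """Group dictionary words by their digit sequence."""
    index = defaultdict(list)
    for w in words:
        index[to_keys(w)].append(w)
    return index

index = build_index(["home", "good", "gone", "hood", "hello"])
print(index[to_keys("home")])  # ['home', 'good', 'gone', 'hood'] all share keys 4663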
Evolution of UI
The user interface evolved with the introduction of the command line interface, which first appeared as a nearly blank display screen with a line for user input. Users relied on a keyboard and a set of commands to navigate exchanges of information with the computer.
Finally, the graphical user interface (GUI) arrived, originating mainly in Xerox's Palo Alto Research Center, adopted and enhanced by Apple Computer and effectively standardized by Microsoft in its Windows operating systems. Elements of a GUI include such things as windows, pull-down menus, buttons, scroll bars and icons. With the increasing use of multimedia as part of the GUI, sound, voice, motion video and virtual reality are increasingly becoming part of the GUI for many applications.
The emerging popularity of mobile applications has also affected UI, leading to something called mobile UI. Mobile UI is specifically concerned with creating usable, interactive interfaces on the smaller screens of smartphones and tablets, and with improving special features like touch controls.
UI-PAST AND PRESENT
First interface:
Developed by Steve Russell in 1962. He developed the first computer game, Spacewar!, in which the interaction was only through the keyboard – for example, rotating the rockets left and right and issuing firing commands.
Mouse:
Douglas Engelbart introduced the original mouse over 40 years ago. Housed in a wooden box twice as high as today's mice and with three buttons on top, it moved with the help of two wheels on its underside rather than a rubber trackball. The wheels—one for the horizontal and another for the vertical—sat at right angles. When the mouse was moved, the vertical wheel rolled along the surface while the horizontal wheel slid sideways.
The name "mouse", which originated at the Stanford Research Institute, derives from the resemblance of early models to the common mouse: they had a cord attached to the rear of the device, suggesting a tail.
Command Line Interface
The user provides input by typing a command string with the computer keyboard, and the system provides output by printing text on the computer monitor. Command line interfaces are the oldest of the interfaces discussed here: the computer responds to commands typed by the operator. This type of interface has the drawback that it requires the operator to remember a range of different commands, and it is not ideal for novice users.
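As a minimal illustration of this read-a-command, print-a-response loop, here is a toy shell in Python; the command names ("time", "echo", "quit") are invented for the example.

# Minimal command-line loop: read a typed command, respond with printed text.
import datetime

def run_shell():
    while True:
        line = input("> ").strip()          # input arrives from the keyboard
        cmd, _, arg = line.partition(" ")
        if cmd == "quit":
            break
        elif cmd == "time":
            print(datetime.datetime.now())  # output is printed to the monitor
        elif cmd == "echo":
            print(arg)
        else:
            # The drawback noted above: the user must already know the commands.
            print(f"Unknown command: {cmd!r}")

if __name__ == "__main__":
    run_shell()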
Graphical User Interface
Graphical user interfaces (GUI) are sometimes also referred to as WIMP because they use Windows, Icons, Menus and Pointers. Operators use a pointing device such as a mouse, touchpad or trackball to control a pointer on the screen which then interacts with other on-screen elements. A GUI allows the user to interact with devices through graphical icons and visual indicators such as secondary notation. The term was created in the 1970s to distinguish graphical interfaces from text-based ones, such as command line interfaces. However, today nearly all digital interfaces are GUIs. The first commercially available GUI was developed at Xerox's Palo Alto Research Center (PARC) and used by the Xerox 8010 Information System, which was released in 1981. After Steve Jobs saw the interface during a tour at Xerox, he had his team at Apple develop an operating system with a similar design. Apple's GUI-based OS was included with the Macintosh, which was released in 1984. Microsoft released their first GUI-based OS, Windows 1.0, in 1985.
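A minimal WIMP-style sketch using Python's standard tkinter toolkit can make the window/menu/pointer interplay concrete; the window title and widget labels are invented for the example.

# A window, a menu, and a button driven by pointer events (WIMP in miniature).
import tkinter as tk

root = tk.Tk()
root.title("WIMP demo")

# Menu: a pull-down menu the pointer can open.
menubar = tk.Menu(root)
filemenu = tk.Menu(menubar, tearoff=0)
filemenu.add_command(label="Quit", command=root.destroy)
menubar.add_cascade(label="File", menu=filemenu)
root.config(menu=menubar)

# Button: clicked with the pointer rather than invoked by a typed command.
label = tk.Label(root, text="Click the button")
label.pack(padx=20, pady=10)
tk.Button(root, text="Press me",
          command=lambda: label.config(text="Clicked!")).pack(pady=10)

root.mainloop()  # the event loop waits for pointer and keyboard events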
Menu Driven
A menu driven interface is commonly used on cash machines (also known as automated teller machines, or ATMs), ticket machines and information kiosks, for example in a museum. They provide a simple and easy-to-use interface comprised of a series of menus and sub-menus which the user accesses by pressing buttons, often on a touch-screen device. For someone with knowledge of UML modelling, such a machine makes a good exercise in designing an architecture.
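A toy ATM-style menu loop in Python shows the basic pattern: the interface offers numbered options rather than free-form commands. The menu labels and amounts are invented for the example.

# Menu-driven interaction: the user only ever picks from the options shown.
def atm_menu():
    balance = 500.0
    while True:
        print("\n1) Check balance\n2) Withdraw cash\n3) Exit")
        choice = input("Select an option: ").strip()
        if choice == "1":
            print(f"Balance: ${balance:.2f}")
        elif choice == "2":
            amount = float(input("Amount: "))
            if amount <= balance:
                balance -= amount
                print(f"Dispensing ${amount:.2f}")
            else:
                print("Insufficient funds")
        elif choice == "3":
            break
        else:
            print("Invalid option")  # the menu constrains what the user can do

if __name__ == "__main__":
    atm_menu()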
Form Based
A type of user interface used, for example, on the internet, to organize questions or options for the user so that they resemble a traditional paper form to be filled out by pointing to the fields
and typing text, or by choosing from a list.
This is a method of enabling you to interact with an application.
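In the same spirit, here is a small form-based sketch in Python: named fields filled in one at a time, with one field chosen from a fixed list, much as on a paper or web form. The field names and options are invented for the example.

# Form-based interaction: fill in fields, choose one value from a list.
FIELDS = ["name", "email"]
COUNTRIES = ["UK", "US", "Other"]

def fill_form():
    form = {}
    for field in FIELDS:
        form[field] = input(f"{field.capitalize()}: ").strip()
    # A "choose from a list" field, like a web form's drop-down.
    for i, c in enumerate(COUNTRIES, start=1):
        print(f"{i}) {c}")
    idx = int(input("Country: "))
    form["country"] = COUNTRIES[idx - 1]
    return form

if __name__ == "__main__":
    print(fill_form())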
Touch screen:
A touchscreen is an input device normally layered on the top of an electronic visual display of an information processing system. A user can give input or control the information processing system through simple or multi-touch gestures by touching the screen with a special stylus and/or one or more fingers.
1970s: Resistive touchscreens are invented. Although capacitive touchscreens were designed first, they were eclipsed in the early years of touch by resistive touchscreens. American inventor
Dr. G. Samuel Hurst developed resistive touchscreens almost accidentally.
Capacitive touchscreen displays rely on the electrical properties of the human body to detect when and where on a display the user touches. Because of this, capacitive displays can be
controlled with very light touches of a finger and generally cannot be used with a mechanical stylus or a gloved hand.
A resistive screen consists of a number of layers. When the screen is pressed, the outer layer is pushed onto the next layer — the technology senses that pressure is being applied and registers
input. Resistive touchscreens are versatile as they can be operated with a finger, a fingernail, a stylus or any other object.
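Once the hardware (resistive or capacitive) has reported touch coordinates, software interprets them as gestures. Here is a toy Python sketch that distinguishes a tap from a swipe; the pixel threshold and the event format are assumptions for illustration.

# Classify a touch as tap or swipe from its start and end coordinates.
import math

SWIPE_THRESHOLD = 30  # pixels; below this a touch counts as a tap

def classify_touch(down, up):
    """down/up are (x, y) points reported by the touch controller."""
    dx, dy = up[0] - down[0], up[1] - down[1]
    if math.hypot(dx, dy) < SWIPE_THRESHOLD:
        return "tap"
    return "swipe right" if abs(dx) > abs(dy) and dx > 0 else \
           "swipe left" if abs(dx) > abs(dy) else \
           "swipe down" if dy > 0 else "swipe up"

print(classify_touch((100, 100), (105, 102)))  # tap
print(classify_touch((100, 100), (220, 110)))  # swipe right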
Natural language
A natural language interface is a spoken interface where the user interacts with the computer by talking to it. Sometimes referred to as a conversational interface, this interface simulates having a conversation with a computer. Made famous by science fiction such as Star Trek, natural language systems are not yet advanced enough to be in widespread use. They are commonly used by telephone systems as an alternative to pressing numbered buttons: the user can speak their responses instead. This is the kind of interface used by Siri on the iPhone and by Cortana in Windows.
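A real conversational interface involves speech recognition and language understanding far beyond this, but a toy keyword-to-intent matcher in Python hints at the structure; the intents and keywords are invented for the example.

# Map a free-form utterance to an intent by simple keyword matching.
INTENTS = {
    "weather": ["weather", "rain", "forecast"],
    "timer":   ["timer", "remind", "alarm"],
    "call":    ["call", "phone", "dial"],
}

def match_intent(utterance):
    words = utterance.lower().split()
    for intent, keywords in INTENTS.items():
        if any(k in words for k in keywords):
            return intent
    return "unknown"

print(match_intent("What's the weather like today?"))  # weather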
UI-Others:
Voice Recognition
Speech recognition has always struggled to shake off a reputation for being sluggish, awkward,
and, all too often, inaccurate. The technology has only really taken off in specialist areas where a constrained and narrow subset of language is employed or where users are willing to invest the
time needed to train a system to recognize their voice.
This is now changing. "As computers become more powerful and parsing algorithms smarter, speech recognition will continue to improve," says Robert Weidmen, VP of marketing for Nuance, the firm that makes Dragon NaturallySpeaking. Last year, Google launched a voice search app for the iPhone, allowing users to search without pressing any buttons. Another iPhone application, called Vlingo, can be used to control the device in other ways: in addition to searching, a user can dictate text messages and e-mails, or
update his or her status on Facebook with a few simple commands. In the past, the challenge has been adding enough processing power for a cell phone. Now, however, faster data-transfer
speeds mean that it’s possible to use remote servers to seamlessly handle the number crunching required.
Since the ‘Put That There’ video presentation by Chris Schmandt in 1979, voice recognition has yet to meet with a revolutionary kind of success. The most recent hype over VUI has got to be
Siri, a personal assistant application which is incorporated into Apple’s iOS. It uses a natural language user interface for its voice recognition function to perform tasks exclusively on Apple
devices.
However, you also see it as the supporting act in other user interface technologies like Google Glass itself. Glass works basically like a smartphone, only you don't have to hold it up and
interact with it with your fingers. Instead it clings to you as eyewear and receives your commands via voice control.
The only thing that is lacking now in VUI is the reliability of recognizing what you say. Perfect that and it will be incorporated into the user interfaces of the future. At the rate that smartphone capabilities are expanding and developing now, it's just a matter of time before VUI takes centre stage as the primary form of human-computer interaction for any computing system.
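For experimentation today, the third-party SpeechRecognition package for Python wraps several recognition backends. The sketch below assumes that package (plus the PyAudio dependency for microphone access) and sends captured audio to the free Google Web Speech API.

# Minimal voice-input sketch: pip install SpeechRecognition pyaudio
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)  # calibrate for background noise
    print("Say something...")
    audio = recognizer.listen(source)

try:
    print("You said:", recognizer.recognize_google(audio))
except sr.UnknownValueError:
    print("Could not understand the audio")      # the reliability problem above
except sr.RequestError as e:
    print("Recognition service error:", e)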
Augmented Reality
An exciting emerging interface is augmented reality, an approach that fuses virtual information
with the real world.
The earliest augmented-reality interfaces required complex and bulky motion-sensing and computer-graphics equipment. More recently, cell phones featuring powerful processing chips and sensors have brought the technology within the reach of ordinary users.
Examples of mobile augmented reality include Nokia's Mobile Augmented Reality Application (MARA) and Wikitude, an application developed for Google's Android phone operating system. Both allow a user to view the real world through a camera screen with virtual annotations and tags overlaid on top. With MARA, this virtual data is harvested from the points of interest stored in the NavTeq satellite navigation application. Wikitude, as the name implies, gleans its data from Wikipedia.
These applications work by monitoring data from an arsenal of sensors: GPS receivers provide precise positioning information, digital compasses determine which way the device is pointing, and magnetometers or accelerometers calculate its orientation (a sketch of this sensor math follows below). A project called Nokia Image Space takes this a step further by allowing people to store experiences – images, video, sounds – in a particular place so that other people can retrieve them at the same spot.
We are already experiencing AR on some of our smartphone apps like Wikitude and Drodishooting, but they are pretty much at their elementary stages of development. AR is getting the biggest boost in awareness via Google's upcoming Project Glass, a pair of wearable eyeglasses that allows one to see virtual extensions of reality that you can interact with.
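To make the sensor fusion above concrete, here is a rough Python sketch: given a GPS fix and a compass heading, it decides whether a point of interest falls within the camera's view. The coordinates, field of view and flat-Earth approximation are all assumptions for illustration.

# Decide whether a point of interest (POI) should be drawn on screen.
import math

def bearing_to(lat, lon, poi_lat, poi_lon):
    """Approximate compass bearing (degrees) from the device to a POI."""
    dlat = poi_lat - lat
    dlon = (poi_lon - lon) * math.cos(math.radians(lat))  # shrink longitude
    return math.degrees(math.atan2(dlon, dlat)) % 360

def in_view(heading, bearing, fov=60):
    """True if the POI lies within the camera's horizontal field of view."""
    diff = (bearing - heading + 180) % 360 - 180
    return abs(diff) <= fov / 2

device = (51.5007, -0.1246)  # hypothetical GPS fix
poi = (51.5014, -0.1419)     # hypothetical point of interest
b = bearing_to(*device, *poi)
print(f"bearing {b:.0f} deg, visible: {in_view(heading=280, bearing=b)}")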
AR can be on anything other than glasses, so long as the device is able to interact with a real-world environment in real time. Picture a see-through device which you can hold over objects, buildings and your surroundings to give you useful information. For example, when you come across a foreign signboard, you can look through the glass device to see it translated for your easy reading.
AR can also make use of your natural environment to create mobile user interfaces that you can interact with, by projecting displays onto walls and even your own hands.
Check out how it is done with SixthSense, a prototype of a wearable gestural interface developed by MIT that utilizes AR.
UI-Future:
Gesture Sensing (present): Compact magnetometers, accelerometers, and gyroscopes make it possible to track the movement of a device. Using both Nintendo's Wii controller and the iPhone, users can control games and applications by physically maneuvering each device through the air. Similarly, it's possible to pause and play music on Nokia's 6600 cell phone simply by tapping the device twice (a toy tap detector is sketched at the end of this section).
New mobile applications are also starting to tap into this trend. Shut Up, for example, lets Nokia users silence their phone by simply turning it face down. Another app, called nAlertMe, uses a 3-D gestural passcode to prevent the device from being stolen. The handset will sound a shrill alarm if the user doesn't move the device in a predefined pattern in midair to switch it on.
The next step in gesture recognition is to enable computers to better recognize hand and body movements visually. Sony's EyeToy showed that simple movements can be recognized relatively easily. Tracking more complicated 3-D movements in irregular lighting is more difficult, however. Startups, including Xtr3D, based in Israel, and SoftKinetic, based in Belgium, are developing computer vision software that uses infrared for whole-body-sensing gaming applications.
Oblong, a startup based in Los Angeles, has developed a "spatial operating system" that recognizes gestural commands, provided the user wears a pair of special gloves.
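As a final illustration, here is a toy Python sketch of accelerometer-based tap detection of the kind the Nokia tap-to-pause feature implies; the sample data, threshold and timing window are invented for the example.

# Watch accelerometer magnitude for two sharp spikes close together in time.
import math

TAP_THRESHOLD = 15.0     # m/s^2; a spike above this counts as a tap
DOUBLE_TAP_WINDOW = 0.5  # seconds allowed between the two taps

def detect_double_tap(samples):
    """samples: list of (timestamp_s, (ax, ay, az)) accelerometer readings."""
    last_tap = None
    for t, (ax, ay, az) in samples:
        if math.sqrt(ax*ax + ay*ay + az*az) > TAP_THRESHOLD:
            if last_tap is not None and t - last_tap <= DOUBLE_TAP_WINDOW:
                return True
            last_tap = t
    return False

readings = [(0.00, (0, 0, 9.8)), (0.10, (2, 1, 18.0)),   # first tap
            (0.35, (0, 0, 9.9)), (0.42, (1, 3, 17.5))]   # second tap
print(detect_double_tap(readings))  # True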
1. Gesture Interfaces