
New tech moves beyond the mouse, keyboard and screen

John D. Sutter
  • Microsoft Kinect, a gaming system without remotes, debuts this week
  • The system, on sale in November, may help usher in a new era of computing
  • Researchers are trying to move us beyond the mouse, keyboard and screen
  • New technologies read gestures, listen to voices and track eye movements

(CNN) -- Goodbye computer mouse, keyboard and monitor.

Say hello to a new, simpler era of human-computer interaction -- this time, with no clunky hardware standing between you and digital information.

In this new world, there are options aplenty.

Instead of sliding a mouse across your desk, you could just point at whatever you'd like to select. Instead of pecking away at a keyboard, you could just say what you're thinking. And instead of glaring at a big screen all day, why not just project that information on the surface of your contact lenses?

None of this is science fiction. These ideas are here today, some of them in research labs and others already on store shelves.


And, thanks to a remote-control-free video gaming system called Kinect, these futuristic concepts for computer-human communication are about to get a lot more popular, technology researchers said in interviews this week.

Microsoft's Kinect, which hits stores November 4, lets players control games by moving their bodies. To make a digital soccer player kick, you just swing your leg.

It's an effort to make gaming more "natural." And that concept -- that we don't need intermediaries to help us talk to technology -- is likely to bleed into every aspect of electronics and computing in coming years.

"It's all fantastic, because it's a really useful educational opportunity for the world," said John Underkoffler, creator of a real gesture-based computing system that was featured in the 2002 movie "Minority Report."

"It's only been a few years that people have started to realize, 'Wait a minute! We're not stuck with the mouse and Windows-driven interface for the rest of time.' "

'Natural user interfaces'

A whole field of technological research has developed around the idea of "natural user interfaces," which try to let people communicate with machines in the same ways they would interact with other people and with the real world.

Kinect, which was demonstrated at a video gaming conference this week in Los Angeles, California, is a prime example of this, because people control the system with body gestures and by talking instead of clicking buttons or messing with joysticks.

Researchers are trying to expand this idea of "gesture-controlled" electronics into computing more generally.

Underkoffler, for example, developed a system called g-speak, which lets users shuffle through data sets and other information by waving their hands.

He says several large companies, including Boeing, already are using custom-built versions of the system, which range in price from $100,000 to millions of dollars.

Underkoffler expects consumer-level products to be widely available within five years.

History of 'natural' computing

These developments may seem to have plopped into reality out of sci-fi. But they've been a long time coming.

Touch-sensitive screens were some of the first natural interfaces.

They've been in research for decades, but they didn't become cheap and popular until 2007, when Apple released the touch-screen iPhone and Microsoft showed off a touch-screen coffee table called Microsoft Surface.

Now, as computer hardware becomes cheaper and people get more used to the idea that the mouse and keyboard aren't the only way to compute, researchers are pushing into areas like brain-controlled computing, eye-tracking software and voice-recognition technology, which is common on smartphones.

Bill Buxton, principal researcher at Microsoft Research, said that new ways for people to interact with computers have to be radically different to catch on.

People are used to touch screens and video cameras now, he said, so the transition into gesture computing makes more sense.

"The trend [of gesture computing] has been around for a while, but it's sort of hit a critical point where I think the game is changing," he said.

"The most significant thing that's changed about computing is who's doing what, where, with whom and for how much."

When simple is complicated

Despite the recent advances, a number of hurdles remain in the "natural" progression of electronics.

New methods of input sometimes come with new problems. Using arm and hand motions to control computers, for instance, can become tiring, said Beth Mynatt, director of the GVU Center at the Georgia Institute of Technology. And if such motions are taken to TV sets, as Toshiba has demonstrated, then there may be some unintended and hilarious consequences, she said.

Imagine changing a channel by waving your arms.

"Are they trying to change the channel or are they making rude gestures to the umpire?" a computer might think, she said. "[The computer is] going to get it wrong and nobody's going to want to do it. They're going to be much happier fumbling around with that remote."

Robert Wang, a PhD student at MIT who has developed a gesture-controlled computing system, said it's also difficult to use hand movements to manipulate digital objects because you can't feel them.

"It's going to be a little bit difficult to make a compelling sense of touch," he said. Good visual cues may have to suffice, he said.

Death of the mouse?

There's disagreement in the tech community about whether these new methods of human-computer interaction will completely kill the mouse, keyboard and computer monitor -- or if they'll just offer alternatives.

Generally, researchers think the mouse might be the first to go.

The keyboard, however unnatural, likely will be around longer because it is such an efficient way to write, and because people don't want to learn new systems, said Mynatt of Georgia Tech.

Buxton, from Microsoft Research, said these new options aren't competing with each other because they're all good at something and terrible for something else.

Using Kinect on an airplane would be "completely absurd," for example, he said, because you'd have to stand up at your seat and flail your arms around. Likewise, typing in a car is unsafe, and talking about private matters in public -- or even entering voice commands -- can be problematic.

"What I see is not that the gesture stuff is in competition with the mouse or with multitouch," he said. "What all of these things do is they're enhancing the palette of colors or the resources we can draw on, so that when we have something to do that involves technology, we can use the most appropriate means."

Screens may be the last holdovers of the desktop world.

Some researchers now are projecting the internet and information onto walls and even onto people's hands, in effect turning fingers into buttons of their own.

Pranav Mistry, a research assistant in the MIT Media Lab, said his goal is to get rid of computer hardware entirely -- so that people just interact directly with information.

"The hardware is becoming invisible," he said.

Ultimately, he said, the digital world will fold completely into the real one.
