This article is from the In-Depth Report Personal Technology in 2011

Notions of Motion: Hackers Harness Microsoft's Kinect for Business and Pleasure Applications

Gamers and hackers could control the office as well as games with Microsoft's Kinect

SCIENTIFIC AMERICAN/LARRY GREENEMEIER

When Microsoft's Kinect for Xbox 360 debuted in November, it offered a revolutionary way to interact with gaming systems, using only bodily motion as the controller. Already a success in the home (Microsoft says it has sold eight million Kinect sensors so far), controllerless computer interfaces could soon move beyond play to help out in the workplace, for example by enabling manipulation of digital files using only gestures, à la the film Minority Report. Researchers suggest motion-controlled computing might one day make office drudgery as enjoyable as dancing and sports or as relaxing as yoga and tai chi.

Kinect is a motion-sensing, Webcam-like add-on for the Xbox 360 game console that uses an infrared scanner to build 3-D models of people as they move. This lets players control games by, for example, moving their arms in a swimming motion, shimmying their bodies or performing other so-called natural interactions.

The Kinect has quickly drawn the attention not just of gamers but of hackers as well. Less than a week after the device came out, programmers developed code to tap into its raw data. In doing so, they have created a thriving community testing the limits of what the device can do, such as helping mobile robots respond to gestural commands and creating interactive public art exhibits. These hackers have Microsoft's blessing. "We are perfectly comfortable with hobbyists taking advantage of that raw data to explore the exciting possibilities of Kinect for Xbox 360 for themselves," says Alex Kipman, Microsoft's director of incubation for the Xbox 360.
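What those hackers actually get from the device is not a ready-made 3-D model but a grid of raw 11-bit depth readings, one per pixel, which their code must convert into real-world distances. The sketch below, in Python, shows that first step, assuming the empirical calibration fit published by the OpenKinect hobbyist community; the exact constants vary between calibration efforts, so treat this as illustrative rather than definitive.

```python
# A minimal sketch of interpreting the Kinect's raw output, assuming
# the community-derived calibration from the OpenKinect project.
# The sensor reports an 11-bit disparity value per pixel (0-2047);
# 2047 marks "no reading" (too close, too far or in infrared shadow).

def raw_depth_to_meters(raw):
    """Approximate distance in meters for one raw 11-bit depth sample,
    or None if the sensor could not resolve the pixel."""
    if not 0 <= raw < 2047:
        return None
    # Empirical fit from the OpenKinect community's calibration work
    return 1.0 / (raw * -0.0030711016 + 3.3309495161)

# Larger raw values correspond to objects farther from the sensor:
print(raw_depth_to_meters(500))   # a nearby object, under a meter
print(raw_depth_to_meters(950))   # roughly mid-room range
print(raw_depth_to_meters(2047))  # None: unresolvable pixel
```

Hacks such as robot navigation or skeleton tracking start from exactly this kind of per-pixel distance map and build up from there.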

These efforts are developing ways to control personal computers with the Kinect. A group at the Massachusetts Institute of Technology (M.I.T.) created the DepthJS hack to surf the Web with the device, whereas Evoluce, a multi-touch screen technology company in Germany, released hacks for gesture-based control of Windows 7 applications as well as PowerPoint and PDF presentations. Microsoft has also enabled videoconferencing through Kinect with Video Kinect, taking advantage of the camera and microphone built into the device.

Now PrimeSense, the Israel-based company that makes the 3-D sensing technology inside Kinect, is teaming up with Taiwan-based computer-maker ASUS on a device called the WAVI Xtion that controls PCs much as the Kinect does. The new device, which debuted at the 2011 Consumer Electronics Show (CES) in Las Vegas, is scheduled to be commercially available later this year.

These advances in controllerless computing might venture far outside the home and office. "Our general vision is natural interaction everywhere," says PrimeSense vice president of marketing and business development, Adi Berenson. "This means that you should expect to see from us solutions for portability, mobility, handhelds, robotics, automotive and more."

The science of gestures
How might Kinect and related technologies improve how we interact with computers?

"If you can imagine the relief you might feel throwing files into the trash with the Kinect or the interest you feel in lovingly arranging things, you can see how gestures can have an impact on you," says computer and social scientist Katherine Isbister at Polytechnic Institute of New York University in Brooklyn. "Being able to use more of our physical expressivity could be great."

Isbister and her colleagues are investigating how specific movements trigger certain feelings, to learn how gesture-based devices such as the Kinect and Nintendo's Wii can essentially use your body to hack into your brain. A better understanding of which motions trigger which emotions could help make gesture-based computer interfaces more enjoyable.

"You might not want to make flat-palm gestures, for instance, as if you were touching a screen—that might not be especially ergonomic and could cause strain if you hold that pose for too long," Isbister says. "You might want something more like the curved wrists and circular, flowing movements you see in tai chi. We want to develop a vocabulary that really takes advantage of the technology."

The broad, sweeping motions used with the Kinect and similar technologies would likely not be suited to anything requiring subtle motions, such as typing, or to any long-term activity that can tire your arms if you keep them raised too long. "The keyboard and mouse is much better for long-term, fine-grained work," says M.I.T. Media Lab computer scientist Aaron Zinman, who worked on the DepthJS hack of Kinect to surf the Web. The Kinect also needs a lot of space, requiring that users stand about two to 2.5 meters away from their screens, Isbister notes, which could also lead to privacy issues at the office.

"It is best not to think about traditional typing and mouse applications being directly controlled by the Kinect, but rather to think of new possibilities for which the keyboard and mouse were not well suited," Zinman adds. "When we think of group collaboration or pulling in real-time physical world objects into a digital setting—architectural models—these are examples where we want digital tools to aid us that go beyond a single person sitting at a computer."

In the office one might imagine using gestures to sort and rummage through mountains of files "like you would organize work spaces in your house," Isbister says. "It could prove useful for all those endless tabs you have in Web browsers, or how you can have lots of windows on your screen that are very confusing and make you lose your sense of place in the space you are arranging for yourself." These systems could also work together with methods for visualizing data to help users hunt through arcane databases for vital details or interesting trends.
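A gesture vocabulary of the kind Isbister describes ultimately reduces to code that maps tracked body motion onto commands. As a hypothetical illustration (the function name, thresholds and position data below are all assumptions, not drawn from any actual Kinect hack), here is how one entry in such a vocabulary, a horizontal swipe for flicking a file or browser tab aside, might be classified from a short history of a tracked hand's positions:

```python
# A hypothetical sketch of one entry in a gesture vocabulary:
# classifying a horizontal swipe from a brief history of hand
# x-positions (in meters, as skeleton-tracking hacks expose them).

def classify_swipe(xs, min_travel=0.4, max_backtrack=0.05):
    """Label a sequence of hand x-positions as 'swipe-right',
    'swipe-left' or None.

    min_travel: total horizontal distance needed to count as a swipe.
    max_backtrack: tolerated motion against the dominant direction,
    so natural jitter does not disqualify the gesture.
    """
    if len(xs) < 2:
        return None
    forward = sum(max(b - a, 0.0) for a, b in zip(xs, xs[1:]))
    backward = sum(max(a - b, 0.0) for a, b in zip(xs, xs[1:]))
    if forward >= min_travel and backward <= max_backtrack:
        return "swipe-right"
    if backward >= min_travel and forward <= max_backtrack:
        return "swipe-left"
    return None  # too short or too jittery to call

# A steady right-to-left sweep of the hand across half a meter:
print(classify_swipe([0.5, 0.35, 0.2, 0.05]))  # swipe-left
```

The thresholds are exactly where the ergonomic questions Isbister raises come in: set min_travel too high and users must make tiring, exaggerated sweeps; set it too low and ordinary fidgeting triggers commands.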

The gesture-technology frontier
The office is not the only place where the Kinect and related technologies might go. At CES, Oslo-based Elliptic Labs debuted a Kinect-like touchless interface for iPads and other tablet computers that uses ultrasound to scan people's movements, whereas Brussels-based joint venture Softkinetic–Optrima, in partnership with Rotterdam-based Metrological Media Innovations, unveiled a gesture-based remote control for televisions.

Microsoft's Kipman says the company's focus with the Kinect is on games and entertainment. Still, "it's easy to imagine the many ways Kinect could be used," he notes. "Microsoft has deep investments in natural user interface. It is part of the company's long-term strategy."

If gestures do become a regular part of everyday computing, "it's hard to anticipate what the secondary effects might be until after you see this technology deploy to a million or more people," Isbister says. "When people first started talking on their phones through headsets and earpieces, it looked like you saw all these people talking to themselves. It could be with gesture technology, the guy flailing around on the street is just talking on his cell phone."
