
Magic Fingers: Digging Into Multi-Touch Technology with Both Hands

Perceptive Pixel chief scientist Jefferson Han has big plans for changing how people use computers

At Perceptive Pixel's offices on Manhattan's West Side, Jefferson Han stands in front of a megasize multi-touch screen and runs his fingertips across the display. Each finger leaves a trail of colored pixels in its wake, causing the display to look, briefly, like it has been scratched by a set of digital claws.

Han, who founded Perceptive Pixel in 2006 and serves as the company's chief scientist, next uses his index finger to draw a loop on the 100-inch display. It causes a menu of options to appear on the screen, not all that different from using a mouse to click on a drop-down menu. Of course, Han doesn't need a mouse, or a keyboard for that matter. He selects one of his menu options, which are arranged in a loop resembling the one he drew to pull up the menu, and away he goes.

Perceptive Pixel's technology is best known for the role it played in helping cable news network CNN create its "Magic Wall," a dynamic, on-air graphical representation of voting results during the network's coverage of the 2008 presidential election. Although Han acknowledges that multi-touch is "interesting" at the level of gadgets such as Apple's iPhone, the technology comes into its own only when you're able to manipulate objects on a display screen using both hands.

Creating software to make multi-touch work properly is the hardest part of developing these systems. "The interface is such a paradigm shift," he says. "You can't just bolt the software onto existing hardware." For this reason, Han and his colleagues have essentially created a new operating system for touch-sensitive technology, one that recognizes an arbitrary number of touch points and, ideally, lets multiple users manipulate the same display simultaneously.
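To give a flavor of what recognizing an arbitrary number of touch points involves in software, here is a minimal sketch of one standard ingredient: matching each new frame's contacts to the previous frame's so that every finger keeps a stable ID as it moves. This is a generic, hypothetical illustration in Python, not Perceptive Pixel's code; the 60-pixel matching radius is an assumed tuning value.

```python
# Hedged sketch of frame-to-frame touch tracking: greedily match each new
# contact to the nearest contact from the previous frame so a finger keeps
# one ID while it moves. Generic illustration, not Perceptive Pixel's code.
import math

class TouchTracker:
    def __init__(self, max_jump=60.0):
        self.max_jump = max_jump  # assumed: farther moves count as a new touch
        self.contacts = {}        # id -> (x, y)
        self._next_id = 0

    def update(self, new_points):
        """Assign stable IDs to this frame's (x, y) contacts."""
        matched = {}
        unclaimed = list(new_points)
        for tid, (px, py) in self.contacts.items():
            if not unclaimed:
                break
            nearest = min(unclaimed, key=lambda p: math.hypot(p[0] - px, p[1] - py))
            if math.hypot(nearest[0] - px, nearest[1] - py) <= self.max_jump:
                matched[tid] = nearest       # same finger, slightly moved
                unclaimed.remove(nearest)
        for p in unclaimed:                  # brand-new touches get fresh IDs
            matched[self._next_id] = p
            self._next_id += 1
        self.contacts = matched
        return matched

tracker = TouchTracker()
tracker.update([(10, 10), (200, 200)])         # two fingers down -> IDs 0 and 1
print(tracker.update([(14, 12), (205, 198)]))  # {0: (14, 12), 1: (205, 198)}
```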

We sat down with Han to talk about his approach to multi-touch, how it differs from the offerings of consumer electronics makers, and where the technology is headed.

[An edited transcript of the interview follows.]

What prompted you to start researching and developing multi-touch sensing technology?
The user's interface with the computer, not the back-end processing horsepower, has become the bottleneck: the user accesses information through a straw, in the form of a mouse and keyboard. The input needs more attention, but it's very difficult to impose a new interface on people. Still, as you see computing in more places, people realize you can ask for more than just the keyboard. As much as I hate [the movie] Minority Report, it had some impact on people realizing that there's more to computing than simply doing one thing at a time; you can bring your other hand around and use that, too. One way to approach a new interface is to ask, "What would a child do with this?"
 
How does your multi-touch work build on your previous research in the areas of computer graphics, machine learning, real-time computer vision and human-computer interfaces?
My interest is in the visual; that's really why I got into programming and, in particular, computer graphics. Graphics has lately made a great shift toward machine learning, which itself is about understanding data. That's where the computer vision work came from. Computer vision is the inverse problem of computer graphics. Instead of telling the computer what the state of the world is, the computer has to figure it out [using, for example, visual cues]. How do you get computers to understand a scene? How do you clean up the sensor information [that the computers are receiving]? Doing this involves strong engineering and math components.

How closely is the future of smart phones, tablet computers and other consumer electronics tied to advances in multi-touch sensing technology?
Because so much of the hardware has gotten relatively good, the hardware itself is not a differentiator; it's about design (finally). If you watched companies such as Sony and Samsung grow, they focused first on features and then on industrial design, which made their products look and feel better. Now we're at the point of industrial design at the software level.
 
In what ways does multi-touch sensing help people better take advantage of advances in consumer electronics?
The consumer devices still aren't using all of the capabilities of multi-touch. Last December, Microsoft got a lot of PC companies to add multi-touch capabilities to their products, in advance of Windows 7. But they shouldn't have launched these devices without ways to take advantage of [the multi-touch interface]. That's exactly how Tablet PCs died.
 
How does Perceptive Pixel's multi-touch technology work?
It's an optical technique. In fiber optics, the cable is a light pipe, or waveguide, into which you inject light. If a finger presses on the pipe, it disrupts the light within the waveguide. [Perceptive Pixel's] big display screen has a large, flat waveguide. Whenever our screen is touched, it causes a disruption, or "frustration," in the waveguide [at a specific location on the display] that is picked up by a sensor. This type of sensor design offers particularly high resolution and fidelity, and it is quite scalable.
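In frustrated-total-internal-reflection (FTIR) systems like the one Han describes, the light escaping the waveguide is commonly picked up by an infrared camera behind the screen, and each touch shows up as a bright blob. The sketch below shows that camera-side detection step in Python with OpenCV; the blur size, brightness threshold, and minimum blob area are illustrative assumptions, not Perceptive Pixel's actual pipeline.

```python
# Hypothetical sketch: locating FTIR touch "blobs" in an infrared camera
# frame. Assumes a grayscale frame where touches appear as bright spots;
# all thresholds are illustrative. Uses the OpenCV 4.x API.
import cv2
import numpy as np

def find_touch_points(frame: np.ndarray, min_area: float = 20.0):
    """Return (x, y) centroids of bright blobs in a grayscale IR frame."""
    blurred = cv2.GaussianBlur(frame, (5, 5), 0)  # suppress sensor noise
    _, mask = cv2.threshold(blurred, 40, 255, cv2.THRESH_BINARY)  # keep bright pixels
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    points = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] >= min_area:  # ignore specks below the area cutoff
            points.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return points

# Example: a synthetic 480x640 frame with two fake "touches"
frame = np.zeros((480, 640), dtype=np.uint8)
cv2.circle(frame, (100, 200), 8, 255, -1)
cv2.circle(frame, (400, 300), 8, 255, -1)
print(find_touch_points(frame))  # two centroids, one per touch
```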
 
How does Perceptive Pixel's technology compare with what Apple is using in its iPhone, iPod Touch and iPad?
There are many different ways to do touch sensing. With the iPhone, Apple measures the capacitance between a grid of wires on top of the display. When you put your finger there, it changes the capacitance at an intersection of the wires. The change is subtle but quite measurable. This approach is more difficult to scale, and it doesn't work with an arbitrary stylus or when you're wearing gloves.
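As a rough mental model of the grid scan Han describes, a touch controller keeps a calibrated baseline reading for every row-column intersection and flags intersections whose readings shift past a threshold. The sketch below fakes the analog front end entirely (the read_capacitance function and all values are illustrative assumptions), but it shows why a touch registers at a specific intersection.

```python
# Hypothetical sketch of capacitive grid sensing: compare each row/column
# intersection against a calibrated baseline and report the intersections
# where a finger has measurably shifted the capacitance.
import numpy as np

ROWS, COLS = 12, 8
baseline = np.full((ROWS, COLS), 100.0)  # calibrated no-touch readings

def read_capacitance() -> np.ndarray:
    """Stand-in for the analog front end; here we fake one touch at (3, 5)."""
    frame = baseline + np.random.normal(0.0, 0.2, baseline.shape)  # sensor noise
    frame[3, 5] -= 4.0  # a finger shifts the coupling at one intersection
    return frame

def scan(threshold: float = 2.0):
    """Return (row, col) intersections whose reading deviates past threshold."""
    delta = np.abs(read_capacitance() - baseline)
    return list(zip(*np.where(delta > threshold)))

print(scan())  # e.g. [(3, 5)]
```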
 
How much progress has been made in developing a common standard for multi-touch that device-makers can agree on?
Windows 7 made an attempt to create a standard for how we get the data from the hardware to the software, but more work needs to be done. The broader question is, how do we create standards for the modalities used as part of multi-touch? Such standards would mean you would be using the same gestures on every multi-touch device. That's a problem I don't believe will be solved by an organized movement. It will more likely be solved by the dominance of a particular instance of the technology.
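To make that hardware-to-software layer concrete, here is a hedged sketch of the kind of per-contact record such a standard typically carries; Windows 7's touch messages expose comparable information, but the field names and types below are illustrative, not taken from any specific API.

```python
# Hypothetical sketch of the per-contact record a hardware-to-software
# touch standard typically carries; field names are illustrative, not
# taken from any specific API.
from dataclasses import dataclass
from enum import Enum

class Phase(Enum):
    DOWN = "down"  # contact just landed
    MOVE = "move"  # contact moved while held
    UP = "up"      # contact lifted

@dataclass
class TouchEvent:
    contact_id: int  # stable ID so software can track each finger over time
    x: float         # position in device-independent coordinates
    y: float
    pressure: float  # 0.0-1.0, if the hardware reports it
    phase: Phase

# One frame of a two-finger gesture, as an application might receive it:
frame = [
    TouchEvent(contact_id=0, x=0.31, y=0.55, pressure=0.8, phase=Phase.MOVE),
    TouchEvent(contact_id=1, x=0.64, y=0.52, pressure=0.7, phase=Phase.MOVE),
]
```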
 
CNN used your multi-touch screen extensively during its coverage of the 2008 presidential election. How did your relationship with CNN come about and what are your thoughts on this use of your technology?
I met CNN at a geointelligence trade show. Our work was initially focused on mapping, and CNN made the connection and wanted to use it for their election coverage, which was still more than a year away. I'm extremely proud of CNN for the way they used the technology: as a tool to help them cover the election rather than as a thing unto itself. [CNN reporter] John King was able to use the display very much like a teacher would in a classroom. One of the great things King ended up doing was working on the display even when he wasn't on camera.

What are some of multi-touch technology's limitations at this point?
One area is that the display doesn't yet know which contact point is coming from which user. At this point, if you're working on a big screen, someone could come up next to you and interfere with the work you're doing. Right now we have to rely on social conventions [think personal space] to avoid this. We want to be able to understand how each person is using the display. The key is understanding more about the touch points than just the actual touch.
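Until displays can identify users directly, one naive workaround is to cluster contact points by proximity and treat each cluster as a separate person. The sketch below is purely illustrative of that heuristic (the 150-pixel grouping distance is an assumption); as Han notes, the real problem of knowing who is touching remains unsolved.

```python
# Hedged sketch: a naive heuristic for guessing which user owns which
# contact point, by clustering touches that land near one another.
# The hardware reports positions only, so this is a workaround, not a
# real identification scheme.
from itertools import combinations

def cluster_touches(points, max_gap=150.0):
    """Group (x, y) touch points; points within max_gap pixels share a cluster."""
    clusters = [[p] for p in points]
    merged = True
    while merged:
        merged = False
        for a, b in combinations(range(len(clusters)), 2):
            if any(((pa[0] - pb[0]) ** 2 + (pa[1] - pb[1]) ** 2) ** 0.5 <= max_gap
                   for pa in clusters[a] for pb in clusters[b]):
                clusters[a].extend(clusters.pop(b))  # merge the two clusters
                merged = True
                break
    return clusters

# Two hands far apart on a big screen -> two presumed users
touches = [(100, 200), (130, 220), (1800, 400), (1830, 390)]
print(len(cluster_touches(touches)))  # 2
```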
