How closely is the future of smartphones, tablet computers and other consumer electronics tied to advances in multi-touch sensing technology?
Because so much of the hardware has gotten relatively good, the hardware itself is not a differentiator; it's about design (finally). If you watched companies such as Sony and Samsung grow, they focused first on features and then on industrial design, which made their products look and feel better. Now we're at the point of industrial design at the software level.
In what ways does multi-touch sensing help people better take advantage of advances in consumer electronics?
The consumer devices still aren't using all of the capabilities of multi-touch. Last December, Microsoft got a lot of PC companies to add multi-touch capabilities on their products, in advance of Windows 7. But they shouldn't have launched these devices without ways to take advantage of [the multi-touch interface]. That's exactly how Tablet PCs died.
How does Perceptive Pixel's multi-touch technology work?
It's an optical technique. In fiber optics, the cable is a light pipe or waveguide, into which you inject light. If a finger presses on the pipe, it disrupts the light within the waveguide. [Perceptive Pixel's] big display screen has a large, flat waveguide. Whenever our screen is touched, this causes a disruption or "frustration" on the waveguide [at a specific location on the display] that is picked up by a sensor. This type of sensor design is particularly high resolution, high fidelity and quite scalable.
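On the software side, each frustrated spot shows up as a bright blob in the sensor image, and the system turns blobs into touch coordinates. Here is a minimal illustrative sketch of that step, assuming the sensor frame arrives as a 2D grid of brightness values (the frame data, threshold and sizes below are invented for the example, not taken from Perceptive Pixel's system):

```python
# Sketch of touch detection for an optical (FTIR-style) screen: a finger
# frustrates total internal reflection in the waveguide, scattering light
# toward a sensor. Each bright blob in the frame is one touch point.

def find_touches(frame, threshold=200):
    """Return (row, col) centroids of bright blobs in a 2D brightness grid."""
    rows, cols = len(frame), len(frame[0])
    seen = [[False] * cols for _ in range(rows)]
    touches = []
    for r in range(rows):
        for c in range(cols):
            if frame[r][c] >= threshold and not seen[r][c]:
                # Flood-fill one blob of connected bright pixels.
                stack, blob = [(r, c)], []
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    blob.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and frame[ny][nx] >= threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                # Centroid of the blob = estimated touch location.
                touches.append((sum(y for y, _ in blob) / len(blob),
                                sum(x for _, x in blob) / len(blob)))
    return touches

# Two simulated fingertips pressed on a dark 6x8 frame:
frame = [[0] * 8 for _ in range(6)]
for y, x in [(1, 1), (1, 2), (2, 1), (2, 2)]:
    frame[y][x] = 255
frame[4][6] = 255
print(find_touches(frame))  # -> [(1.5, 1.5), (4.0, 6.0)]
```

Because the sensor sees every blob at once, this approach naturally reports many simultaneous touches, which is part of why the design scales well to large screens.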
How does Perceptive Pixel's technology compare with what Apple is using in its iPhone, iPod Touch and iPad?
There are many different ways to do touch sensing. With the iPhone, they are measuring the capacitance between a grid of wires on top of the display. When you put your finger there, it changes the capacitance at an intersection of the wires. It's subtle but quite measurable. This approach is more difficult to scale and doesn't work with an arbitrary stylus or when you're wearing gloves.
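The capacitive scheme described above can be sketched in a few lines: the controller keeps a no-touch baseline for each wire intersection and flags the intersection whose measured value shifts the most. This is an illustrative toy, not Apple's actual controller logic, and the values and noise floor are invented:

```python
# Sketch of grid-based capacitive sensing: compare each row/column wire
# intersection against its no-touch baseline; a fingertip shifts the
# capacitance there, and the largest shift marks the touch location.

def locate_touch(baseline, reading, noise_floor=5):
    """Return (row, col) of the strongest capacitance change, or None."""
    best, where = noise_floor, None
    for r, (b_row, m_row) in enumerate(zip(baseline, reading)):
        for c, (b, m) in enumerate(zip(b_row, m_row)):
            delta = abs(m - b)  # change caused by the finger's capacitance
            if delta > best:
                best, where = delta, (r, c)
    return where

# Invented raw values for a tiny 3x3 grid of wire intersections:
baseline = [[100, 101, 99], [100, 100, 100], [98, 100, 102]]
reading  = [[100, 103, 99], [101, 130, 104], [98, 102, 103]]
print(locate_touch(baseline, reading))  # -> (1, 1), the touched intersection
```

The small off-peak deltas in the example stand in for the "subtle but quite measurable" signal mentioned above; a gloved finger would barely move the numbers past the noise floor, which is the failure mode described in the answer.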
How much progress has been made in developing a common standard for multi-touch that device-makers can agree on?
Windows 7 made an attempt to create a standard in how we get the data from the hardware to the software, but more work needs to be done. The broader question is, how do we have standards for the modalities used as part of multi-touch? Such standards would mean you would be using the same gestures on every multi-touch device. That's going to be a problem that I don't believe will be solved by an organized movement. It will more likely be solved by the dominance of a particular instance of technology.
CNN used your multi-touch screen extensively during its coverage of the 2008 presidential election. How did your relationship with CNN come about and what are your thoughts on this use of your technology?
I met CNN at a geointelligence trade show. Our work was initially focused on mapping, and CNN made the connection and wanted to use it for its election coverage, then more than a year away. I'm extremely proud of CNN for the way they used the technology—as a tool to help them cover the election rather than having the technology be a thing unto itself. [CNN reporter] John King was able to use the display very much like a teacher would in a classroom. One of the great things King ended up doing was working on the display even when he wasn't on-camera.
What are some of multi-touch technology's limitations at this point?
One area is that the display doesn't yet know which contact point is coming from which user. At this point, if you're working on a big screen, someone could come up next to you and interfere with the work you're doing. Right now we have to rely on social conventions [think personal space] to avoid this. We want to be able to understand how each person is using the display. The key is understanding more about the touch points than just the actual touch.