Multitouch and gesture interfaces

What was first a trickle is now a torrent. Apple’s iPhone, Nintendo’s Wii, and Microsoft’s Surface and Windows 7 all confirm that multitouch and gesture interfaces are likely to dominate input hardware soon. They’re often less expensive to engineer and manufacture than button-rich designs, they may be more reliable, and many users delight in their simplicity and naturalness.

They come in roughly three forms, which can be described and examined for their accessibility implications:

Multitouch surfaces. Unlike earlier touchscreens, the new models can register more than one touch at a time and can track movement across the surface. For example, a pinching motion of the thumb and index finger can shrink an image or lower the volume. If this motion can be performed anywhere on the screen, it may accommodate blind users, who cannot use conventional touchscreens because those require touching a specific target area. That's the big difference: finger movements can carry meaning no matter where they are performed, as long as the fingers are in contact with the screen.
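To make that concrete, here is a back-of-the-envelope sketch in Python of how a pinch might be told apart from a spread using two tracked touch points. The names and data format are invented for illustration, not any vendor's actual API, and real recognizers do far more smoothing and filtering than this.

```python
import math

def distance(p1, p2):
    """Straight-line distance between two (x, y) touch points."""
    return math.hypot(p1[0] - p2[0], p1[1] - p2[1])

def classify_pinch(start_touches, end_touches, threshold=0.8):
    """Compare the finger spread at the start and end of a two-finger
    gesture. Returns 'pinch-in' (shrink), 'pinch-out' (enlarge), or None.

    Only the relative change in spread matters; the absolute screen
    coordinates never do, which is why the gesture can work no matter
    where on the surface the fingers land.
    """
    start_spread = distance(*start_touches)
    end_spread = distance(*end_touches)
    if start_spread == 0:
        return None
    ratio = end_spread / start_spread
    if ratio < threshold:
        return "pinch-in"       # fingers moved together: shrink, zoom out
    if ratio > 1 / threshold:
        return "pinch-out"      # fingers moved apart: enlarge, zoom in
    return None

# Two fingers start 200 pixels apart and end 90 pixels apart: a pinch-in.
print(classify_pinch([(100, 300), (300, 300)], [(180, 300), (270, 300)]))
```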

Handheld gesture interfaces. These use built-in sensors that detect how the handheld device is moving in three dimensions. For example, you may be able to answer a mobile phone just by picking it up and disconnect by putting it down (ah, a simpler time…), or change menus by shaking it in a particular way. These devices do require a certain amount of dexterity and grip strength, however. (The forces of swinging a Wiimote during game play have occasionally exceeded the grip strength of presumably non-disabled users.)
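Again purely as illustration, here is a rough sketch of how a shake might be picked out of a stream of accelerometer readings. The function name and sample format are made up; no real sensor API is being quoted.

```python
def detect_shake(samples, spike_threshold=2.5, min_spikes=3):
    """Very rough shake detector over a window of accelerometer samples.

    `samples` is a list of (ax, ay, az) readings in g. A shake is
    declared when the total acceleration spikes well above the 1 g of
    gravity several times within the window. Real devices filter and
    debounce far more carefully than this.
    """
    spikes = 0
    for ax, ay, az in samples:
        magnitude = (ax * ax + ay * ay + az * az) ** 0.5
        if magnitude > spike_threshold:
            spikes += 1
    return spikes >= min_spikes

# A phone sitting still reads roughly (0, 0, 1); vigorous shaking
# produces repeated readings well above that.
still = [(0.0, 0.0, 1.0)] * 20
shaken = still + [(2.8, 0.3, 1.0), (-3.1, 0.2, 0.9), (2.9, -0.4, 1.1)]
print(detect_shake(still), detect_shake(shaken))   # False True
```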

Free space gesture interfaces. These (mostly prototypes so far) use video cameras to detect physical actions; there is no input hardware to hold or touch at all, just image processing software that decides what gesture the user is performing and issues the corresponding command to the gadget. Not having to touch an active surface or hit a set target area may accommodate users with less controlled movements and users with impaired vision.
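One last toy sketch, this time of the very first step such image processing software might take: differencing two frames to find where something moved. The tiny hand-written "frames" stand in for real camera images, and everything downstream that would decide whether the motion was a wave or a swipe is omitted.

```python
def motion_mask(prev_frame, curr_frame, threshold=30):
    """Compare two grayscale frames (lists of rows of 0-255 values) and
    return a mask marking pixels that changed noticeably. This raw
    motion map is what a free-space gesture recognizer would go on to
    interpret as a wave, a swipe, and so on.
    """
    mask = []
    for prev_row, curr_row in zip(prev_frame, curr_frame):
        mask.append([1 if abs(c - p) > threshold else 0
                     for p, c in zip(prev_row, curr_row)])
    return mask

# A tiny 4x4 "image" in which a bright blob moves one pixel to the right.
frame_a = [[0, 0, 0, 0],
           [0, 200, 0, 0],
           [0, 0, 0, 0],
           [0, 0, 0, 0]]
frame_b = [[0, 0, 0, 0],
           [0, 0, 200, 0],
           [0, 0, 0, 0],
           [0, 0, 0, 0]]
for row in motion_mask(frame_a, frame_b):
    print(row)   # 1s appear where the blob left and where it arrived
```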

Clearly, the real accessibility implications of any given product using these input techniques will depend on the angelic and devilish details of the design. For example, an advanced touchscreen may require users to hit a small target with both active fingers, actually raising the barriers for people with vision or dexterity impairments. If there is a correspondence between what’s on the display and what actions users can perform, people with cognitive disabilities may benefit, while those with vision loss may be excluded, all else being equal.

Just as clearly, we’ve only seen the first stages of this transformation of input technologies. Are we saying bye-bye to buttons? Well, when was the last time you bought a product with a knob? Let’s stay alert to the prototypes and see where this ride wants to take us.
