Andy Wilson of Microsoft Research uses a simple webcam, positioned well above the keyboard, to identify special hand gestures for controlling software. He's got some great demo videos at the link below. For example, just putting your thumb and forefinger together allows you to move the current active window around the screen; two hands moving apart or together zoom in and out on an image. The great thing about this approach is that it requires no contact with the user: instead of hitting a precise target in three-dimensional space, you only have to position your hands within the camera's two-dimensional view. Anything that reduces accuracy requirements is great for people with impaired dexterity, and some gestures could be intuitive enough to support users with cognitive disabilities. And if the gestures can be performed anywhere in space, blind users would not be excluded by the strict target areas of touchscreens.
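To make the idea concrete, here is a minimal sketch of the gesture logic described above: a pinch is recognized when thumb and forefinger tips come within some tuned distance of each other, and a two-hand zoom is just the ratio of the current inter-hand distance to the previous one. This is purely hypothetical illustration, not Wilson's actual implementation; the threshold value and function names are made up, and a real system would get the fingertip and hand coordinates from a computer-vision hand tracker rather than hard-coded points.

```python
import math

# Hypothetical tuning value: how close (in pixels) thumb and forefinger
# tips must be before we call it a pinch.
PINCH_THRESHOLD_PX = 40.0

def is_pinch(thumb, forefinger):
    """Pinch gesture: thumb and forefinger tips close together.

    Coordinates are (x, y) pixel positions from the webcam image.
    """
    return math.dist(thumb, forefinger) < PINCH_THRESHOLD_PX

def zoom_factor(hands_prev, hands_curr):
    """Two-hand zoom: hands moving apart zooms in, together zooms out.

    Each argument is a pair of (x, y) hand positions; the factor is the
    ratio of the current inter-hand distance to the previous one.
    """
    d_prev = math.dist(*hands_prev)
    d_curr = math.dist(*hands_curr)
    return d_curr / d_prev if d_prev else 1.0

if __name__ == "__main__":
    # Fingertips ~11 px apart: recognized as a pinch.
    print(is_pinch((100, 100), (110, 105)))
    # Hands move from 100 px apart to 150 px apart: zoom in by 1.5x.
    print(zoom_factor(((0, 0), (100, 0)), ((0, 0), (150, 0))))
```

Note how the decision reduces to plain 2-D distances in the image plane, which is exactly why the accuracy demands on the user are so much lower than with a touch target.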
Also, webcams may be less expensive and less technically demanding to implement than finicky touch surfaces.
By the way, these videos are hosted on Johnny Chung Lee's site; Lee has done some amazing gesture interface work with the Wii and other technologies.