Category Archives: Okay, sometimes it *is* about the technology
HealthSpot is preparing to offer large, enclosed health care kiosks in public locations so people can access their records, their physicians, and other medical services from anywhere. What a great idea! These virtual clinics could be in pharmacies, airports, college campuses, rural libraries, even in developing countries. We’re hoping that enough attention is being paid to usability and accessibility in the design, as this product sits in a market and policy hotspot — there’s lots of attention right now on making both kiosks and health IT as inclusive as possible.
The American Foundation for the Blind’s (AFB) AccessWorld has published the results of a new study on barriers encountered by people who are blind or have low vision when they use everyday household and electronic devices. The results include an all-too-familiar list: no speech output, no physical landmarks, and unreadable printed and electronic text. AFB wonders if touchscreens and small displays are actually making these products less accessible, and it’s hard to argue against that.
AFB surveyed two groups of people: a random sample of American households, screened for vision loss, and a sample of people who had previous contact with AFB. Read the article to see the interesting ways in which the two groups differed in demographics and in technology use.
Google just upgraded its Accessible View experimental search version. The new one shows up in large print, with sound alerts, keyboard navigation, and screen reader compatibility.
A British ICT designer has come up with an interesting blend of technology and elder market awareness. “jive” has a router that’s configured at the point of sale, and a mouseless interface for information retrieval and communication. Each of your friends or family members is represented by a little plastic square, and you just place the square on the screen to see what that person is up to, or to send a message. When you’re not using it, the screen automatically updates you with their doings and whereabouts. This simple, tangible interface may point the way to more inclusive ICT designs.
What was first a trickle is now a torrent. Apple’s iPhone, Nintendo’s Wii, and Microsoft’s Surface and Windows 7 all confirm that multitouch and gesture interfaces are likely to dominate input hardware soon. They’re often less expensive to engineer and manufacture than button-rich designs, they may be more reliable, and many users delight in their simplicity and naturalness.
They come in roughly three forms, which can be described and examined for their accessibility implications:
Multitouch surfaces. Unlike earlier single-point touchscreens, these can register more than one touch at a time and can track movement across the surface. For example, a pinching motion of the thumb and index finger can shrink an image or lower the volume. If this motion can be accomplished anywhere on the screen, it may accommodate blind users, who cannot use conventional touchscreens because those require touching a specific target area. That's the big difference: finger movements can have meanings no matter where they are performed, as long as the fingers are in contact with the screen.
Handheld gesture interfaces. These use built-in sensors that detect how the handheld device is moving in three dimensions. For example, you may be able to answer a mobile phone just by picking it up and disconnect by putting it down (ah, a simpler time…), or change menus by shaking it in a particular way. These devices do require a certain amount of dexterity and grip strength, however. (Wiimotes for game playing have occasionally exceeded the grip force exerted by their presumably non-disabled users.)
Free space gesture interfaces. These (mostly prototypes so far) use video cameras to detect physical actions; there is no input hardware at all, just image processing software that decides what gesture the user is performing and issues the corresponding command to the gadget. Not having to be in contact with an active surface or having a set target area may accommodate users with less controlled movements and users with impaired vision.
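To make the location-independence point concrete, here is a minimal sketch (in Python, purely illustrative; real multitouch APIs and event models differ, and the function name and threshold are our own invention) of why a pinch can be recognized without the user ever hitting a specific target: the classifier looks only at the change in distance between the two contact points, never at where on the screen they land.

```python
from math import hypot

def classify_pinch(start, end, threshold=0.2):
    """Classify a two-finger gesture from its start and end touch points.

    start, end: pairs of (x, y) coordinates, one per finger.
    threshold: minimum fractional change in finger spread to count.

    Only the *relative* spread between the two fingers matters, so the
    gesture means the same thing anywhere on the screen -- the property
    that can make it usable without sight of a target area.
    """
    def spread(points):
        (x1, y1), (x2, y2) = points
        return hypot(x1 - x2, y1 - y2)

    s0, s1 = spread(start), spread(end)
    if s0 == 0:
        return "none"
    change = (s1 - s0) / s0
    if change <= -threshold:
        return "pinch-in"   # fingers moved together: shrink / zoom out
    if change >= threshold:
        return "pinch-out"  # fingers moved apart: enlarge / zoom in
    return "none"
```

Note that shifting both start and end coordinates by the same offset leaves the result unchanged, which is exactly the accessibility-relevant property described above.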
Clearly, the real accessibility implications of any given product using these input techniques will depend on the angelic and devilish details of the design. For example, an advanced touchscreen may require users to hit a small target with both active fingers, actually raising the barriers for people with vision or dexterity impairments. If there is a correspondence between what’s on the display and what actions users can perform, people with cognitive disabilities may benefit, while those with vision loss may be excluded, other features being equal.
Just as clearly, we’ve only seen the first stages of this transformation of input technologies. Are we saying bye-bye to buttons? Well, when was the last time you bought a product with a knob? Let’s stay alert to the prototypes and see where this ride wants to take us.
A new prototype device consists of a system of braces, actuators, and controllers that let the strapped-in user stand, walk, and climb stairs. As usual, though, commercialization is going to be a challenge, as will be funding and therapeutic support. Too often, good engineering is the easy part.