Norman and Nielsen's critique of gestural interfaces

The September-October issue of ACM interactions includes an essay by Don Norman and Jakob Nielsen in which they critique various aspects of gestural interaction as implemented on the new crop of touch-enabled devices: the iPhone, the iPad, the Android family, and the like. The gist of their concern is that the design of these interfaces, while incorporating useful and pleasurable interactions based on touch (swipe, pinch, etc.), also introduces a range of usability problems well known to the HCI community.

The article illustrates several classes of defects that violate basic tenets of interface design, including visibility, feedback, consistency, non-destructive operations, discoverability, scalability, and reliability. These defects stem mostly from the lack of standards that would unify the experience created by different vendors, and also from the inherent design challenges of gestural interfaces. It’s not even clear how to prioritize the list above, as many of the interactions violate several guidelines at once.

In some ways, these design flaws parallel those found in pen-based interfaces adapted from mouse-based ones. Microsoft’s Tablet PC operating system featured many pen-based interactions that were awkward to perform because they assumed, on some level, that the stylus could be used like a mouse to point and select. But pointing is more difficult with a stylus, and more difficult still with a finger.

The difficulties lie both in aim (targets have to be much larger than for a mouse) and in intent. Norman and Nielsen give several examples of inadvertent touches that lead to unanticipated results. This is particularly true of document-centric interfaces: I’ve been in many situations in which talking about a document with another person led naturally to pointing, and then to unintentionally touching the screen of an iPad, resulting in unwanted and unexpected navigation and other actions.

The bottom line is that over the last 30 years we’ve learned, both as designers and as users, a set of conventions for dealing with desktop user interfaces, but these lessons do not necessarily transfer wholesale to more tangible devices. Norman and Nielsen argue that the onus is on designers to get the interface right: to proceed in incremental steps rather than large jumps toward a gestural interaction language that fits not only the capabilities of devices but also, and more importantly, the capabilities and expectations of users.