Tabletop interaction

Ken Hinckley, Koji Yatani, and several other people from MSR published an interesting analysis of how to combine pen and touch input on tabletop displays. The work draws on observations of how people manipulate paper to derive design guidelines for bimanual, bimodal interaction. The paper contributes a thorough description of integrated touch and pen-based interaction and a careful analysis of the design principles that underlie these kinds of interactions.

While this paper is likely to receive many citations in the future, it is not clear whether the lessons learned from this study of object manipulation truly generalize to other kinds of applications. The limitations of the work, readily acknowledged by the authors, may in fact stem from inherent limitations of tabletop displays.

While a large number of such systems are described in recent papers, the range of useful applications seems quite limited. Photo and image editing seem to be the mainstay, and a number of papers discuss manipulation and interaction techniques for them. Other applications (e.g., studying for exams, working with documents, etc.) are probably better served by more conventional displays.

It will be interesting to see whether the physical form factor of the display will constrain this technology to niche applications. One plausible scenario in which these kinds of displays become useful is integration with other, conventional displays, so that they are used in an opportunistic, transient manner rather than as dedicated interfaces. In that future, I imagine tabletop displays competing with wall displays both for users' attention and for their hardware budgets.