Paper still plays an important role in many tasks, even in this age of computers. This can be attributed to paper's unique advantages in display quality, flexibility of spatial arrangement, instant accessibility, and robustness, which existing computers can hardly match. However, paper lacks computational capability and cannot render dynamic information. In contrast, cell phones are becoming powerful in computation and communication, providing convenient access to dynamic information and digital services. Nevertheless, cell phones are constrained by their limited screen size, relatively low display quality, and cumbersome input methods. Combining the merits of paper and cell phones to enable rich, GUI-like interactions on paper has become an active research area.
Here at FXPAL, the Paper UI group currently focuses on cell phone-based interfaces, and their supporting techniques, that link paper documents to digital information and enable rich digital interactions on physical paper through content-based image recognition algorithms. We started research in this area several years ago (see our project page for more details), and our recent ongoing projects include EMM and PACER.
EMMs (Embedded Media Markers)
EMMs are nearly transparent iconic marks printed on paper documents that signify the existence of media associated with that part of the document; they also guide users' camera operations for media retrieval. A user takes a picture of an EMM-signified document patch with a cell phone, and the media associated with that document location is displayed on the phone. Unlike bar codes, EMMs are nearly transparent and thus do not interfere with the document's appearance. Retrieval of the media associated with an EMM is based on local image features of the captured EMM-signified document patch. Our recent paper on EMMs won the Best Paper Award at the ACM Conference on Intelligent User Interfaces (IUI 2010).
Here are the slides from that presentation.
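To make the retrieval step above concrete, here is a minimal sketch of local-feature-based matching: each document patch in a database is represented by a set of feature descriptors, and a captured patch is matched to the document whose descriptors it resembles most. The function names, the toy 2-D descriptors, and the nearest-neighbor voting scheme are all illustrative assumptions, not the actual EMM implementation.

```python
def match_count(query_desc, doc_desc, threshold=1.0):
    """Count query descriptors whose nearest document descriptor
    lies within `threshold` (Euclidean distance)."""
    count = 0
    for q in query_desc:
        best = min(sum((a - b) ** 2 for a, b in zip(q, d)) ** 0.5
                   for d in doc_desc)
        if best <= threshold:
            count += 1
    return count

def retrieve(query_desc, database):
    """Return the ID of the document whose descriptors best match the query."""
    return max(database, key=lambda doc_id: match_count(query_desc, database[doc_id]))

# Toy database: each document patch is a small set of 2-D descriptors.
database = {
    "page1": [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0)],
    "page2": [(5.0, 5.0), (6.0, 4.0), (7.0, 5.0)],
}

# A captured patch whose features resemble page2's.
query = [(5.1, 4.9), (6.2, 4.1)]
print(retrieve(query, database))  # prints "page2"
```

A real system would use high-dimensional descriptors extracted from the image and an indexing structure for fast nearest-neighbor search, but the retrieve-by-feature-voting idea is the same.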
PACER (Paper And Cell phone for Editing and Reading)
In PACER, we propose a novel augmented-reality-like cell phone interface optimized for fine-grained interaction with paper documents. In contrast to the existing, limited point-and-click interaction with pre-defined hotspots on paper, PACER allows users to issue rich hybrid camera-touch gestures to select arbitrary paper document details (e.g. individual Latin words, Chinese/Japanese characters, math symbols, places on a map, and arbitrary image regions), and to specify actions to be applied to the selected content. The initial results of PACER were published at CHI 2010.
Here are the slides.
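For selecting fine-grained content, one plausible ingredient (a sketch under our own assumptions, not PACER's actual code) is a homography: once the captured frame is matched to a known page, a 3×3 matrix H maps camera-image coordinates to page coordinates, so a touch on the phone screen can be translated into the word or region it lands on in the document.

```python
def apply_homography(H, x, y):
    """Map a camera-image point (x, y) to page coordinates via homography H."""
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w  = H[2][0] * x + H[2][1] * y + H[2][2]
    return (xh / w, yh / w)  # divide out the projective scale

# Toy homography: a pure scale-and-translate from camera pixels to page points.
H = [[0.5, 0.0, 10.0],
     [0.0, 0.5, 20.0],
     [0.0, 0.0,  1.0]]

# A touch at camera pixel (100, 60) maps to page point (60.0, 50.0).
print(apply_homography(H, 100, 60))
```

In practice H would be estimated from the matched feature correspondences each frame, and the resulting page coordinate would be looked up against the document's word or symbol bounding boxes to resolve the selection.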