There are generally two types of tags for linking digital content and paper documents. Marker-based tags and RFIDs employ a modification of the printed document. Content-based solutions remove the physical tag entirely and link using features of the existing printed matter. Chunyuan, Laurent, Gene, Qiong, and I recently published a paper in IEEE Pervasive Computing magazine that explores the two tag types’ use and design trade-offs by comparing our experiences developing and evaluating two systems that use marker-based tagging — DynamInk and PapierCraft — with two systems that utilize content-based tagging — Pacer and ReBoard. In the paper, we situate these four systems in the design space of interactive paper systems and discuss lessons we learned from creating and deploying each technology.
Blog Author: Scott Carter
For an article I’m writing for a well-known magazine I needed to get my hands on one of the new iPads for a few moments, pre-release. I went bottom-up, top-down, pretended to be a reporter, employed vague threats, etc. All to no avail. I suppose the powers-that-be have a good reason for this, but it is a mystery to me. I mean at this point, the cat is out of the bag! On the other hand, I’m not really in the target market (like these guys, I find Apple’s mobile devices far too restrictive — my particular pet peeve is having to subvert the OS just to mount as a drive). So maybe I’m not meant to understand.
Update: This intern slot has been filled.
This is one in a series of posts advertising internship positions at FXPAL for the summer of 2010. A listing of all internship positions currently posted is available here.
Making a decision can be difficult. From choosing the right camera to finding a place to live, people are faced with a dizzying array of choices on one hand and commentary (in the form of blog posts, reviews, etc.) about their different options on the other. But little scaffolding connects the two. We are interested in how to make those connections in order to help people make decisions using innovative data mining and search techniques integrated with rich, interactive visualizations.
Specifically, this project will involve building a data mining system capable of extracting useful summaries and metadata from consumer reviews, and a walk-up-and-use visual interface that makes use of these data to help users browse collections. The intern will be responsible for a subset of the system tailored to their interests, and will be expected to contribute to a paper suitable for IUI or a similar conference describing the system and their experience designing it.
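To give a loose flavor of the review-mining piece, here is a minimal sketch of extracting simple per-product metadata (frequent terms) from a set of reviews. The function name, stopword list, and sample reviews are invented for illustration; a real system would of course use much richer mining (aspects, sentiment, summarization) than raw term counts.

```python
from collections import Counter
import re

def review_keywords(reviews, top_n=5):
    """Naive keyword extraction: count non-stopword terms across reviews.

    This is only an illustration of the kind of metadata a browsing
    interface might consume, not a real mining pipeline.
    """
    stopwords = {"the", "a", "an", "is", "it", "and", "but", "to", "of",
                 "this", "that", "for", "with", "was", "be", "i", "very"}
    counts = Counter()
    for text in reviews:
        for token in re.findall(r"[a-z']+", text.lower()):
            if token not in stopwords:
                counts[token] += 1
    return [word for word, _ in counts.most_common(top_n)]

# Hypothetical camera reviews
reviews = [
    "The battery life is great and the lens is sharp.",
    "Great camera, but the battery drains quickly.",
    "Sharp photos; battery could be better.",
]
print(review_keywords(reviews, top_n=3))
```

Even a toy extractor like this surfaces recurring themes ("battery", "sharp", "great") that a visualization could let users drill into.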
Prospective candidates should be enrolled in a PhD program and should have some experience with data mining and GUI design. Experience with information retrieval is a plus. Please contact Scott Carter if you are interested in this position. For more information on the FXPAL internship program, please visit our web site.
ReBoard is a system we built at the lab to automatically capture whiteboard images and make them accessible and sharable through the web. A technical description of the system is available here. At CHI 2010, Stacy Branham will present an evaluation of ReBoard that she conducted over the summer as an intern at FXPAL [1].
Until then, check out our dorky demonstration video!
And be sure to watch the other videos of the latest and greatest FXPAL technologies.
1. The paper is “Let’s go from the whiteboard: Supporting transitions in work through whiteboard capture and reuse” by Stacy Branham, Gene Golovchinsky, Scott Carter, and Jacob Biehl.
Developers have built applications for mobile phones to support a wide swath of activities, but I would argue that there is no better use for a mobile phone than for those tasks that are fundamentally mobile. And what is more mobile than running? While there have been a variety of research projects (such as UbiFit) designed to encourage exercise, I am more interested here in those applications that support folks who’ve already bought in. For us, smart phones that make it easy to track pace, distance, and even elevation (such as RunKeeper, SportsTracker, and MotionXGPS) have been killer apps. Research projects (such as TripleBeat) are also exploring how to increase competition using past personal results as well as results from other users. Other work has explored using shared audio spaces to allow runners to compete over distances.
How else might we use mobile technologies to improve the running experience? Continue Reading
Laurent and I recently published an article (SeeReader: An (Almost) Eyes-Free Mobile Rich Document Viewer) in the special issue on Pervasive Computing in the International Journal of Computer Science Issues (IJCSI). The IJCSI is open access, meaning that its content is not hidden behind a paywall. Open-access journals are still seen as dubious by many, and perhaps rightly so: most are quite new and tend to carry less prestige than mainstream journals. In return, though, they offer fast turnaround times and wide indexing.
While I’ve had many fabulous female colleagues, past and present, if I’m to choose one to write about it’s a no-brainer.
Jennifer Mankoff is an associate professor at the Human Computer Interaction Institute (HCII) at Carnegie Mellon University. Jen was my graduate advisor at Berkeley, seeing me through a master’s and PhD. Perhaps “nurse” is a better word, as she not only worked tirelessly with me to improve my abilities but at times literally cared for me when I was ill.
Jen is a whirling dervish. A good Samaritan. A force of nature.
Maribeth and Scott are collaborating with Saadi Lahlou on a special issue of the International Journal of Web Based Communities. This special issue grows out of a series of workshops at the UbiComp and CSCW conferences for a community of people interested in technologies that support meetings as well as how those technologies are changing our notion of what a meeting is. We see in this community a deep interest in understanding, augmenting, and challenging meeting practices and group collaboration across a variety of media spaces and contexts.
Please see the CFP web page for more information.