Blog Archive: 2010

pCubee: an interactive cubic display


Our friend Takashi Matsumoto (who built the Post-Bit system with us here at FXPAL) built a cubic display called Z-agon with colleagues at the Keio Media Design Laboratory. Takashi points us to this video of a very nicely realized cubic display (well, five-sided, but still). It’s called pCubee: a Perspective-Corrected Handheld Cubic Display, and it comes from the Human Communications Technology Lab at the University of British Columbia. Some of you may have seen a version of this demoed at ACM Multimedia 2009; it will also be at CHI 2010. A longer and more detailed video is here.

Eddi-fying tweet browsing


Michael Bernstein and the usual suspects wrote a nice position paper for the CHI 2010 microblogging workshop. They describe Eddi, a system that allows people to group tweets by topic to make sense of large numbers of tweets. In some sense, they are addressing a similar problem to the one that Miles Efron and I tackled in our paper. In both cases, the system uses various sorts of analysis to group and filter tweets to help people understand the collection or the stream.
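Eddi’s actual pipeline is described in the position paper, not here; but the basic idea of grouping a tweet stream by topic can be sketched with a deliberately naive rule: assign each tweet to its first hashtag, falling back to an “untagged” bucket. Everything below — the record format and the grouping rule — is a hypothetical illustration, not Eddi’s implementation.

```python
from collections import defaultdict

def group_by_topic(tweets):
    """Naively bucket plain-text tweets by their first hashtag.

    Tweets with no hashtag fall into an 'untagged' bucket. This is a
    toy stand-in for real topic detection, which would use far richer
    text analysis.
    """
    groups = defaultdict(list)
    for text in tweets:
        tags = [w.lower().rstrip(".,!?") for w in text.split() if w.startswith("#")]
        key = tags[0] if tags else "untagged"
        groups[key].append(text)
    return dict(groups)

stream = [
    "Reading up on #chi2010 workshops",
    "Great demo session at #chi2010 today!",
    "Lunch was good.",
]
buckets = group_by_topic(stream)
```

Even this crude grouping turns an undifferentiated stream into a handful of labeled piles, which is the sense-making step the paper is after.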

Continue Reading

Making sense of Twitter search


Last week Jeremy and I attended the SSM2010 workshop held in conjunction with WSDM2010. In addition to chairing one of the panels, I got an opportunity to demonstrate an interface that I built to browse Twitter search results, to which Daniel alluded in his summary of the workshop. The system is described in a position paper (co-authored with Miles Efron) that has been accepted to the Microblogging workshop held in conjunction with CHI 2010.

The idea behind this interface is that Twitter displays its search results only by date, thereby making it difficult to understand anything about the result set other than what the last few tweets were. But tweets are structurally rich, including such metadata as the identity of the tweeter, possible threaded conversation, mentioned documents, etc. The system we built is an attempt to explore the possibilities of how to bring HCIR techniques to this task.
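The paper describes the actual interface; purely as a hypothetical illustration of the underlying idea — faceting a result set on tweet metadata rather than showing it only by date — here is a minimal sketch. The record fields and facet names are assumptions for the example, not the system’s real data model.

```python
from collections import Counter

# Hypothetical simplified tweet records; real tweets carry much richer metadata.
results = [
    {"user": "alice", "reply_to": None,    "urls": ["http://example.com/a"]},
    {"user": "bob",   "reply_to": "alice", "urls": []},
    {"user": "alice", "reply_to": None,    "urls": []},
]

def facet_counts(tweets):
    """Summarize a result set along a few metadata facets."""
    return {
        "by_user": Counter(t["user"] for t in tweets),
        "replies": sum(1 for t in tweets if t["reply_to"] is not None),
        "with_links": sum(1 for t in tweets if t["urls"]),
    }

facets = facet_counts(results)
```

Counts like these give a reader a gist of the whole result set — who is tweeting, how much is conversation, how much points at documents — instead of just the last few tweets.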

Continue Reading

Exploring workplace communication


Modern work is a collaborative enterprise. As such, it depends on communication among the collaborators to reach successful outcomes. An increasing number of communication tools are based on relatively recent computer technologies, such as email, blogs, wikis, social networking, and Twitter. While there have been many studies of single communication tools in the workplace (IM, wikis, blogging, etc.), we believe that we are one of the first to take a broad view of the communication landscape since the introduction of these new technologies.

In our paper, to be presented at CHI 2010, we explored the communication ecology of a small business. We examined the work communication practices of our participants, including what methods people used to communicate and why, how they viewed the various methods and how they adopted them.

Continue Reading

Reintroducing ReBoard


ReBoard is a system we built at the lab to automatically capture whiteboard images and make them accessible and sharable through the web. A technical description of the system is available here. At CHI 2010, Stacy Branham will present an evaluation of ReBoard that she conducted over the summer as an intern at FXPAL1.
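ReBoard’s own capture pipeline is described in the linked technical report. As a rough, hypothetical sketch of the general idea — save a new whiteboard image only when it differs enough from the last saved one — consider the following, where frames are simplified to flat lists of grayscale pixel values and the threshold is an arbitrary made-up number:

```python
def mean_abs_diff(frame_a, frame_b):
    """Mean absolute per-pixel difference between two equal-length
    grayscale frames (flat lists of 0-255 ints)."""
    return sum(abs(a - b) for a, b in zip(frame_a, frame_b)) / len(frame_a)

def frames_to_save(frames, threshold=10.0):
    """Keep the first frame, then each frame that changed noticeably
    from the last kept one -- a toy stand-in for whiteboard capture."""
    kept = []
    for frame in frames:
        if not kept or mean_abs_diff(kept[-1], frame) > threshold:
            kept.append(frame)
    return kept

blank = [255] * 4
scribble = [255, 0, 255, 0]        # big change: mean diff 127.5
scribble_noise = [254, 1, 255, 0]  # tiny change: mean diff 0.5
saved = frames_to_save([blank, scribble, scribble_noise])
```

The point of thresholding is to avoid archiving a near-duplicate image every few seconds while still catching each new state of the board.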

Until then, check out our dorky demonstration video!

And be sure to watch the other videos of the latest and greatest FXPAL technologies.

1. The paper is “Let’s go from the whiteboard: Supporting transitions in work through whiteboard capture and reuse” by Stacy Branham, Gene Golovchinsky, Scott Carter, and Jacob Biehl.

Does the CHI PC meeting matter?


Jofish reports some interesting numbers regarding the role that the associate chairs play in the outcome of CHI paper reviews. He analyzes the CHI 2010 data to reach the following conclusions:

  • Of the 302 submissions accepted, 57 or so were affected by decisions made at the meeting
  • The 1AC (the primary meta-reviewer) was instrumental in getting a paper rejected 31 times, but was not able to prevent rejection 111 times, and represented reviewers’ consensus 1199 times.
  • He also provides some more ammunition for the desk-reject debate.

It would be great to repeat this analysis on other years to see how reliable the patterns are.

An open question is whether the 57 or so papers whose fates were determined at the PC meeting deserved the outcome they received. (Obviously, the rejected papers’ authors would argue against this process.) It’s also interesting to note that it is not possible to replace the CHI PC process with a rule based on average scores, because both the reviewers and the ACs might then try to game the system by assigning extreme scores to marginal papers.

Extremely Unofficial CHI 2010 review survey


Yesterday, Lennart Nacke expressed the desire to act on a suggestion I made in an earlier blog post: to review the reviewers. So why not? I would like to see if we can collect some data to inform the debate about obtaining quality reviews for conferences such as CHI. The goal is to see whether authors’ ratings of the reviews their papers received can be used to improve the reviewer selection process and to give direct feedback to reviewers as well.

Continue Reading

How to give up on reviewing


Angst turns to anger to acceptance (of your lot, if not of your paper). Yes, it’s the CHI 2010 rebuttal period. A short few days to try to address the reviewers’ misreading of your paper before the program committee throws it into the reject pile, requiring you to rewrite it for another conference. While it is easy to find fault with a process that puts one or more person-years of work into the hands of “three angry men” who may or may not be in a position to judge the work fairly, it is not clear how to improve the process. James Landay recently wrote about the frustrations of getting systems papers accepted, and in a comment on that post, jofish pointed out that the concerns apply more widely because CHI consists of many small communities with different methodological expectations that are not necessarily appreciated by reviewers.

Continue Reading

Test-driven research


This has been a busy summer for the ReBoard project: Scott Carter, Jake Biehl, and I spent a bunch of time building and debugging our code, and Wunder-intern Stacy ran a great study for us, looking at how people use their office whiteboards before and after we deployed our system. We’ll be blogging more about some of the interesting details in the coming months, but I wanted to touch on a topic that occurred to me while we were working on the CHI 2010 submission.

Continue Reading