Not having gone to SIGIR 2010, I missed Gary Flake’s keynote address, in which he described and demonstrated Microsoft Pivot, a zoomable, faceted search interface that his group built. Jeff Dalton has a good summary of the talk, which parallels Gary’s previous presentations, including a TED talk (video below). The demos are pretty slick, and the scale at which the system operates is impressive.

In some ways, his emphasis on rich clients and interactive control over large, pre-computed datasets is a great illustration of HCIR principles. The user is encouraged to explore by performing fluid, immediate, reversible operations over large data sets with the goal of finding useful information.

He points to server-side issues (ranking, faceting, cleaning the data) as the biggest challenge to this kind of interaction.

With respect to interaction, he uses the Uncanny Valley analogy. The term “Uncanny Valley” refers to the sense of revulsion people feel toward robots that come close to imitating the appearance and behavior of humans. The analogy is that dynamic user interfaces tend to descend into glitzy animation that detracts from the actual tasks. By focusing on the data, Pivot strives to avoid this fate.

It seems to me that another major challenge is inherent in this style of interaction. The dynamic, visual displays characteristic of Pivot seem to work best with visual data that can be understood at a glance; the strength of the interface lies in providing powerful and intuitive filtering and pivoting to arrive at juxtapositions of these images that lead to insight.

It’s not at all clear to me, however, how one would use this effectively for large collections of textual documents, particularly in situations with ill-defined information needs. While some exploratory search tasks may be handled well by this sort of interaction, tasks that require immersion in the text of documents to achieve deeper understanding of the information need and of the collection may not lend themselves well to this kind of image-biased visualization.

The team has made a great start at solving some of the hard data management issues; it would be great to see them explore some of the harder application problems around HCIR. Success there would truly cross the Uncanny Valley.


6 Comments

  1. […] This post was mentioned on Twitter by Xavier Amatriain, Gene Golovchinsky. Gene Golovchinsky said: Posted “Pivot” https://palblog.fxpal.com/?p=4278 #sigir10 […]

  2. I basically agree with you. My own thoughts about the keynote here: http://thenoisychannel.com/2010/07/21/sigir-2010-day-1-keynote/

  3. […] from:  FXPAL Blog » Blog Archive » Pivot By pivot | category: pivot | tags: dalton, faceted-search, flake, group, having-gone, […]

  4. I would love to see an account of the Q&A. I think a fundamental question for this kind of research is what are the limits of abstraction for conveying information in the absence of a strong schema (i.e., when operating in Belkin’s Anomalous State of Knowledge)? How much can we do with slick UIs to substitute for reading text? And what are the associated tradeoffs in terms of comprehension, depth of understanding, and serendipitous discovery?

  5. Yes, image-based exploratory search would be really hard to do well over purely textual collections. But with good metadata (ideally about the contents), there’s hope.

     That metadata doesn’t itself need to be visual, or even easily understood on its own.
     For instance, if there are a sizeable number of known data elements, Chernoff faces are one way to improve understanding. Berg is using this concept for Schooloscope. Schooloscope would be a good project to look at (and maybe even formally study) regarding UIs and comprehension, and what is lost and gained in various presentations of the information.

  6. I bet that it’s possible to train someone to perform well when making sense of visualizations of multi-dimensional data; it remains to be seen how much effort and motivation are required to achieve that training. I am sure people have studied the effects of learning on the ability to perform sensemaking with complex visualizations, but I wonder how generalizable those results are.
