Blueprint for information seeking evaluation

Friday, July 10th, 2009

I dodged being empaneled on a jury, and I made it to IBM Almaden to attend most of NPUC. I missed the talk by Brad Myers, which I’ll have to watch on video, but I got to see most of the other presentations and the poster/demo session. One demo I found particularly interesting was Mira Dontcheva’s Blueprint work. Blueprint is an Eclipse plugin for Flex programming that lets people search for snippets of code directly from the IDE and displays the results in an overlay or side bar. Blueprint makes it possible to search the web through an interface similar to typical auto-complete functionality. Furthermore, because it understands Flex syntax, its ranking should be more accurate than that of a regular full-text index that happens to contain code. When you select a search result, Blueprint inserts it into your code and automatically annotates it with the URL at which the snippet was found, so that you can revisit that page later.

Adobe Blueprint screenshot

This is a great interface for those of us used to Programming By Google, an interesting example of search integrated into a larger task, and also a great opportunity for information seeking evaluation: first of all, you have real people performing real tasks. On top of that (with suitable instrumentation and opt-in), you can measure how many queries people ran each time they used the search feature, and you can analyze the code being created to see whether any of the search results were incorporated either directly (through the built-in paste operation) or indirectly, through reuse of identifiers from the retrieved code. Such an experiment should not only indicate the usability of the approach (compared, for example, with searching in a browser), but also provide evidence that can be used to tweak the ranking algorithm.
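To make the analysis concrete, here is a minimal sketch of the kind of classifier such instrumentation might use to tell direct incorporation from indirect reuse. This is purely illustrative — the function names and the keyword list are my own assumptions, not anything from Blueprint itself — and a real study would want a proper Flex parser rather than a regex over tokens.

```python
import re

# Hypothetical sketch of the proposed analysis; not Blueprint's actual
# instrumentation. Identifiers are approximated with a word-token regex.
IDENT = re.compile(r"[A-Za-z_]\w*")

def identifiers(code):
    """Extract candidate identifiers, ignoring a few common keywords."""
    keywords = {"var", "function", "new", "return", "if", "else", "for", "import"}
    return {tok for tok in IDENT.findall(code) if tok not in keywords}

def incorporation(snippet, user_code):
    """Classify how a retrieved snippet shows up in the user's final code."""
    if snippet.strip() and snippet.strip() in user_code:
        return "direct"      # pasted verbatim via the built-in operation
    if identifiers(snippet) & identifiers(user_code):
        return "indirect"    # identifiers reused from the retrieved code
    return "none"

snippet = "var loader:URLLoader = new URLLoader();"
user_code = "var myLoader:URLLoader = new URLLoader();\nmyLoader.load(req);"
print(incorporation(snippet, user_code))  # → indirect (URLLoader is reused)
```

Aggregated over many sessions, counts of "direct", "indirect", and "none" outcomes per query would give exactly the sort of impact evidence discussed above.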

The above experiment, coupled with interviews of participants, should be able to answer all of William Hersh’s evaluation questions that I mentioned in an earlier post, particularly the often hard-to-assess “Did the system have an impact?” In short, a golden opportunity!

Update: Mira pointed out to me that this work is part of Joel Brandt’s thesis work.

Categories: Information seeking.

One Response to “Blueprint for information seeking evaluation”

  1. FXPAL Blog » Blog Archive » Contextualizing IR Says:

    [...] are not necessarily limited to buying goods and services based on strong product metadata. The Blueprint example that Miles mentions uses some additional heuristics on top of full-text search to express [...]