Yesterday Eleanor posted a great example of a difficult exploratory search. The goal was to answer a question, but not only was it hard to articulate the search effectively, it was also unclear whether an answer even existed. The difficulty of articulation stemmed from the fact that even in combination, the terms Eleanor used to characterize the information need retrieved documents that were similar to the desired information but lacked some key aspect.
This is not the kind of difficulty characteristic of craft-related searches: those information needs were hard to articulate in part because the vocabulary was poorly defined and often not immediately obvious to the searcher. In Eleanor’s case, the vocabulary is (relatively) well defined, and could be refined further if needed.
It is also interesting that several people actually tried the search and found partial answers. Eleanor, of course, had also looked, and (presumably) had not found what she was looking for. I don’t know how many people tried to solve the challenge through search, but at least three posted potential solutions in comments, and the post got a good number of hits during the day.
Might we have been more effective at answering the question had our search interface supported collaboration? Perhaps a tool like SearchTogether would have allowed Eleanor to seed the information-seeking process enough for Daniel and me to dismiss the Wikipedia article we had suggested. Could Francine have branched out faster based on information from these earlier failed searches? Could deeper mediation of the search have uncovered useful documents that were retrieved but not examined because of their lower rank?
This seems like an interesting example of a difficult search task, one worth investigating to understand how to help people refine queries, search broadly, leverage others’ results, and perhaps decide with some confidence that the answer cannot be found. All in all, a much richer set of questions than those that arise from tuning a ranked list to maximize precision in the top ten documents.