It’s that time of the year again, time to solicit your latest and greatest HCIR ideas in written and poster form. We are happy to announce that this year’s Human-Computer Information Retrieval Symposium (HCIR 2013) will be held on October 3 and 4 in Vancouver, BC. Building on last year’s meeting, we will have both short and full papers, as well as plenty of opportunity for discussion and interaction. Short papers will be presented at the poster session, while full papers will be peer-reviewed to first-tier conference standards, will get a regular oral presentation slot, and will be archived in the ACM Digital Library, as were last year’s papers. The deadline for submission is June 30th. For more details, please see the CFP.
Over the past six years of the HCIR series of meetings, we’ve accumulated a number of publications. We’ve had a series of reports about the meetings, papers published in the ACM Digital Library, and an upcoming Special Issue of IP&M. In the run-up to this year’s event (stay tuned!), I decided it might be useful to consolidate these publications in one place. Hence, we now have the HCIR Publications page.
Last week we held the HCIR 2012 Symposium in Cambridge, Mass. This is the sixth in a series that we have organized. We expanded the format of this year’s meeting to a day and a half, and in addition to the posters, search challenge reports, and short talks, we introduced full papers reviewed to first-tier conference standards. I will write more about these later, and for details on other events at the Symposium, I refer you to the excellent blog post by one of the other co-organizers, Daniel Tunkelang.
In this post, I wanted to record my impressions of the keynote talk by Marti Hearst from UC Berkeley.
Exploratory search is an uncertain endeavor. Quite often, people don’t know exactly how to express their information need, and that need may evolve over time as information is discovered and understood. This is not news.
When people search for information, they often run multiple queries to get at different aspects of the information need, to gain a better understanding of the collection, or to incorporate newly-found information into their searches. This too is not news.
The multiple queries that people run may well retrieve some of the same documents. In some cases, there may be little or no overlap between query results; at other times, the overlap may be considerable. Yet most search engines treat each query as an independent event, and leave it to the searcher to make sense of the results. This, to me, is an opportunity.
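The idea above — treating a session’s queries as related rather than independent — can be made concrete with a minimal sketch. The function names, document ids, and result lists here are hypothetical, just to illustrate measuring overlap between two queries’ results and surfacing what a new query adds beyond what the searcher has already seen:

```python
def result_overlap(results_a, results_b):
    """Jaccard overlap between two queries' result sets (by document id)."""
    a, b = set(results_a), set(results_b)
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def new_results(current, seen):
    """Results of the current query not retrieved by earlier queries,
    preserving the engine's ranking order."""
    return [doc for doc in current if doc not in seen]

# Hypothetical ranked result lists from two queries in one session.
q1 = ["d1", "d2", "d3", "d4"]
q2 = ["d3", "d4", "d5", "d6"]
print(result_overlap(q1, q2))    # 2 shared documents out of 6 distinct
print(new_results(q2, set(q1)))  # ["d5", "d6"]
```

A search interface that tracked this kind of session state could, for example, de-emphasize already-seen documents or flag the genuinely new ones, rather than leaving that bookkeeping to the searcher.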
We are happy to announce that the 2012 Human-Computer Information Retrieval Symposium (HCIR 2012) will be held in Cambridge, Massachusetts, on October 4–5, 2012. The HCIR series of workshops has provided a venue for discussion of ongoing research on a range of topics related to interactive information retrieval, including interaction techniques, evaluation, models and algorithms for information retrieval, visual design, user modeling, etc. The focus of these meetings has been to bring together people from industry and academia for short presentations and in-depth discussion. Attendance has grown steadily since the first meeting, and as a result this year we have decided to modify the structure of the meeting to accommodate the increasing demand for participation.
Update: This intern slot has been filled.
It’s intern season again! I am looking for a PhD student well-versed in persuasive/affective computing/captology literature to participate in a research project related to improving the quality of interaction in information seeking environments. The goal of the project is to explore how to increase people’s engagement with systems while performing exploratory search. We would like to improve our current system to make it more usable and to explore some novel interaction techniques.
Applicants should be familiar with basic tactics of designing affective and engaging interfaces in a web-based environment. The internship will last three months, and will be structured to produce and evaluate research systems. As a further incentive, we expect to publish the results of this work at CHI 2013, which will be held in Paris. For more information on the intern process, please see the FXPAL web site, or contact me directly. I would like to fill this internship slot as soon as possible.
I am seeing an interesting not-quite-yet-a-trend: the emergence of collaborative search tools. I am not talking about research tools such as SearchTogether or Coagmento, but of real companies started for the purpose of putting out a search tool that supports explicit collaboration. The two recent entries in this category of which I am aware are SearchTeam and Searcheeze. While they share some similarities, they are actually quite different tools.
Google recently unveiled Citations, its extension to Google Scholar that helps people to organize the papers and patents they wrote and to keep track of citations to them. You can edit metadata that wasn’t parsed correctly, merge or split references, connect to co-authors’ citation pages, etc. Cool stuff. When it comes to using this tool for information seeking, however, we’re back to that ol’ Google command line. Sigh.
Stephen Robertson’s talk at the CIKM 2011 Industry event caused me to think about recall and precision again. Over the last decade, precision-oriented searches have become synonymous with web searches, while recall has been relegated to narrow verticals. But is precision@5 or NDCG@1 really the right way to measure the effectiveness of interactive search? If you’re doing a known-item search, looking up a common factoid, etc., then perhaps it is. But for most searches, even ones that might be classified as precision-oriented, the searcher might wind up making several attempts to get at the answer. Dan Russell’s A Google a Day lists exactly those kinds of challenges: find a fact that’s hard to find.
So how should we think about evaluating the kinds of searches that take more than one query, ones we might term session-based searches?
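For readers less familiar with the metrics named above, here is a minimal sketch of NDCG@k, the standard single-query measure whose adequacy for multi-query sessions is being questioned. The relevance judgments are made-up illustration data, not from any real collection:

```python
import math

def dcg_at_k(relevances, k):
    """Discounted cumulative gain over the top-k ranked results."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k):
    """DCG normalized by the DCG of the ideal (relevance-sorted) ranking."""
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0

# Hypothetical graded relevance judgments for one query's ranked results.
rels = [3, 2, 3, 0, 1, 2]
print(round(ndcg_at_k(rels, 5), 3))  # 0.861
```

Note that the computation sees only one ranked list: a session of three reformulated queries would produce three unconnected scores, which is precisely the gap a session-based measure would need to address.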
HCIR 2011 took place almost three weeks ago, but I am just getting caught up after a week at CIKM 2011 and an actual almost-no-internet-access vacation. I wanted to start off my reflections on HCIR with a summary of Gary Marchionini’s keynote, titled “HCIR: Now the Tricky Part.” Gary coined the term “HCIR” and has been a persuasive advocate of the concepts represented by the term. The talk used three case studies of HCIR projects as a lens to focus the audience’s attention on one of the main challenges of HCIR: how to evaluate the systems we build.