Archive for the ‘Research’ Category
At a PARC Forum a few years ago, I heard Marissa Mayer mention the work they did at Google to pick just the right shade of blue for link anchors to maximize click-through rates. It was an interesting, if somewhat bizarre, finding that shed more light on Google’s cognitive processes than on human ones. I suppose this stuff only really matters when you’re operating at Google scale; at more ordinary scales, the effect, even if statistically significant, is practically meaningless. But I digress.
I am writing a paper in which I would like to cite this work. Where do I find it? I tried a few obvious searches in the ACM DL and found nothing. I searched in Google Scholar, and I believe I found a book chapter that cited a Guardian article from 2009, which mentioned this work. But that was last night, and today I cannot re-find that book chapter, either by searching or by examining my browsing history. The Guardian article is still open in a tab, so I am pretty sure I didn’t dream up the episode, but it is somewhat disconcerting that I cannot retrace my steps.
The prolific Jaime Teevan has decided to blog, as evidenced by the creation of “Slow Searching” a few weeks ago. In a recent post, Jaime wrote about some ways in which Twitter search differed from web search, among which she included monitoring behavior, running “the same query over and over again just to see what is new.” Putting on my Lorite hat for a minute, this seems quite similar (albeit on a different timescale) to the “pre-web” concept of routing or standing queries. At some point, later, Google introduced Alerts, which seemed to be its reinvention of the same concept. And of course tools like TweetDeck make it much easier to keep up with particular Twitter topics.
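The monitoring behavior Jaime describes, and the older routing/standing-query idea, can be sketched in a few lines. This is a minimal illustration, not any real system’s implementation: `search` stands in for a hypothetical search API (e.g. a Twitter search endpoint), and only results not seen in earlier rounds are surfaced.

```python
import time

def standing_query(search, query, interval_s=60, cycles=3):
    """Re-run `query` periodically, yielding only results not seen before.

    `search` is any callable mapping a query string to an iterable of
    result identifiers; it is a placeholder for a real search API.
    """
    seen = set()
    for cycle in range(cycles):
        for result in search(query):
            if result not in seen:
                seen.add(result)   # remember it so later rounds skip it
                yield result
        if cycle < cycles - 1:
            time.sleep(interval_s)  # wait before polling again
```

In effect, Google Alerts and TweetDeck columns are polished versions of this loop: re-issue the same query on a schedule and report the delta.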
We are looking for an intern to work with us this summer in the area of social media analysis. The project will involve understanding and mining patterns within Twitter data, in both text and images. An ideal candidate is a PhD student with strong machine learning skills. Prior experience in image understanding, text data mining, social network analysis, or statistical modeling is a plus. If you are interested in this project, please send your CV to Dhiraj firstname.lastname@example.org or Francine email@example.com.
One of the things we did slightly differently in this year’s HCIR Symposium was to introduce full-length papers, peer-reviewed to top-tier conference standards. We received a number of submissions, each of which was read and discussed by three reviewers. We then rejected some of the papers, and sent several back for a revise-and-resubmit cycle. In the end, we accepted four papers, which have now been published in the ACM Digital Library.
Last week we held the HCIR 2012 Symposium in Cambridge, Mass. This is the sixth in a series that we have organized. We expanded the format of this year’s meeting to a day and a half, and in addition to the posters, search challenge reports, and short talks, we introduced full papers reviewed to first-tier conference standards. I will write more about these later, and for details on other events at the Symposium, I refer you to the excellent blog post by one of the other co-organizers, Daniel Tunkelang.
In this post, I wanted to record my impressions of the keynote talk by Marti Hearst from UC Berkeley.
Open source plays an important role in a research laboratory like FXPAL. It allows our researchers to focus their energy on their own innovations and build on the efforts of the community. Open source projects thrive when many openly contribute their work for the common good. However, FXPAL has a business imperative to protect its innovations. We believe that we have found the balance between contributing back to the open source community and protecting our innovations.
Thus we are happy to announce that we have open sourced DisplayCast under a liberal New BSD license. DisplayCast is a high-performance screen-sharing system designed for intranets. It supports real-time multiuser screen sharing across Windows 7, Mac OS X (10.6+), and iOS devices. The technical details of our screen capture and compression algorithms will be presented at the upcoming ACM Multimedia 2012 conference. The source code is hosted on GitHub. We provide two repositories: an Objective-C-based screen capture, playback, and archive component that targets the Apple Mac OS X and iOS platforms, and a .NET/C#-based screen capture and real-time playback component that targets Windows 7.
We hope others find DisplayCast useful and that they will release their own innovations back to the open source community. FXPAL will continue to open source relevant projects in the future.
Thanks to Frank Nack and Marc Bron, last week I had the opportunity to give a talk in The Netherlands at a NWO CATCH event organized by BRIDGE. NWO is the Dutch national research organization; BRIDGE is a project that explores access to television archives; and CATCH stands for Continuous Access To Cultural Heritage, which is something like an umbrella organization. The meeting was held at the Netherlands Institute for Sound and Vision in Hilversum, a rather interesting building.
Although it was a long way to go for a one-day event, I am grateful to Frank and Marc for the invitation, for their efforts as hosts, and for all the great discussion during the talk, in the breaks between sessions, and, of course, over beers in the evening. It’s great to be able to make such connections; hopefully more collaboration will follow.
For those interested, here are the slides of my presentation, which expands a bit on my earlier blog post about using the history of interaction to improve exploratory search.
Exploratory search is an uncertain endeavor. Quite often, people don’t know exactly how to express their information need, and that need may evolve over time as information is discovered and understood. This is not news.
When people search for information, they often run multiple queries to get at different aspects of the information need, to gain a better understanding of the collection, or to incorporate newly-found information into their searches. This too is not news.
The multiple queries that people run may well retrieve some of the same documents. In some cases, there may be little or no overlap between query results; at other times, the overlap may be considerable. Yet most search engines treat each query as an independent event, and leave it to the searcher to make sense of the results. This, to me, is an opportunity.
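One way a history-aware search interface could act on this opportunity is to track which documents earlier queries in the session already retrieved, and label each result in a new result list as new or previously seen. The sketch below is purely illustrative (the session structure and labels are my own, not any particular system’s):

```python
def annotate_session(query_results):
    """Label each result in an ordered session of (query, results) pairs
    as 'new' or 'seen' relative to all earlier queries in the session.

    Returns a list of (query, [(doc_id, label), ...]) pairs.
    """
    seen = set()
    annotated = []
    for query, results in query_results:
        labels = [(doc, "seen" if doc in seen else "new") for doc in results]
        seen.update(results)  # everything retrieved so far counts as seen
        annotated.append((query, labels))
    return annotated
```

Even something this simple would let the interface de-emphasize documents the searcher has already encountered and foreground what a reformulated query actually contributed.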
We are happy to announce that the 2012 Human-Computer Information Retrieval Symposium (HCIR 2012) will be held in Cambridge, Massachusetts, October 4–5, 2012. The HCIR series of workshops has provided a venue for discussion of ongoing research on a range of topics related to interactive information retrieval, including interaction techniques, evaluation, models and algorithms for information retrieval, visual design, user modeling, etc. The focus of these meetings has been to bring together people from industry and academia for short presentations and in-depth discussion. Attendance has grown steadily since the first meeting, and as a result this year we have decided to modify the structure of the meeting to accommodate the increasing demand for participation.