Search is Dead. Long Live Search!


Yesterday, the WWW 2010 conference featured a panel with representatives of Yahoo! (Andrei Broder – Fellow and VP, Search & Computational Advertising, Yahoo! Research), Bing (Barney Pell – Partner, Search Strategist for Bing, Microsoft), Google (Andrew Tomkins – Director of Engineering at Google Research), and academia (Marti Hearst – Professor, School of Information, University of California-Berkeley) on the current state of search on the web. The title was meant to be provocative, but I doubt that anyone in the room thought that this was a solved problem. I wasn’t at the conference, but was able to follow it on Twitter and through a video feed kindly provided by Wayne Sutton. A persistent recording of the event is available through qik by Kevin Marks, although the audio is rather faint. (Wayne’s feed had great audio, but the panelists were sitting down, and were blocked by the podium!)

The panel covered a lot of ground, and some of this has already been summarized by Jeff Dalton on his blog. In short, the big search engines are moving beyond the top 10 links and exploring additional capabilities — both in the ranking algorithms and in the style of interaction — to satisfy a broader range of information needs.

Vertical search

One interesting theme the panelists kept returning to was the relationship between the big search engines and specialized (vertical) sites. Broder and Tomkins agreed that while they could identify cases when a specialized way of presenting results might be useful, they could not devise a clean way for people to understand and interact with the complex data and interfaces. Instead, they were contemplating triggering transitions to vertical sites (e.g., a travel or a restaurant advice site) for those information needs. Tomkins pointed out that one challenge for managing these transitions from generic search to a specific vertical was how to hand off the context of relevant recent search activity on a search engine to inform subsequent interactions with a vertical.

Mobile applications (embodied by the iPhone and Android) were also seen as alternatives to web search, providing a more focused and task-aware user experience that specializes the generic search capabilities available through a browser. This seemed related to the notion of micro-IR that Miles Efron and I have written about before.

The success of these applications was attributed to trust: while the big search engines are generally seen as trustworthy, smaller verticals may not be as trusted. iPhone apps, on the other hand, were seen to benefit not only from the Apple brand, but also from the (sometimes draconian) control that Apple exercises over these tools. (“Steve Jobs personally approves all apps. No wonder he always looks so tired.”)

Social search

As might be expected, there was considerable discussion of social aspects of search (although, sadly, not of collaborative search). Marti Hearst offered the dichotomy between personalization and socialization, preferring the latter. It’s not clear to me whether using your data or your social network will produce higher diversity of results, but that’s an empirical question.

Tomkins suggested that user-generated content came in two flavors: content per se, generated by a relatively small number of people who had the “content creation gene,” and a much larger population that was happy to create metadata (tagging, etc.). This metadata might be useful in guiding the recombination (composition) of content. Overall, though, the panelists’ sense was that this user-generated content, sliced, diced, and aggregated, would be more effective at generating effective recommendations (i.e., finding relevant content) than the opinions of (recognized) experts. While some queries might be best answered by someone you know, most of them could be crowd-sourced more effectively.

Not dead yet

My takeaway was that when the task started resembling exploratory search or decision-making, the big search engines were reluctant to tackle it for fear of violating users’ expectations. (It should be pointed out, of course, that they spent the last decade establishing those expectations.) While it is often possible to extract some structure implicit in text (Bing does this for its Health service, for example), Broder pointed out that sometimes you can build quite efficient classifiers that are not intelligible (or meaningful) to users. (“All restaurants located on odd-numbered streets.”)

The issue of using schema structure to inform search was raised and dismissed: lightweight schemas might not improve search, whereas more detailed ones were much more difficult to construct and populate. That is, they remain the purview of specialized search engines that can devote the resources to craft the information architecture and the interaction to support specific user needs.

In short, the major search engines clearly recognize that the current systems do not meet a significant fraction of searchers’ information needs, but that general (horizontal) solutions to these problems are hard. This seems to be an opportunity for innovation and for weaning people off the existing style of information seeking. It seems that getting searchers more involved in the search process (as they are when searching for restaurants or researching travel plans) should, in the long run, create more people who are comfortable with exploratory search.



  1. Hi Gene

    Re: “In short, the major search engines clearly recognize that the current systems do not meet a significant fraction of searchers’ information needs, but that general (horizontal) solutions to these problems are hard. This seems to be an opportunity for innovation and for weaning people off the existing style of information seeking.”

    I read Jeff Dalton’s post and yours, and it is fascinating that this topic is being discussed at WWW, though a lot of us have recognized it as an issue for some time.

    The problem highlighted in your quote above is precisely what Xyggy and Bayesian Sets solve. For example, consider the corpus of academic papers. Today, academic papers can be found through a combination of text search and citations. However, this will not help you find all papers of interest in a particular area – Jeff’s notes call these “tail queries”. Search by citations is almost identical to PageRank and is at best a popularity contest – useful, but not the whole story. Given a text query or one or more example papers, Bayesian Sets will find other similar (or relevant) papers.

    Additionally, the interactive search box facilitates refining the relevance of the results by dragging/touching items of interest as well as turning items on and off from the search.


  2. The citation graph (or, more generally, the link structure of the collection) is just one source of information used in exploratory search. Many other tactics are also available, as Marcia Bates’ seminal work on Berrypicking suggests.

    The thing to note about exploratory search is that it is unlikely to be satisfied by any single query or any single keyword set on which similarity might be computed. Rather, it is an iterative process through which searchers collect documents in small clumps, make sense of the documents, and then revise their understanding of the problem. This clumped, piecemeal, incremental approach is what gave rise to the berrypicking analogy.

  3. […] the details of her presentation, she did mention similar topics when she participated at a recent panel on search at the WWW2010 conference. The twitter streams from both events capture her […]
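For readers curious about the Bayesian Sets method mentioned in the first comment, here is a minimal sketch of its scoring rule (following Ghahramani and Heller’s formulation for binary feature vectors). The function name, prior-scaling heuristic, and toy data below are my own illustration, not Xyggy’s actual implementation:

```python
# Toy sketch of Bayesian Sets scoring over binary feature vectors.
# Each item is scored by p(item | query set) / p(item); with Beta-Bernoulli
# priors per feature, the log of this ratio is linear in the feature vector.
import numpy as np

def bayesian_sets_scores(X, query_idx, c=2.0):
    """Rank rows of X by similarity to the query (seed) items.

    X: (n_items, n_features) binary matrix.
    query_idx: indices of the seed items.
    c: scales the Beta prior from the empirical feature means (a common heuristic).
    """
    X = np.asarray(X, dtype=float)
    N = len(query_idx)
    mean = X.mean(axis=0)
    alpha = c * mean + 1e-9            # Beta prior per feature
    beta = c * (1.0 - mean) + 1e-9
    s = X[query_idx].sum(axis=0)       # feature counts within the query set
    alpha_t = alpha + s                # posterior parameters given the seeds
    beta_t = beta + N - s
    # Per-feature log weight; features common in the seeds get positive weight.
    q = np.log(alpha_t) - np.log(alpha) - np.log(beta_t) + np.log(beta)
    # Additive constant dropped: it is the same for every item, so the
    # ranking is unchanged.
    return X @ q
```

Because the score is just a dot product with a precomputed weight vector, ranking an entire collection is a single sparse matrix-vector multiply, which is presumably what makes the drag-an-item-into-the-search-box interaction described above feasible at interactive speeds.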
