Communicating about Collaboration: Depth of Mediation


Thus far in our series on Collaborative Information Seeking we have explored two dimensions: Intent and Synchronization. The next dimension is the Depth at which the mediation (aka support, facilitation) of the multi-user search process occurs.

We can talk about three levels of mediation: communications tools independent of the search engine (e.g., chat, e-mail, voice), UI-level mediation, and algorithmic mediation. The first level typifies most collaborative searching currently done on the web, whereas the other two are more commonly found in research prototypes.


Unmediated collaboration

At the shallowest level, users can collaborate using existing communications tools: instant messaging, voice chat, email, and wikis. Every query that one partner tries, or every relevant document that the other partner finds, can be posted to a wiki page, sent via email, and so forth. Clearly, the computer is facilitating the searchers’ activities: information is communicated across the network (email, IM) and manually stored on jointly accessible content-management platforms (wikis). But those tools are external to the actual search mechanism, and it takes real work for a user to update the wiki every time a new query is run or a new document is found. From our perspective, since the information retrieval system itself isn’t doing anything specific to help the users, and they must help themselves using external tools, this approach can be described as unmediated.

UI-level mediation

At a deeper level of mediation are tools like Merrie Morris’ SearchTogether and Colum Foley’s Físchlár-DiamondTouch collaborative searching.

In SearchTogether, all actions performed by all search team members are automatically logged and sorted in a shared session profile. When any team member marks a piece of information as relevant, that document or link appears in a shared view. The moment any team member views a search result, but doesn’t mark it as relevant, that information is stored as well. If that same document ever comes up as a result of someone else’s search, the system can then grey that document out, so that the team member is automatically made aware that it has already been viewed and evaluated by a collaborating partner. A searcher can also dole out unevaluated results in round-robin fashion. This enables more than one partner to participate in traversing a single ranked list. Finally, search team members can annotate their search activities, so that search meta-information is available to all collaborators.
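The shared-awareness bookkeeping described above reduces to a small amount of shared state. The following is a minimal sketch of the idea, not SearchTogether’s actual implementation; every class and method name here is invented for illustration:

```python
class SharedSession:
    """Shared log of a search team's activity (illustrative sketch)."""

    def __init__(self):
        self.viewed = {}       # doc_id -> first member who viewed it
        self.relevant = set()  # doc_ids marked relevant by anyone

    def log_view(self, member, doc_id, is_relevant=False):
        # Every view is recorded, whether or not it was judged relevant.
        self.viewed.setdefault(doc_id, member)
        if is_relevant:
            self.relevant.add(doc_id)

    def annotate_results(self, member, results):
        # Flag ("grey out") documents already viewed by a *different* member.
        return [(doc, self.viewed.get(doc) not in (None, member))
                for doc in results]

    def split_round_robin(self, results, members):
        # Dole out still-unevaluated results across the team.
        pending = [d for d in results if d not in self.viewed]
        return {m: pending[i::len(members)] for i, m in enumerate(members)}
```

A real system would persist this state and push updates to every client, but the grey-out and round-robin behaviors come down to bookkeeping like the above.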

In Físchlár-DiamondTouch, users share a common tabletop touch display in real time. Queries can be jointly formulated, with each team member adding information or perspective to the query. Results can be partitioned simply by grabbing them from a common results bucket; if a search partner has already grabbed a particular result, it is no longer in the bucket, so you are automatically kept from duplicating that effort and slowing down your joint search.
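The shared results bucket behaves like a concurrent work queue: once a result is grabbed, it is gone for everyone. A minimal sketch of that behavior (the names are illustrative, not from the Físchlár-DiamondTouch code):

```python
import threading

class ResultsBucket:
    """A shared pool of results; grabbing one removes it for all partners."""

    def __init__(self, results):
        self._lock = threading.Lock()
        self._pool = list(results)

    def grab(self):
        # Atomically take the next remaining result, or None if the pool
        # is empty, so two partners can never evaluate the same document.
        with self._lock:
            return self._pool.pop(0) if self._pool else None
```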

We call these types of collaboration systems UI-mediated. Whether the UIs are located on different computers (SearchTogether) or on a shared multitouch computer (Físchlár-DT), the UI’s primary function is to facilitate effortless shared awareness of activities and results. No manual updating of wiki pages or emailing of results is required. These systems also help the team jointly maximize its retrieval effort: depending on the exact nature of the UI tool, queries can be jointly formulated and result sets partitioned to share the workload. However, the defining characteristic of these systems is that, from the perspective of the back-end search algorithm, all collaborating users appear to be a single entity. The search engine itself is not aware that more than one person is formulating a query or dividing up the results. Hence the name: UI mediation.


Algorithmic mediation

Finally, at the deepest level are systems in which the search engine itself takes an active role, returning different results based on the presence of multiple search partners. We call these search systems algorithmically mediated. Examples include Colum Foley’s dissertation work, our own system Cerchiamo, and a variety of recommender and collaborative filtering systems.

In Colum’s work, the relevance judgments assigned to documents by one user affect (via synchronized influence) the ordering of the not-yet-seen documents in the second user’s queue, and vice versa. The Cerchiamo algorithm looks at unseen, low-ranked results from user #1’s query history and combines that information with user #1’s relevance judgments to present user #2 with low-ranked documents that user #1 never got a chance to see. At the same time, the system offers continually updating query suggestions back to user #1, based in large part on the relevance judgments made by user #2.
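As a rough illustration of the Cerchiamo-style idea, consider scoring each document that one partner retrieved but never viewed by its term overlap with relevance-judged material, discounted by how deeply it was buried in the ranking. This is a toy sketch under invented names and a made-up scoring rule, not the published algorithm:

```python
def recommend_unseen(ranked_lists, seen, relevant_terms):
    """Rank documents a partner retrieved but never viewed (toy sketch)."""
    scores = {}
    for ranking in ranked_lists:  # one ranked list per past query
        for rank, (doc_id, terms) in enumerate(ranking):
            if doc_id in seen or doc_id in scores:
                continue
            # Overlap with terms from relevance-judged documents,
            # discounted by how far down the ranking the document sat.
            scores[doc_id] = len(set(terms) & relevant_terms) / (1 + rank)
    return sorted(scores, key=scores.get, reverse=True)
```

Documents the first user judged relevant thus shape what the second user sees next, with no extra effort from either partner.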

Common to both of these systems is that, apart from carrying out one’s regular querying and relevance-marking tasks, no additional user action is required to influence one’s partner. Even though partners are collaborating explicitly, they do not have to make the additional effort of reading every document their partner has found in order to steer their own information-seeking behavior toward a better outcome on the shared task. The search engine itself does it for them. The mediation is algorithmic. Interestingly, a recent study by Joho et al. found that “the concurrent condition participants actually avoided viewing documents viewed by other group members and hence didn’t gain a complete understanding of the topic” compared to participants in the independent condition. (Thanks Sharoda!) This suggests that UI-only mediation may be sensitive to how the information is presented.

Similarly, recommender systems typically collect user behaviors without requiring additional work from the users; those behaviors are stored by the system, then aggregated and compared with others’ behaviors in order to make recommendations.
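The aggregate-and-compare step can be reduced to simple co-occurrence counting. A deliberately minimal sketch (real recommenders use far more sophisticated models):

```python
from collections import Counter

def recommend(histories, user, k=3):
    """Suggest items the user hasn't seen, ranked by how often they appear
    in the histories of other users who share items with this user."""
    mine = histories[user]
    scores = Counter()
    for other, items in histories.items():
        if other == user or not (mine & items):
            continue  # only count users with overlapping behavior
        scores.update(items - mine)  # +1 for each item new to this user
    return [item for item, _ in scores.most_common(k)]
```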

Cumulative functionality

We should note that this dimension, unlike the others, does not consist of mutually exclusive options. In the intent dimension, for example, two users cannot collaborate with each other both implicitly and explicitly; it is one or the other. The Depth dimension, however, is cumulative. Recommender systems, for example, combine recommendations (algorithmic mediation) with modifications to the interface (UI mediation). In general, a system that is mediated at one level also tends to be mediated at all levels “above” it.



  1. Great post! It really helps me in my research to read this blog :)

    I’ve been thinking about the communication that occurs during collaborative Web search (such as chat messages or comments exchanged between group members) and wondering how these could be better leveraged to mediate search results. For instance, if I say in the chat “found a great restaurant in SF for Thai dinner” in a task where I’m planning a vacation in SF with friends, and the Web page of the Thai restaurant is open in my browser window, maybe the system can automatically rank that Web page as relevant/important to the task. If you know of any studies that look at communication during collaborative search, please let me know.

    Also, do you know of any systems for collaborative Web search that combine the UI and algorithmic approaches (apart from recommender systems)?

    (PS: Only now catching up on my blog-reading after CHI)

  2. […] is another good example of a system that supports explicit collaboration with interface-level mediation and synchronized data. It explores some interesting techniques to coordinate multiple users’ […]

  3. […] be understood both in simulations and in actual use. This research serves to inform the design of algorithmic mediation for collaborative search in the presence of symmetric roles. Other algorithms will be required to […]

  4. Alan Smeaton says:


    (yet another) contribution from me is a pointer to a system developed by colleagues in Dublin called Heystaks. This is a form of collaborative IR for web searching which is asynchronous, remote, has both UI mediation and algorithmic mediation in terms of “depth” and is somewhere between explicit and implicit in terms of “intent”. That’s where it fits into this evolving taxonomy of CIR. Let me explain how it works.

    A Firefox toolbar plugin records a user’s Google searches and the pages viewed from the search ranking, storing them in a Stak. Each user can have a number of staks corresponding to long-lived information needs – my own staks are a default stak, a stak on technology, a stak on sentiment analysis, a stak on HCI, etc. – and staks can be private or shared. I tap into and use a couple of dozen staks from others, and my searches contribute to staks, either my own or ones started by others.

    When a user does a Google web search, the query is searched for in all available staks, and pages previously viewed from the search ranking are promoted in a re-ranking of the Google results. A user can give an explicit thumbs-down to a document. A set of icons is associated with each promoted or demoted item, along with other visual representations of things like the query terms used to retrieve it.

    So it is asynchronous and remote in terms of when users contribute and benefit; the system mediation is in the UI (the icons) and in the re-ranking; and the intent is neither an exactly shared, identical information need nor collaborative filtering – it’s something in between. Could this mean that the “intent” dimension could have three values?

    It’s available at

    – Alan

  5. […] support for collaboration, but nothing like SearchTogether or that Alan Smeaton commented […]

  6. It seems like HeyStaks supports multiple kinds of interaction, along the lines of SearchTogether. Both systems allow users to contribute documents independently, but the system synchronizes the data so that the latest documents are available to all people using a given Stak.

  7. […] Marti Hearst’s new book, Search User Interfaces, is out, as Daniel Tunkelang reported earlier. The book covers a range of topics related to interaction around information seeking, including topics such as design, evaluation, models of information seeking, query reformulation, etc. It also discusses emerging trends: Mobile Search Interfaces, Multimedia (although this field has arguably been around long enough to no longer be emerging), Social Search, and natural-language queries. The Social Search section discusses collaborative filtering, recommendation systems, and collaborative search, describing several systems along the full range of depth of mediation. […]

  8. […] (from the forthcoming introduction) we characterize the papers in the issue with respect to intent, depth of mediation and data synchronization dimensions. Figure 1. Intent vs. Depth of mediation for a given task. Grey […]
