A paper presented at CHI 2009 described strategies and processes used by intelligence analysts. Among other aspects, the paper discusses collaboration among analysts, quoting one of their participants:
What I will not trust and put into my analysis is somebody else’s analysis. I need to know the source of the information and build on that so that I can put my level of trust in it and then it’s my name at stake when I provide an answer… I won’t trust their analysis until I look at the source of the information, and it will be, “Do I agree with the conclusions that they came to based on the facts and the evidence?”
This reminds me of a story Cathy Marshall tells about interviewing analysts about their work practices. Sitting in an analyst’s cubicle, she asks (in a suitably non-leading manner) about collaboration. “Oh, we work on our own stuff,” he says, pausing only to hand a stack of documents to a colleague who stops by: “Here’s the stuff I got for you, Bob.”
Poor self-reflection notwithstanding, analysts often do see the benefits of collaboration:
How you look at the data, how you twist the data and the assumptions you make, can lead you different ways. No matter how many different analysts you have, you’ll probably have some differences in analysis. It’s when all of our analysts get together and work out the differences and challenge each other with facts that we get to a better and more prominent answer.
This poses a challenge for deploying collaborative exploratory search applications in the intelligence community. Although collaboration among analysts with different domain expertise has clear benefits, systems that mediate such collaboration will need to capture the rationale behind relevance judgments so that collaborators can understand, and thus trust, the significance of the information they receive.