Angst turns to anger to acceptance (of your lot, if not of your paper). Yes, it’s the CHI 2010 rebuttal period. A short few days to try to address the reviewers’ misreading of your paper before the program committee throws it into the reject pile, requiring you to rewrite it for another conference. While it is easy to find fault with a process that puts one or more person-years of work into the hands of “three angry men” who may or may not be in a position to judge the work fairly, it is not clear how to improve it. James Landay recently wrote about the frustrations of getting systems papers accepted, and in a comment on that post, jofish pointed out that the concerns apply more widely, because CHI consists of many small communities with different methodological expectations that reviewers do not necessarily appreciate.
So how could you make this process more reliable? How could you make reviewers more accountable? One possible solution is to publish the reviews alongside the paper, thereby bringing public attention to an otherwise private process with poor accounting for performance. To keep conflict over paper reviews from escalating beyond the paper in question (which may well happen when people’s tenure cases are involved), reviews would need to be anonymous. Unfortunately, reviewers are unlikely to remain anonymous if enough people see the reviews. Thus any systematic practice of publishing reviews as a side effect of the current review process seems improbable.
The crux of the problem, as I see it, is that there aren’t enough qualified, motivated reviewers for most peer-reviewed submissions at CHI and elsewhere. One key reason, of course, is that the only reward for good reviewing is more reviewing. Instead of growing the pool of qualified, competent reviewers, the existing system is forced to rely on large numbers of less capable (and perhaps less motivated) reviewers, resulting in a rather noisy review process that often makes incorrect decisions, and occasionally quite spectacularly bad ones.
There remains the possibility of public, signed (or anonymous) reviews of the kind that characterize the publish-then-filter model. The scheme works like this: a paper is published on an open-access site such as arxiv.org. People read it and comment on it. Other people rate the comments. In the end, you have a set of high-quality comments that together constitute the review of the paper. Authors are free to post new versions that address some of the comments, thereby improving the paper. A subset of the papers that receive positive reviews can then be selected for presentation at conferences. Reviewers who consistently write good reviews should be rewarded by recognizing reviewing as another class of contribution in the tenure review process. Applied more generally, this approach should also shift credit for contributing to the field from journal editors to journal reviewers, and may shift the “brand” associated with quality scholarship from the journal to the reviewer.
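To make the filtering step concrete, here is a toy sketch of how rated comments might be distilled into a review. Everything here is invented for illustration: the rating scale, the threshold, the field names, and the sample comments are all assumptions, not a description of any real system.

```python
# Toy model of publish-then-filter: readers rate comments on a paper,
# and the highest-rated comments become the paper's de facto review.
# Ratings, threshold, and data shapes are hypothetical.

def select_review(comments, min_rating=4.0, top_n=3):
    """Return the highest-rated comments, best first, as the review."""
    rated = [c for c in comments if c["rating"] >= min_rating]
    rated.sort(key=lambda c: c["rating"], reverse=True)
    return rated[:top_n]

comments = [
    {"author": "rev-a", "rating": 4.8, "text": "Strong evaluation section."},
    {"author": "rev-b", "rating": 2.1, "text": "Misses the point."},
    {"author": "rev-c", "rating": 4.2, "text": "Related work is thin."},
]

review = select_review(comments)
print([c["author"] for c in review])  # → ['rev-a', 'rev-c']
```

The same aggregation could just as easily track reviewer reputation over time, which is what would let tenure committees credit consistently good reviewers.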
Another advantage of this approach is that it dissociates the authorship of an idea from the vetting process, and it admits a more graduated notion of importance than the existing filter-then-publish model permits. The existing model rejects as unpublishable (and therefore uncredited) ideas that have certain real or perceived flaws. But while the flaws may have nothing to do with the core idea of a paper, lack of publication denies authors credit for that idea and denies the community an opportunity to correct inaccurate impressions. Finally, to address James Landay’s point, it allows different communities to recognize the value of specific papers without the overhead of establishing separate peer review processes. This is particularly useful for a field characterized by so many discipline-bridging efforts.