The skilled adversarial reviewer can find reasons to reject any paper without even reading it. This is considered truly blind reviewing. [Cormode, G.]

Many conferences request that submitted papers be anonymized by removing the authors’ names, tweaking the references, removing mentions of the authors’ organization in the paper, etc. The goal of the double-blind review process is to reduce the bias (positive or negative) that reviewers might have based on their knowledge of who wrote the paper. SIGIR, for example, included the following on their submission page for the 2013 conference:

Anonymity. SIGIR reviewing is double-blind. Therefore, please anonymize your submission. This means that all submissions must contain no information identifying the author(s) or their organization(s): Do not put the author(s) names or affiliation(s) at the start of the paper, anonymize citations to and mentions of your own prior work that are directly related to your present work, and do not include funding or other acknowledgments. For example, if you are using your product that is well known in the domain and you think it will be easy for an expert to identify you or your company, we recommend that you use another name for your product (e.g., MyProduct_ABC, MyCompany_ABC). If your paper is accepted, then you will replace the original name in the final version for the proceedings.

Papers that do not follow the above Style, Language, Anonymity instructions will be rejected without review. [emphasis mine]

And, apparently, in some cases, they followed through on this policy. In my opinion, this is too harsh.

First, a desk rejection, given at the end of the review process, for a trivially correctable error such as including the name of the company or product in the paper is just plain mean. It doesn't serve the community, it doesn't serve the authors, it doesn't serve the SIGIR brand. It serves only bureaucracy. Furthermore, I am certain that this policy was not applied consistently, and that a number of papers (mine included) that were accepted for review did not meet the strict interpretation of this rule.

Second, truly anonymizing self-references is just as likely to cause reviewers to complain about a lack of relevant citations as it is to reduce the alleged bias. And if the reviewers do, in fact, know which paper was cited, then what's the point of making it anonymous?

Third, while in some cases it is certainly possible to obscure the authors’ names and institutional affiliation, in situations in which the submitted paper builds on prior work by that same group, it becomes very difficult to preserve anonymity and coherence at the same time. Sure, I can change the name of my system from Querium to Dhrevhz, but once I include a screenshot, any reviewer who has the background to review the paper will know which system is being described.

The upshot of all of this is that true double-blind anonymization is difficult to achieve, and its necessity is unclear. Many conferences run review processes in which reviewers know the identity of the authors, and I haven't seen any studies that attribute quality differences to this factor. Most journal reviewing also does not use double-blind anonymity, and that process seems to work just fine. I think it's time to stop worrying about anonymity and focus instead on more fundamental aspects such as novelty, creativity, thoroughness, etc.

  1. There are strong arguments for double-blind, and reasonable ones against it; the same goes for both single-blind types, and for totally open reviewing. Whichever one's ideals / one's funding source / the weather might lead one to favour at the moment, it's absolutely critical to have a mix of all these review styles.

  2. I agree that different situations call for different review strategies. Let's focus on ACM conferences, something I am familiar with as a meta-reviewer, a reviewer, and an author. In my experience, double-blind reviewing doesn't guarantee any kind of anonymity. Thus, while keeping it may be a matter of convenience, strong enforcement of such policies (e.g., as SIGIR seems to have done this year) does not serve any platonic ideal, and may, in fact, be impossible to implement with sufficient accuracy in a field that encourages a significant amount of incremental work.

  3. What about CHI & UIST? Both are respectable ACM conferences that take a more relaxed approach to anonymity:

    I believe this model is a fairly good tradeoff.

  4. Yes – agreed. It definitely doesn't guarantee anonymity, and is certainly impractical in the context of work that presents, e.g., additions to existing resources. The "mental gymnastics" required to try to achieve some of this anonymity can be very awkward and quite embarrassing for both writer and reader. As a result, it's absolutely unfair and preposterous to reject a paper that has made the initial cursory effort to be anonymised (e.g. "X (1990) showed" instead of "We showed (X, 1990)"). In fact, it seems ridiculous to the point that one might even wonder whether a reviewer not only saw through the anonymisation but then acted unprofessionally, asking for a paper's removal because they recognised it as a competitor's work. In any case, flat-out rejection shouldn't be an option on the table for anything but the most careless of blunders.

  5. Back when I reviewed for theory conferences, they weren't double blind, and several of my colleagues asserted that this was a strength. Because of page limits, conference papers necessarily left out details, making it essentially impossible to verify that the results were correct. Knowing that the author was a figure with a track record provided the confidence to accept the work.

    One can argue that this is not fair; that one should treat all submissions the same. This may be correct from the perspective of the authors. But from the perspective of advancing the field, the given approach may have provided a better way to get important results published.

  6. My sense is that a two-tier reviewing system in which the meta-reviewers are paying attention is a reasonable compromise between the Bayesian expectation that people who did good work in the past are more likely to be doing good work now, and the unbiased review of the work as presented. There is nothing wrong with anonymity per se, but the slavish application of rigid rules does not serve the interests of the community or of individual authors.
