Blog Archive: 2010

On non-anonymous reviewing


Some journals ask reviewers not to reveal themselves. A review process in which the reviewers are anonymous, unless they choose not to be, makes sense. But why shouldn’t reviewers be free to reveal themselves if they wish?

Twice, I have received non-anonymous reviews. In both cases, receiving the non-anonymous review was a thrill. Both reviewers were researchers I highly respected, and their positive opinion of my work meant a lot to me. In one case, the reviewer asked the journal editors to forward a signed review. In the other case, the reviewer sent me e-mail directly with the review attached. That review, while positive, had many excellent suggestions for revisions. Receiving the review more than a month prior to receiving the packet of reviews from the journal enabled us to get a head start on revising the paper, which was the reviewer’s stated reason for sending it to us directly.

I do not know why some journals prohibit reviewers from revealing their identities.

Can open source improve open reviewing?


Naboj is an overlay on arXiv that allows people to comment on articles, to rate articles, and to rate the reviews as well. Unfortunately, its rather minimal interface does not make it easy to organize the display by highly rated reviewers or by thoroughly reviewed papers (i.e., papers with reviews that others found useful), or to restrict searches to particular domains.

These limitations are not inherent in the design of the review process or the data collected on the site, but rather are probably indicative of an under-resourced effort. I wonder if an open-source approach to the design of these kinds of tools would result in a more usable (and thus more useful) way of managing an open peer review process. Is open source the way to open reviewing? I would certainly consider contributing to it.

Social bookmarking for academia


I’ve been going on and on, in blog posts and in comments, about the business of reviewing papers as a socially useful activity (given the right incentives), and about how the reviews themselves should be rated to identify effective reviewers. The idea of rating reviews is not new; Amazon implemented something like it a long time ago. But it is useful to understand it better, and this article by Jared Spool offers a good account of the history, the mechanics, and the effects.


How I have used SciRate


I’ve been using SciRate for two and a half years. I began using it with certain expectations, but my actual use has differed from those expectations.

The simplest and most heavily used feature of SciRate is its “Scites” button. With one click, SciRate members can vote for a paper. Initially I wasn’t sure how I should use this feature. What did my vote mean? Should I only vote for a paper I had read? Did it mean I could vouch for its correctness? Eventually my selfishness kicked in. SciRate made it easy for me to see which papers I had scited, and sciting a paper was so lightweight that it became the easiest way for me to mark papers I wanted to come back to later. I don’t always come back to those papers, but I frequently use my list on SciRate to find a paper whose abstract I vaguely remember reading, or to find a set of papers it would be fun to read over the weekend.

The social side of arXiv


Continuing the “rant first, do research later” tradition of blogging, I had initially written about the publish->filter model of academic publishing without having heard of Naboj or of SciRate. (Technically, I had heard of one of them under an earlier name, but had never had the opportunity to use it.) Having been informed of the existence of these tools, I related my experience with them (here and here), and am now tackling a tool that offers a mechanism on top of arXiv for registering comments on papers. Interestingly, it provides links to a variety of bookmark aggregators, including CiteULike, Connotea, BibSonomy, Digg, and Reddit, but not to SciRate. I wonder what politics drove that decision.


Do not try this at home: playing with arXiv


Having written about reforming the academic publication process, and having suggested that arXiv be used to archive workshop papers for HCIR’09 (and ’07 and ’08), I decided to upload (with the authors’ permissions) papers from the JCDL 2008 workshop on collaborative information seeking that I co-organized last year with Jeremy Pickens and Merrie Morris. I read the info on the site and decided to give it a shot. It turned out to be less straightforward than one might imagine.
