Blog Category: scientific publishing

On non-anonymous reviewing


Some journals ask reviewers not to reveal themselves. A review process in which the reviewers are anonymous, unless they choose not to be, makes sense. But why shouldn’t reviewers be free to reveal themselves if they wish?

Twice, I have received non-anonymous reviews. In both cases, receiving the non-anonymous review was a thrill. Both reviewers were researchers I highly respected, and their positive opinion of my work meant a lot to me. In one case, the reviewer asked the journal editors to forward a signed review. In the other case, the reviewer sent me e-mail directly with the review attached. That review, while positive, had many excellent suggestions for revisions. Receiving the review more than a month prior to receiving the packet of reviews from the journal enabled us to get a head start on revising the paper, which was the reviewer’s stated reason for sending it to us directly.

I do not know why some journals prohibit reviewers from revealing their identities.

Exploring diversity of SIGIR


I have been curious about the evolution of research interests in the IR community for a while, and have recently decided to do something quantitative about it. My plan is to track how different aspects of the field wax and wane over the course of the conference series. To start off, I decided to compare SIGIR 2010 with SIGIR 2000. The comparison is somewhat arbitrary, but I wanted to begin with something topical (relevant?).
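As a rough illustration of the kind of comparison I have in mind, here is a minimal sketch (my own, not the actual analysis) that counts how often terms appear in the paper titles of each year and reports the terms whose relative frequency shifted most. The title lists, stopword set, and function names are placeholders, not data from either conference.

```python
from collections import Counter

# Placeholder stopword list; a real analysis would use a proper one.
STOPWORDS = {"a", "an", "and", "for", "in", "of", "on", "the", "to", "with"}

def term_frequencies(titles):
    """Relative frequency of each lowercased, non-stopword title term."""
    counts = Counter()
    for title in titles:
        for term in title.lower().split():
            term = term.strip(".,:;()")
            if term and term not in STOPWORDS:
                counts[term] += 1
    total = sum(counts.values()) or 1
    return {term: n / total for term, n in counts.items()}

def topic_shift(titles_old, titles_new, top_k=10):
    """Terms whose relative frequency changed most between the two years."""
    old, new = term_frequencies(titles_old), term_frequencies(titles_new)
    deltas = {t: new.get(t, 0.0) - old.get(t, 0.0) for t in set(old) | set(new)}
    return sorted(deltas.items(), key=lambda kv: abs(kv[1]), reverse=True)[:top_k]

if __name__ == "__main__":
    # Invented example titles purely for illustration; substitute the real
    # SIGIR 2000 and SIGIR 2010 title lists here.
    sigir_2000 = ["Document clustering for ad hoc retrieval",
                  "Language models for relevance feedback"]
    sigir_2010 = ["Learning to rank with click data",
                  "Evaluating search result diversification"]
    for term, delta in topic_shift(sigir_2000, sigir_2010):
        print(f"{term:25s} {delta:+.3f}")
```

Richer text (abstracts or full papers) and proper topic modeling would of course give a better picture; the point here is only the shape of the comparison.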


PLoS


Last week, I went to an SF Bay Area ACM Chapter talk by Peter Binfield and Sara Wood of PLoS, in which they covered the motivation for establishing the PLoS journals and talked about some of the challenges of running the operation. PLoS is a non-profit publisher of scientific and medical information that arose out of a desire to reduce journal subscription costs for academic libraries.

PLoS publishes six specialized open-access journals, and one additional uber-journal, PLoS ONE, that includes everything else. While they refer to themselves in print (and in the talk) as publishers of scientific research literature, they in fact appear to be focused more narrowly on the biomedical literature.


Is Computer Science so different?


There was an interesting article in CACM discussing an idiosyncrasy of computer science I’ve never totally wrapped my head around: conferences are widely considered higher-quality publication venues than journals. Citation statistics in the article bear this perception out. My bias towards journals reflects my background in electrical engineering. But I still find it curious, having now spent more time as an author and reviewer for both ACM conferences and journals.

I think that journals should contain higher-quality work. First, there is generally no submission deadline, and page limits are less restrictive. This should mean that authors submit their work when they feel it is ready, and that they can presumably detail and validate it with greater completeness. Second, the review process is usually more relaxed. When I review for conferences, I am dealing with several papers in batch mode; for journals, papers are usually reviewed in isolation. And when the conference PC meets, the standards become relative: the best N papers get in, regardless of whether the Nth paper really deserved acceptance or the (N+1)th deserved rejection, since N is often predetermined.

Is this a good thing? Is CS that different from (all?) the other fields that value journals more? On the positive side, there’s immense value in getting work out faster (publication lag being journals’ downside) and in directly engaging the research community in person. No journal article can stand in front of an audience to plead its case to be read (with PowerPoint animations, no less), and this may better suit a rapidly changing research landscape. On the other hand, we may be settling for less complete work. If conferences become the preferred publication venue, then the eight-to-ten-page version could be the last word on some of the best research. Or the result may simply be a tendency toward quantity at the expense of quality. Long ago (i.e., when I was in grad school), a post-doc in the lab told me that if I produced one good paper a year, I should be satisfied with my work. I’m not sure that would pass for productivity in CS research circles today.

And this dovetails with characterizations of the most selective conferences in the article and elsewhere. Many of the most selective conferences are perceived to prefer incremental advances over less conventional but potentially more innovative approaches. The article’s analysis shows that conferences with 10-15% acceptance rates have less impact than those with 15-20% rates. So if this is the model we are going to adopt, it still needs some hacking…

Not Relevant (but Useful)


Traditional models of academic publishing have been under attack from a number of directions, with factors such as the decreasing cost of publication and dissemination leading to the proliferation of online journals and alternative publishing models. One such alternative, straddling the border between blog and refereed publication, is Not Relevant, a web site recently created by Ian Soboroff as a venue for publishing and discussing work related to information retrieval that might have been rejected by traditional publication venues.

The goal of Not Relevant is to provide a novel dissemination venue for research in information retrieval, particularly when that research does not fit well in existing channels. Not Relevant strives to disseminate research openly, to get it into the wild quickly, and to foster open, public discussion of it.


Reviewer Operating Characteristic


David Karger made an interesting proposal on the Haystack blog about the efficiency of CHI reviewing. Using an ROC analysis of reviewer scores for CHI 2010, he found that when the first two reviews agree that a paper scores below 2 out of 5, there is no need to solicit a third review for it. While this method would have caused the rejection of 6 of about 300 papers that were not actually rejected, it would save almost 500 reviews.
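To make the rule concrete, here is a minimal sketch (my own illustration, not Karger’s analysis or code) of one reading of the policy: if the first two reviews both score a paper below 2 out of 5, reject it without soliciting a third review. The scores, acceptance flags, and function name are hypothetical.

```python
def simulate_triage(papers, threshold=2.0):
    """papers: list of (score1, score2, score3, accepted) tuples, where
    `accepted` is the outcome of the full three-review process.
    Returns (third_reviews_saved, false_rejections)."""
    saved = 0
    false_rejections = 0
    for r1, r2, _r3, accepted in papers:
        if r1 < threshold and r2 < threshold:
            saved += 1                 # the third review is never requested
            if accepted:
                false_rejections += 1  # the full process would have kept it
    return saved, false_rejections

if __name__ == "__main__":
    # Toy data: (review1, review2, review3, accepted-by-full-process).
    submissions = [
        (1.5, 1.0, 2.0, False),
        (1.0, 1.5, 4.5, True),   # triage would wrongly reject this one
        (3.0, 2.5, 4.0, True),
        (4.0, 1.0, 2.0, False),
    ]
    saved, wrongly_cut = simulate_triage(submissions)
    print(f"Third reviews saved: {saved}")
    print(f"Papers rejected that the full process accepted: {wrongly_cut}")
```

Run over the actual CHI 2010 scores, a simulation along these lines is what would yield the tradeoff quoted above: reviews saved versus papers cut that the full process would not have rejected.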

The question is, is this tradeoff worth it?


SIGIR Reviews as Pseudo-Relevance Feedback


Some ACM conferences such as CHI offer authors an opportunity to flag material misconceptions in reviewers’ perceptions of submitted papers prior to rendering a final accept/reject decision. SIGIR is not one of them. Its reviewers are free from any checks on their accuracy from the authors, and, to judge by the reviews of our submission, from the program committee as well.

Consider this: We wrote a paper on a novel IR framework that we believe has the potential to greatly increase the efficacy of interactive Information Retrieval systems. The topic we tackled is (not surprisingly) related to issues we often discuss on this blog and on the IRGupf blog, including HCIR, Interactive IR, Exploratory Search, and Collaborative Search. In short, these are all areas that could be well served by an algorithmic framework that supports greater interactivity.


The TOCHI Option


Many people in the CHI community are aware of the range of problems with the CHI conference review process, which tries to cram 1,300 or more submissions through a rather small reviewer pool with the goal of selecting the interesting and the important while filtering out inappropriate or unfinished work. Needless to say, the process is often imperfect.

There have been many laments and calls for change (e.g., here, here, here), and some recent positive changes in the way the conference is run.


Picking conferences


Selecting a venue for publishing your research is often a non-trivial decision that involves assessing the appropriateness of venues for the work, the quality of reviewing, and the impact that the published work will have. One of the metrics that has been used widely to estimate the quality of a conference is the acceptance rate. It is certainly one of the metrics that we at FXPAL use to decide where to submit papers.

But judging a conference by acceptance rate alone is not always appropriate: acceptance rates are only approximately correlated with paper quality, and they vary considerably among conferences. For example, a marginal paper at a smaller conference such as JCDL is typically of the same quality as a marginal paper at a larger conference such as CHI, even though the acceptance rates differ by about a factor of two. Acceptance rates for small or emerging conferences are often higher because the work comes from a smaller community and does not attract as many opportunistic submissions as the better-known conferences do.

So is there a more robust way to assess the quality of a conference?


Does IP matter?


Panos Ipeirotis recently wrote about the confusing state of affairs with respect to intellectual property at his University. In some sense, this is ironic, since the whole point of a University is to produce intellectual property. But I suppose the question isn’t really one of production, but rather of distribution and of consumption. It’s clear that the faculty and students who develop the ideas should own (i.e., receive credit for) those ideas. But once an idea is published, how it gets used is a different story.

With others (e.g., Christopher Browne) I have often wondered why a public university (or a private one that receives significant federal funding for research) has any rights to patent the results of its research. After all, government employees are not allowed to patent the results of their work done for the government; why should government-funded work at universities be different?

Furthermore, does it matter to a University to hold patents, particularly software patents?
