Publishing Possibilities

The debate over scholarly publishing continues to percolate along, fueled by an article in CACM by Lance Fortnow, a recent blog post by Daniel Tunkelang on The Noisy Channel, and the subsequent comments. The issue in question is whether the established peer-review process is effective and efficient at identifying good work, or whether the peer-reviewed journal or conference is an artifact of a time when the costs of publication and distribution were high. The argument for online publication is certainly compelling: there are many free online journals, and book publishers such as Morgan & Claypool are already publishing primarily or exclusively in digital form.

Clearly the mechanics of publication and distribution can be digitized easily. The question now is whether the publication model should be filter->publish or publish->filter. That is, should publishers solicit reviews and filter submissions, publishing only those works that earn the approval of a few reviewers, or should (almost) everything be published, with a collaborative filtering process applied by readers to assess the merits of each work? In other words, should the measure of quality be obtained prior to publication, or some time after? In some sense, we already measure the quality of work after publication by assessing its impact, and the impact of the venue in which it was published.

So how do we implement the publish->filter model?

Journals

Journals can adopt the Amazon model: all the articles are available, along with reviews and ratings. Although this seems somewhat wacky, it might be made to work, assuming the journal is a non-profit (or not-for-profit) entity. One interesting question is whether a journal could charge a nominal fee for publishing an article.

Another aspect of the publication process that needs to be addressed is reviewing. While the publish->filter model may be interpreted to mean that reviewers are not necessary, I would argue quite the opposite: the more opportunity there is to publish, the more valuable the reviews become. A good movie critic can describe a movie in a way that helps viewers far more efficiently than even a large group of movie-goers can, making the merits or deficiencies of a movie apparent to the larger population. The industry depends on such critics.

Similarly, an academic critique model that couples open publication with a professional review corps could work well for journals, and perhaps for conferences as well. To make this work, however, the efforts of the reviewer (critic) have to be rewarded. We need a mechanism through which people receive credit not only for the quantity of their reviewing but also for its quality. Perhaps we can borrow from Amazon here as well: to improve the usefulness of its product reviews, Amazon has created a review rating system. Any reader can rate any review for utility, and can choose to see only the highly-rated reviews. This mechanism can be used not only to find quality publications, but also to identify quality reviewers at the same time.
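
To make this concrete, here is a minimal sketch of such a review-rating mechanism, written in Python purely for illustration; the class and function names are my own invention, not those of any existing system:

    from dataclasses import dataclass

    @dataclass
    class Review:
        """A signed review of a published article."""
        reviewer: str
        article_id: str
        text: str
        helpful_votes: int = 0
        total_votes: int = 0

        def rate(self, helpful: bool) -> None:
            """Record one reader's 'was this review useful?' vote."""
            self.total_votes += 1
            if helpful:
                self.helpful_votes += 1

        @property
        def utility(self) -> float:
            """Fraction of readers who found the review useful."""
            return self.helpful_votes / self.total_votes if self.total_votes else 0.0

    def highly_rated(reviews, min_votes=5, threshold=0.8):
        """Surface only reviews with enough votes and a high utility score."""
        return sorted(
            (r for r in reviews if r.total_votes >= min_votes and r.utility >= threshold),
            key=lambda r: r.utility,
            reverse=True,
        )

The minimum-vote threshold keeps a review with one lucky vote from outranking reviews that many readers have judged useful; the particular numbers here are arbitrary.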

You could even imagine a model where all papers get “published” in some trivial way by being added to a repository, and then reviewed by various people. Rather than using the publisher’s brand to reflect the value of the publication, the reviewer’s brand could be used instead. Metrics could be collected for each reviewer regarding the correctness and utility of their reviews, and those metrics could then be used to rate the reviewers. This kind of scheme would be both authoritative and democratic, in the sense that good reviewers would be recognized as contributing value, but the barrier to becoming a good (and known) reviewer would be very low. This model differs from the one advocated by this article in that it accepts all manuscripts for publication in a centralized fashion (rather than piecemeal through different journals), and in that it calls for a systematic open review system.
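
Continuing the Python sketch above (again, an illustration under assumed names, not a real system), a reviewer’s reputation could be the average utility of his or her reviews, damped toward a neutral prior so that a single lucky review does not instantly mint a top reviewer:

    from collections import defaultdict

    def reviewer_reputation(reviews, prior_weight=3, prior_score=0.5):
        """Mean review utility per reviewer, damped by a neutral prior.

        `reviews` is an iterable of Review objects as defined earlier;
        `prior_weight` acts as that many pseudo-reviews of score
        `prior_score`, a common smoothing trick (the values are arbitrary).
        """
        totals = defaultdict(lambda: [0.0, 0])  # reviewer -> [utility sum, count]
        for r in reviews:
            totals[r.reviewer][0] += r.utility
            totals[r.reviewer][1] += 1
        return {
            reviewer: (utility_sum + prior_score * prior_weight) / (count + prior_weight)
            for reviewer, (utility_sum, count) in totals.items()
        }

Measuring correctness is harder than measuring utility, and this sketch only captures the latter; how to score correctness is a separate design question.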

Conferences

Conferences are harder to transform, because they happen in a constrained physical space and time: you have to decide who gets to publish and/or present. It is possible to accommodate more presenters by focusing on oral or poster presentations, but parallelizing a conference too much makes it difficult to see what you want to see, and reduces the quality of interaction with presenters. Shortening presentations to increase the number of speakers makes it even harder to put together a coherent talk, particularly for those without strong presentation skills.

In addition, at least in computer science and related disciplines, one reason people publish in major conferences is their selectivity. If anyone can get a paper in, or if all papers are just oral posters, the value of a presented paper as an indicator of achievement is diluted. This in turn may affect people’s willingness to attend conferences, and funding agencies’ willingness to pay the associated expenses. The current expectation in many organizations, predicated on low acceptance rates, is that authors’ expenses to attend a conference will be covered only if a paper is accepted. For example, at FXPAL we are encouraged to submit papers to prestigious ACM and IEEE conferences, but typically won’t get approval for HICSS.

The assumption that conferences with low acceptance rates publish better papers is based on correlation, but in my opinion the correlation is strong enough to merit the practice. We have all had good papers rejected by one conference only to be accepted with high marks by another (or even by the same one the following year). While annoying at the time, this does not greatly affect either the authors’ careers or the conference’s credibility. In the end, the good papers get published anyway.

Perhaps one way to combine the merits of the open review process with the selectivity of the conference is to adopt a model similar to the one started recently by TOCHI, in which articles accepted for publication in the journal may also be presented orally at the subsequent CHI conference. In the open publication model described above, this could be translated into a conference selecting some recently published, well-reviewed papers for oral presentation (assuming the authors are interested in presenting). Thus a conference could be seen as an extra level of achievement beyond mere publication, making it a useful metric for assessing a person’s academic output.
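
In code, that selection step is just a filter and a sort; this sketch assumes each repository record carries a publication date, a mean review rating, and an opt-in flag (all invented fields for illustration):

    from datetime import date, timedelta

    def select_for_presentation(papers, slots, window_days=365):
        """Pick the highest-rated recent papers whose authors opted in.

        `papers` is an iterable of (title, published_on, mean_rating,
        wants_talk) tuples -- a stand-in for the repository's records.
        """
        cutoff = date.today() - timedelta(days=window_days)
        eligible = [p for p in papers if p[1] >= cutoff and p[3]]
        return sorted(eligible, key=lambda p: p[2], reverse=True)[:slots]

The interesting policy questions (how wide the time window is, whether ratings from only a few readers count, how to avoid rewarding popularity over quality) live in the data feeding this function, not in the function itself.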

Workshops

A persistent decline in conference attendance can reduce the financial viability of a conference, thereby reducing or eliminating opportunities for the valuable face-to-face interaction we all expect of conferences. Workshops can fill this role, but they require more travel per unit of face time (one or two days at an event rather than three, four, or more), and funding for them is harder to justify because their impact is unclear. Workshops are also limited in size, and are therefore better suited to emerging fields than to established ones. Still, workshops can serve as focal points for gathering reviews on a group of papers related to a particular theme, and can attract the attention of reviewers who did not participate in the workshop.

Challenges and opportunities

My guess is that changing the way we as a community publish will depend on those who no longer need to publish (e.g., faculty with tenure) establishing new models of scholarship that those early in their careers can follow to achieve recognition. It is unreasonable and unfair to force students and junior faculty to experiment with new publication processes when their performance is assessed by conservative tenure review committees: those willing to experiment would be at a disadvantage compared with those who take the traditional approach. The challenge, then, is how to achieve critical mass for an alternative publication mechanism; the opportunity is to transform for the better the way we communicate the results of our research.

5 Comments

  1. One recommendation Lance makes is to “use archive sites as the main method of quick paper dissemination.” Physics and math are way ahead of computer science in this respect, and their publication venues have been forced to accept arXiv posting as standard practice. Stuart Shieber has a provocative post on the “don’t ask, don’t tell” approach many authors and publishers take towards online distribution of papers.
    http://blogs.law.harvard.edu/pamphlet/2009/06/18/dont-ask-dont-tell-rights-retention-for-scholarly-articles/
    Part of the “growing up” as a field that Lance advocates would be to confront this issue head on, and to reconcile practice with policy.

  2. I am looking into using arXiv to publish proceedings from a workshop we ran last year. The thought is to set up a slippery slope, and push ACM down that slope :-)

  3. […] written about reforming the academic publication process, and having suggested that arXiv.org be used to archive workshop […]

  4. […] to offer an Amazon-style “is this comment useful?” feedback option, through which the quality of comments can be rated. Comments can be useful to draw attention to interesting work, to discuss […]

  5. […] remains the possibility of public, signed (or anonymous) reviews that characterize the publish-then-filter model. The scheme works like this: a paper is published on some open-access site such as arxiv.org. […]
