Is Computer Science so different?


There was an interesting article in CACM discussing an idiosyncrasy of computer science I've never totally wrapped my head around: conferences are widely considered higher quality publication venues than journals. Citation statistics in the article bear this perception out. My bias towards journals reflects my background in electrical engineering. But I still find it curious, having now spent more time as an author and reviewer for both ACM conferences and journals.

I think that journals should contain higher quality work. In the general case, there is no submission deadline and the page limits are less restrictive. This should mean that authors submit their work when they feel it is ready, and that they can detail and validate it with greater completeness. Secondly, the review process is usually more relaxed. When I review for conferences, I deal with several papers in batch mode; for journals, papers are usually reviewed in isolation. When the conference PC meets, the standards become relative: the best N papers get in, where N is often predetermined, regardless of whether the Nth paper really deserved acceptance or the (N+1)th deserved rejection.

Is this a good thing? Is CS that different from (all?) other fields, which value journals more? On the positive side, there's immense value in getting work out faster (journals' downside being their publication lag) and in directly engaging the research community in person. No journal can stand in front of an audience to plead its case to be read (with PowerPoint animations, no less). And this may better suit a rapidly changing research landscape. On the other hand, we may be settling for less complete work. If conferences become the preferred publication venue, then the eight-to-ten-page version could be the last word on some of the best research. Or it may simply be a tendency towards quantity at the expense of quality. Long ago (i.e., when I was in grad school), a post-doc in the lab told me that if I produced one good paper a year, I should be satisfied with my work. I'm not sure that would pass for productivity in CS research circles today.

And this dovetails with characterizations of the most selective conferences in the article and elsewhere. Many of the most selective conferences are perceived to prefer incremental advances over less conventional but potentially more innovative approaches. The analysis reveals that conferences with 10-15% acceptance rates have less impact than those with 15-20% rates. So if this is the model we will adopt, it still needs some hacking…

5 Comments

  1. […] This post was mentioned on Twitter by Gene Golovchinsky, Ingo Frommholz. Ingo Frommholz said: Thanks, very interesting thoughts! RT @HCIR_GeneG: Posted "Is Computer Science so different?" by Matt Cooper http://bit.ly/cd4Gxa […]

  2. Ironically, bioinformatics journals turn around papers faster than computer science conferences. For instance, the Assoc. for Comp. Ling. conference is the biggy in my field of computational linguistics. Here’s the schedule for this year:

    Feb 15, 2010: paper submissions due

    April 20, 2010: notification of acceptance

    May 12, 2010: camera-ready copy due

    July 11-16, 2010: ACL conference

    Compare that with the turnaround time on papers in journals like Bioinformatics, where you see things like the first paper from this month’s journal:

    Received on February 15, 2010;

    revised on April 20, 2010;

    accepted on April 21, 2010;

    Advance Access originally published online on May 2, 2010.

    Here’s the paper’s page:

    http://bioinformatics.oxfordjournals.org/cgi/content/abstract/26/12/1481

    This paper’s not unusual.

  3. johnjbarton says:

    Yes, CS is different: generally, CS research publication venues are technologically antiquated. Physics (e.g., arXiv) and biology (e.g., PLOS) are at least in the 21st century.

  4. When thinking about this, it is worth highlighting this line from that ACM article: “Highly selective conferences too often choose incremental work at the expense of innovative breakthrough work.” That would seem to be the key issue.

  5. Methodological rigor often trumps innovation. It seems difficult for some people to distinguish fatal flaws that invalidate the work from minor or unimportant deviations from orthodoxy that are not central to the message of a particular paper. But since low acceptance rates are taken as a de facto standard of excellence, there is pressure to reject rather than to accept.

Comments are closed.