Selecting a venue for publishing your research is often a non-trivial decision that involves assessing the appropriateness of venues for the work, the quality of reviewing, and the impact that the published work will have. One of the metrics that has been used widely to estimate the quality of a conference is the acceptance rate. It is certainly one of the metrics that we at FXPAL use to decide where to submit papers.
But judging a conference by acceptance rate alone is not always appropriate: acceptance rates are only loosely correlated with paper quality, and vary considerably among conferences. For example, a marginal paper at a smaller conference such as JCDL is typically of the same quality as a marginal paper at a larger conference such as CHI, but the acceptance rates differ by about a factor of two. Acceptance rates for small or emerging conferences are often higher because the work comes from a smaller community and does not attract as many opportunistic submissions as the better-known conferences do.
Citation analysis is a somewhat controversial alternative that is biased toward established venues and suffers from a considerable time lag. Measuring impact as a short-term aggregate citation rate for a conference's articles is also not without its problems, particularly for smaller disciplines. For example, the numbers shown in the CiteSeer reports are hard to compare directly because they span a range of sub-disciplines with very different publication volumes.
I would like to propose an alternative that is robust to conference size and can handle emerging conferences. The idea is to use the reputation of authors to assess the prestige and/or importance of conferences. Prestigious conferences are those in which authors with good reputations choose to publish. Thus if we know who the important authors are, we can aggregate over their publications to derive a measure of conference quality that is independent of its age, size, etc.
The problem is how to establish which authors to select.
I can think of several ways to bootstrap this process:
- Take a poll to see which few conferences are considered prestigious, and use authors from those conferences as seeds.
- Select some “seed” authors who have published frequently over a long period of time, and use those authors to identify conferences, and then include other authors from the same conferences. PoP (via Google Scholar) is another potentially useful source of such authors.
- Start with a few low-acceptance rate conferences to select the set of authors, and then grow to other conferences.
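The second bootstrapping option above can be sketched as a simple fixed-point iteration: start from seed authors, find the venues they publish in, admit other authors who publish enough in those venues, and repeat. This is only a toy sketch under stated assumptions; the `pubs` data, the `min_papers` threshold, and all names are illustrative, not a spec.

```python
# Toy sketch of the seed-and-grow bootstrap described above.
# `pubs` maps each author to the list of venues they have published in;
# the data and the min_papers threshold are illustrative assumptions.

def bootstrap_authors(pubs, seed_authors, min_papers=2, max_rounds=5):
    """Grow a set of reputable authors from a small seed set."""
    authors = set(seed_authors)
    for _ in range(max_rounds):
        # Venues where the current authors publish.
        venues = {v for a in authors for v in pubs.get(a, [])}
        # Admit any author with enough papers in those venues.
        grown = {
            a for a, vs in pubs.items()
            if sum(v in venues for v in vs) >= min_papers
        }
        if grown <= authors:  # fixed point reached, nothing new to add
            break
        authors |= grown
    return authors

pubs = {
    "alice": ["CHI", "CHI", "JCDL", "CHI"],
    "bob":   ["CHI", "WSDM", "CHI", "WSDM"],
    "carol": ["WSDM", "WSDM", "WSDM"],
    "dave":  ["OtherConf"],
}
print(sorted(bootstrap_authors(pubs, {"alice"})))
# -> ['alice', 'bob', 'carol']
```

Whether such an iteration stays stable or expands to swallow every venue is exactly the open question raised below; the `max_rounds` cap is a crude guard against the latter.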
Once a reasonable set of authors is selected, we can score conferences by the number of papers each author published in each conference, normalized by the publication rate of the author and by the size of the conference. A classifier could then be trained on a couple of years' worth of conferences to see how well it predicts subsequent years. This test will be particularly interesting when new conferences (e.g., WSDM, IIiX) emerge.
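The scoring step above can be written down directly: each trusted author contributes, for each venue, the fraction of their own output published there, divided by the venue's size. This is a minimal sketch assuming a `pubs` mapping from authors to venue lists; the data and function names are hypothetical.

```python
from collections import Counter

def conference_scores(pubs, trusted_authors):
    """Score venues by trusted authors' presence, normalized by each
    author's total output and by the venue's size (papers we observe)."""
    size = Counter(v for vs in pubs.values() for v in vs)
    scores = Counter()
    for author in trusted_authors:
        vs = pubs.get(author, [])
        if not vs:
            continue
        for venue, n in Counter(vs).items():
            # Fraction of this author's output at the venue,
            # per paper the venue publishes.
            scores[venue] += (n / len(vs)) / size[venue]
    return dict(scores)

pubs = {
    "alice": ["CHI", "CHI", "JCDL", "CHI"],
    "bob":   ["CHI", "WSDM", "CHI", "WSDM"],
    "carol": ["WSDM", "WSDM", "WSDM"],
}
print(conference_scores(pubs, {"alice", "bob"}))
```

With this normalization a small venue like JCDL can score as highly as CHI if trusted authors devote a comparable fraction of their output to it, which is the size-robustness the proposal is after.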
These are just preliminary ideas, and I don't know whether the system will be stable or will expand to select all conferences. Another factor to consider is whether an author's reputation should be filtered through some kind of time-sensitive kernel, so that people who move out of a field don't continue to affect it.
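One natural choice for such a time-sensitive kernel is exponential decay: each of an author's papers contributes a weight that halves every few years, so authors who have left the field fade out gracefully. The half-life value here is an arbitrary illustration, not a recommendation.

```python
import math

def decayed_weight(pub_years, current_year, half_life=5.0):
    """Author weight from publication years, with each paper's
    contribution halving every `half_life` years."""
    return sum(
        math.exp(-math.log(2) * (current_year - y) / half_life)
        for y in pub_years
    )

# A paper from this year counts fully; one from a half-life ago counts half.
print(decayed_weight([2025], 2025))              # -> 1.0
print(decayed_weight([2020], 2025, half_life=5))  # -> 0.5
```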
It would also be interesting to compare the effectiveness of this approach for disciplines other than HCI/CS to see if different publication volume and authorship models affect the results. For CS and HCI, the data should be readily available through DBLP or CiteSeer; I am not sure about other disciplines. It might also be interesting to model the transmission of reputation to students or co-authors, although some of the student-adviser data might be hard to obtain.
I am planning on experimenting with these ideas, but I am waiting for someone to tell me that this has already been tried 30 years ago and declared no more effective than any other measure of quality that has been applied to publication venues.