Trying to fight back the slow death of boredom on a long plane ride home from a long academic conference, I came across an interesting article by Jilin Chen and Joseph A. Konstan in last month’s Communications of the ACM. They look at the relationship between the acceptance rate and the “impact factor” of computer science conferences, where the “impact factor” is measured by the average number of citations within two years for papers published at the conference.
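The metric itself is just a mean. As a minimal sketch in Python (the function name and the per-paper citation counts are invented for illustration, not taken from the article):

```python
# "Impact factor" in the article's sense: the average number of
# citations a conference's papers receive within two years.

def impact_factor(citations_per_paper):
    """Mean citations per paper; input is one two-year citation
    count per published paper."""
    return sum(citations_per_paper) / len(citations_per_paper)

some_conference = [12, 7, 0, 3, 25, 4]   # hypothetical counts
print(impact_factor(some_conference))    # → 8.5
```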
[A note for non-computer scientists: almost alone among academic disciplines, computer science does not put a strong emphasis on journal articles as a measure of research productivity. Most significant results are first published in "top" conferences like POPL, NIPS, and CRYPTO. Journals typically publish expanded versions of conference papers, years after the fact.]
Unsurprisingly, the authors find a strong inverse correlation between acceptance rates and citation rates. But there is something interesting here: the best papers at the most selective conferences are cited slightly less often than the best papers at conferences that are merely highly selective.
As the authors put it: “The top-cited papers from 15%–20%-acceptance-rate conferences are cited more often than those from 10%–15% conferences. We hypothesize that an extremely selective but imperfect (as review processes always are) review process has filtered out submissions that would deliver impact if published. This hypothesis matches the common speculation, including from former ACM President David Patterson, that highly selective conferences too often choose incremental work at the expense of innovative breakthrough work. Alternatively, extremely low acceptance rates might discourage submissions by authors who dislike and avoid competition or the perception of there being a ‘lottery’ among good papers for a few coveted publication slots. A third explanation suggests that extremely low acceptance rates have caused a conference’s proceedings to be of such limited focus that other researchers stop checking it regularly and thus never cite it.”
This brings to mind a contentious discussion from last winter about the POPL reviewing process. POPL doesn’t even fit into the most selective category—its acceptance rate varies between 15% and 25%—but the consensus in the discussion seemed to be that too many good papers were being rejected for bad reasons, and that an acceptance rate in the 30–40% range could be adopted without significantly affecting the average quality of the accepted papers.
What would happen if every selective conference decided to increase its acceptance rate by 5–10 percentage points? It seems likely that the more selective conferences would “steal” the best papers from less selective conferences in the same area, leaving the least selective conferences with fewer worthwhile papers to publish. Perhaps counter-intuitively, the correlation between acceptance and citation rates would then get even stronger.
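A toy model makes that last point concrete. Suppose each paper has a single “merit” score that drives its citations, and conferences, from most to least selective, greedily accept the best remaining papers. Everything below (the `assign` function, the merit scores, the capacities) is my own illustration, not from the article or the study:

```python
# Toy model: papers are merit scores; conferences, ordered from most
# to least selective, greedily take the best remaining papers.

def assign(merits, capacities):
    """Return the mean merit of each conference's accepted papers,
    filling conferences in order of selectivity."""
    ranked = sorted(merits, reverse=True)
    means, start = [], 0
    for cap in capacities:
        batch = ranked[start:start + cap]
        means.append(sum(batch) / len(batch))
        start += cap
    return means

papers = list(range(100))                # merit scores 0..99
before = assign(papers, [10, 20, 30])    # current slots per venue
after = assign(papers, [15, 30, 30])     # top two venues expand

print(before)   # [94.5, 79.5, 54.5]
print(after)    # [92.0, 69.5, 39.5]
```

In this toy run the bottom venue’s average drops from 54.5 to 39.5 once the venues above it absorb more of the good papers, and the spread between the top and bottom venues widens—which is exactly the strengthened correlation predicted above.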