Thursday, November 16, 2017

Pre-Publication Review: Validity vs. Significance

A fellow researcher was recently telling me about their frustrating experience with a journal: their paper was rejected because reviewers said it wasn't "significant," yet the reviewers didn't bother to explain why they thought so.

This struck a chord with me, and it made me think about the two fundamentally different ways I see peer reviewers approach scientific papers, which I think of as "validity" and "significance."
  • "Validity" reviewers focus primarily on the question of whether a paper's conclusions are justified by the evidence presented, and whether its citations relate it appropriately to prior work.
  • "Significance" reviewers, in addition to validity, also evaluate whether a paper's conclusions are important, interesting, and newsworthy.

I strongly favor the "validity" approach, for the simple reason that you really can't tell in advance which results are actually going to turn out to be scientifically important. You can only really know by looking back later and seeing what has been built on top of them and how they have moved out into the larger world.

Science is full of examples:
  • Abstract mathematical properties of arithmetic groups turned out to be the foundations of modern electronic commerce.
  • Samples contaminated by sloppy lab work led directly to penicillin and antibiotics.
  • Difficulties in dating ancient specimens exposed the massive public health crisis of airborne lead contamination.

The significance of these pieces of work is obvious only in retrospect, often many years or even decades later. Moreover, for every such example, there are myriad things that people thought would be important but that didn't turn out that way after all. Validity is thus a much more objective and data-driven standard, while significance is much more relative and a matter of personal opinion.

There are, of course, some reasonable minimum thresholds, but to my mind those come down to the question of relating a paper appropriately to prior work. Likewise, a handful of journals are, in fact, intended to be "magazines," where the editors' job includes picking and choosing a small selection of pieces to be featured.

Every scientific community, however, needs its solid bread-and-butter journals (and conferences): the ones that don't try to do significance fortune-telling to select a magic few, but instead focus on validity, expect their reviewers to do likewise, and are flexible in the amount of work they publish. Otherwise, the community is likely to starve itself of the unexpected results that will become important five or ten years down the road, and to become vulnerable to parochialism and cliquishness as researchers jockey and network for position in "significance" judgments.


Those bread-and-butter venues are the ones I prefer to publish in, and I am fortunate that my career does not depend on shooting for the "high-impact" magazines that try to guess at importance. I'm still happy to take a swing at high-impact publications, and happy to support the needs of colleagues in more traditional academic positions, for whom those articles matter more. My experience with such journals, however, has mostly amounted to being judged "not what we're looking for right now." So, for the most part, I am quite content to stay in the realm of validity and to publish in the solid venues that form the backbone of every field.
