Saturday, March 02, 2013

Is Paper-Writing Season Real?

As I mentioned in my last post, one of the things I just struggled my way through was a fierce batch of paper deadlines.  All told, there were eight paper deadlines in less than a month, meaning that even with excellent and responsible co-authors and triaging two papers, I still had a rather intense several weeks.

I feel like this sort of "paper-writing season" happens to me on a regular basis.  Certainly, every year around January/February feels like a time of madness, and there are other similar pockets of crunch time that show up at other times, though perhaps less consistently.  But is this phenomenon real, or just an artifact of my own time management and retrospective view on the matter?  Being a sucker for an occasional graph, and certainly for the ability to procrastinate on reviewing papers a little bit more, I made a list of the past year's worth of deadlines, both for conferences and workshops (which are generally regular in when they occur) and for journals and book chapters (which are generally irregular and presumably independent).  It looks like this:

[Chart: Jake's Paper Deadlines, Mar. 2012 - Feb. 2013]
This includes both the deadlines where I actually submitted something and those for conferences I persistently care about and track but did not actually submit to (e.g., those triaged submissions from last month).  The journal and book chapter deadlines include all of the revision deadlines as well, so those publications contribute 1-3 deadlines to the collection, depending on how many iterations happened and how many fell within the sample period, given the months- to years-long time scale for journal review and revision.  I didn't include the camera-ready deadlines from conferences and workshops, since the level of revision required for those is generally much more lightweight, and a few are even abstract-based, requiring no revision at all.

The verdict?  Well, let's see... I don't usually look for statistical significance in data sets this small or this poorly controlled, so it's going to take a little bit of work to figure out.  Usually I'm dealing with excessively large numbers of data points or nice tight distributions, and if I even have to ask whether a difference is significant, then the result is probably of too poor quality for me to use in any case.  But the search for low p-values is practically a rite of passage in most disciplines, so I guess it's about time that I went on a p-value fishing expedition of my own.

Matlab's built-in easy-bake significance-testing functions all seem to assume Gaussians, rather than the case we should be considering here, which is a uniform random distribution over the months of the year.  So it's off to spend a little quality time with Wikipedia, which has an excellent article giving pretty much exactly what I need.  After futzing around with the numbers for a while, I think I've managed to calculate things correctly... and it's surprising just how primitive and easy to screw up these tests are.  Bottom line, though, I think I've got my numbers right, and they give me the following: conference deadlines are distributed randomly throughout the year (p=0.41) and journals are significantly non-random (p=0.018).
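For anyone who wants to run the same sort of check on their own deadline list, here is a rough sketch in Python/SciPy (rather than Matlab, and not necessarily the exact test I ran) of a chi-square goodness-of-fit test against a uniform spread over the twelve months.  The monthly counts below are made-up placeholders, not my real data, and with counts this small the chi-square approximation is itself a bit shaky, which is part of why these tests are so easy to screw up.

from scipy import stats

# Hypothetical conference/workshop deadlines per month, Mar. 2012 - Feb. 2013.
# Placeholder counts for illustration only, not my actual data.
conference_counts = [1, 2, 0, 1, 1, 0, 2, 1, 0, 1, 3, 3]

# Null hypothesis: deadlines fall uniformly across the twelve months.
# scipy.stats.chisquare defaults to equal expected frequencies in every
# bin, which is exactly the uniform-over-months null we want here.
chi2, p = stats.chisquare(conference_counts)
print("chi2 = %.2f, p = %.3f" % (chi2, p))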

That first result is somewhat surprising, but I believe it.  Despite the occasional hell that is January/February, with six deadlines in two months, the actual month-to-month variation just isn't that high.  The second is a good example of why you should never believe a p-value without interrogating it fiercely.  You see, it happens that last year we submitted two papers to the same special journal issue in April.  If I drop just that single duplication, knocking the journal count for April from five down to four, we end up instead with p=0.19, an order of magnitude worse on the magical significance scale.  If you wanted, you could say that the significance test was doing exactly its job and detecting that there was a non-random correlation; I would say, however, that just a little bit of noise (a single doubled deadline) was enough to completely mess with our ability to ask the real question (are paper deadlines randomly distributed?), and I'll persist in my stance that any effect that requires a significance test to see is a pretty weak effect, scientifically.
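To make that fragility concrete, here is the same kind of check on a hypothetical journal/book-chapter tally; only the April count of five is taken from the story above, and the other months are placeholders.  Running it once as-is and once with the doubled April deadline dropped shows how much a single data point can move the p-value.

from scipy import stats

# Hypothetical journal/book-chapter deadlines per month, Mar. 2012 - Feb. 2013.
# Only April's count of five reflects the post; the rest are placeholders.
journal_counts = [0, 5, 1, 0, 1, 2, 0, 1, 0, 1, 0, 1]

chi2, p = stats.chisquare(journal_counts)
print("with the doubled April deadline:    p = %.3f" % p)

journal_counts[1] -= 1  # drop one of the two special-issue submissions
chi2, p = stats.chisquare(journal_counts)
print("without the doubled April deadline: p = %.3f" % p)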

The bottom line for this investigation, then, is that a) there's no point in speculating about interesting external effects that might cause my deadlines to bunch up, and b) just because the clumps arrive randomly doesn't mean they won't be seriously intense if I don't prepare for them well in advance, and that's hard to do when journal revisions are part of the mix.  There is a conference paper-writing season for me, it comes at the beginning of the year, and just a few randomly occurring journal interactions are enough to tip it over the edge from intense to excruciatingly stressful.

Do you have a paper-writing season, or a similar deadline-fest of your own, dear readers?
