Monday, December 10, 2012

The International Journal of Mystery

Hi folks... I had a lovely vacation away from the internet last week, and now I'm back with another batch of scientific philosophizing.  Lots of discussions of papers queued up, but that will keep a little longer...

Recently, a junior colleague of mine was telling me about a journal publication he's working on, and mentioned he was a bit concerned because he wasn't sure whether the journal was actually any good.  To my great shame, the first words out of my mouth were "What's the impact factor?"  To my astonishment, his immediate reply was: "What's an impact factor?"

I've been thinking more about this since.  Could I not have done something even slightly more worthy than immediately falling back on the common bugaboo of science?  After all, I don't generally pay all that much attention to impact factor either, and certainly can't quote numbers for most of the places I've published.  Is it so odd that my colleague didn't know about impact factors?  Moreover, I receive pseudo-personalized invitations to publish in various international journals every day, and I ignore most of them as academic spam without even bothering to look up their impact factors.  How do I actually judge the quality of a journal when I'm deciding whether to submit there?

First, for those of you so fortunate as to join my colleague in his innocence, let me explain.  Impact factor is a number used as a way of measuring how important a scientific journal is to a field of research---and therefore as a proxy for measuring how important a piece of research is by the company it keeps.  It is typically calculated from journal articles indexed by Thomson Reuters, as the mean number of citations in a given year to the articles that a journal published in the prior two years.  You're probably already thinking of objections: Why count only citations from journals?  Who the hell is Thomson Reuters, and how do they decide what's indexed?  Why two years---don't we care if things stand the test of time?  Can't people manipulate the system?  These, dear reader, are only the tip of the iceberg, and there's a long tradition of scientists deriding impact factor as a metric, making up alternative metrics that address some of the problems while creating new ones, and generally adding to the chaos of standards.  Nevertheless, impact factor, like Microsoft Word, is the lowest common denominator that many are forced to bow to, by their institutions, by their funders, by their tenure committees...
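To make the arithmetic concrete, here is a minimal sketch of the two-year calculation described above.  The journal and all its numbers are invented for illustration; real impact factors are computed only over citations appearing in Thomson Reuters' indexed venues, with further rules about which items count as "citable."

```python
# Hypothetical illustration of the two-year impact factor arithmetic.
# All figures are invented, not real journal data.

def impact_factor(citations_this_year, citable_items_prior_two_years):
    """Mean citations in year Y to a journal's articles from Y-1 and Y-2.

    citations_this_year: citations received in year Y (from indexed
        journals) to items the journal published in Y-1 and Y-2.
    citable_items_prior_two_years: number of citable items the journal
        published across Y-1 and Y-2.
    """
    return citations_this_year / citable_items_prior_two_years

# Suppose a journal published 40 articles in 2010 and 60 in 2011,
# and in 2012 those 100 articles were cited 250 times in indexed journals.
print(impact_factor(250, 40 + 60))  # 2.5
```

The objections in the paragraph above map directly onto the two inputs: what counts toward the numerator (only indexed journals), what counts toward the denominator (which items are "citable"), and the two-year window are all editorial choices, which is part of why the number is so easy to argue with.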

Let's avoid going any further down that tempting rathole of a discussion.

Instead, let's return to the question at the root of the whole discussion:
Is this journal any damned good?
First off, what do we mean by "good" when we're talking about journals?  In my view, this basically boils down to three things, in order from most to least important:
  1. Will my reputation be enhanced or tarnished by publishing here?  Some journals will add lustre to your work without anybody even reading it.  Rightly or wrongly, we primates love argument from authority.  Conversely, if you publish in a journal that's a total joke, people will wonder what's wrong with your work that you couldn't put it somewhere meaningful.
  2. Will my work be read by lots of people?  I believe that most articles will only ever be noticed, let alone read, by people who found them by Googling for keywords in a literature search.  And your close colleagues should know about your work because you talk about it together.  Each community, though, typically has one or two publications that people just read because they feel it represents the pulse of their scientific community.  Get into one of those and you'll be seen by orders of magnitude more readers.
  3. Will I be competently reviewed and professionally published? Amongst the great herd of middling journals, some are a pleasure to work with and some are a total train wreck.  In the end, though, if you get reviewers who give good feedback and the actual mechanics of publication are handled professionally, that's a nice bonus.
Ideally, impact factor ought to tell you about #1 and #2, but in practice I find it really only tells me about extreme highs.

So, what is it that I actually do in order to tell if a never-before-heard-of journal is any good?  Well, first I check the editorial board: Do I know them?  Do I know their institutions?  Of course, the really big names in a field are often not on boards, or are on boards only ceremonially, since they're too busy.  I tend to look for the presence of solid mid-rank contributors and decent institutions---the sort of folks who I find form the strongest backbone of professional service.  But if nobody I've heard of in the field, and nobody at a reasonable institution, cares enough to help run the journal, then why should I expect that publishing there will make any impact?

If the editorial board hasn't convinced me one way or another, then maybe I'll check the impact factor, but really that's just a +/- test: if it has an impact factor of at least 1.0, that's a good sign, but a hazy one and not necessary, since many good venues have no impact factor and impact factor can be gamed.  More important is how long something has been around: anything that has survived at least a decade is likely to be solid (though again, not necessarily).

As for black marks: if a never-heard-of-it journal seems to have an extremely random or broad scope, then what could its community possibly be?  Those I always find suspicious, since it feels like they are just trolling for submissions.  Much worse than that, though, is if the publisher is a known bad actor, especially somebody who spams me repeatedly.  I'm sorry, Tamil Nadu, but your academic community will be forever tarred in my eyes by the people who fill my inbox with poorly targeted spam.

Sufficient? Hardly. But those, at least, are my own heuristics for dividing the worthy and the dubious when approaching yet another new journal. I suspect that this isn't a problem for people who don't do as much interdisciplinary work as I do, and that it was a lot easier a few decades ago when the number of journals was much lower. But think: if it's this hard to decide where to write, how much worse is the problem of finding what to read?  And that is a discussion for another time...
