September 18, 2018

Practical Peer Review

Attention conservation notice: An exhortation to the young to demonstrate a literally-academic virtue which I myself find hard to muster.

Written a few years ago, and excavated from the drafts folder because I was preaching the same sermon in e-mail.

Having found myself repeating the same advice with more than usual frequency lately, I thought I would write it down. The advice is about the importance of grasping, or really of making part of one's academic self, two truths about peer review.

  1. The quality of peer review is generally abysmal.
  2. Peer reviewers are better readers of your work than almost anyone else.

The first truth will speak for itself to any academic — or, if you're just starting out, trust me, it will soon. Drawing a veil over reports which are mere products of nepotism and intrigue *, referee reports are often horrible. The referees completely fail to understand ideas we've adapted to the meanest understanding, they display astonishing gaps in their knowledge, and lots of them can't (as my mother puts it) think their way out of a wet paper bag. Even if you discard these as mere dregs, far too many of the rest seem to miss the point, even points which we've especially labored to sharpen. Really good, valuable referee reports exist, but they are vanishingly rare.

The second truth is perhaps even more depressing. Even making all allowances for the first, your referees have (probably) read your manuscript with more attention, care, sympathy and general clue than most other readers will muster. In the first place: most papers which get published receive almost no attention post-publication; hardly anyone cites them because hardly anyone reads them. In the second place: if one of your papers somehow does become popular, it will begin to be cited for a crude general idea of what it is about, with little reference to what it actually says.

I hope readers will forgive me for illustrating that last notion with a personal reference. My two most popular papers, by far, are both largely negative. (I wish this were otherwise.) One of them might as well have been titled "So, you think you have a power law, do you? Well, isn't that special?", and the other "A social network is a machine for producing endogenous selection bias". Naturally, a huge fraction of their citations come from people using them as authorities to say, respectively, "Power laws, hell yeah!" and "I can just see peer effects". It's actually not uncommon for those papers to be cited as positively endorsing techniques they specifically show are unreliable-to-worthless. This has put me in the odd position, as an anonymous referee myself, of arguing with authors about what is in my own papers.

None of this should be surprising. One of my favorite books is one of the very few thoroughly empirical contributions to literary criticism, I. A. Richards's Practical Criticism. In an experiment conducted over several years in the 1920s, Richards took a few dozen poems, typed up in a uniform format and with identifying information removed, and presented them to literature students at Cambridge University, collecting their "protocols" of reaction to the poems. It is really striking just how bad the students were at grasping even the literal sense of the poems, never mind providing any sort of sensible interpretation or reaction. And these were, specifically, students of literature at one of the premier institutions of higher learning in the world. As Richards said (p. 310), anyone who thinks their alma mater could do better is invited to try it **. Poems are not, of course, scientific papers, and I don't know of anyone who has done a translation of Richards's protocols to academic peer review. But I know of no reason to think highly-educated people are systematically much better at reading papers than poems.

The moral I would draw from this is not to seek a world without referees. It is this: whatever your referees find difficult, confusing or objectionable, no matter how wrong they might be on the merits, will give many of your other readers at least as much trouble. Since science is not about intellectual self-gratification but the advancement of public knowledge, this means that we have to take deep breaths, count backwards from twenty and/or swear, and patiently attend to whatever the referees complain about. If they say you're unclear, you were, by that very token, unclear. If they say you're wrong, you have to patiently, politely, figure out why they think that, and re-express yourself in a way which they will understand. Anger or sarcasm, however momentarily gratifying (and wow are they momentarily gratifying), will not actually change anyone's mind, and so will not actually serve your long-term goal of persuading your readers of your conclusions.

"When the referees have a problem, there's a problem" is, quite literally, one of the most ego-destroying lessons of a life in science, but I am afraid it is a lesson, and the sooner it's absorbed the better.

*: Vanishingly rare, in my experience, but I am here to tell you that it does happen, and that posting an arXiv version with an inarguable date-stamp before you submit is always a good idea. ^

**: Admittedly, this was before access to higher education exploded after WWII, thereby driving up the average intellectual level of university students, but replications in the 1970s were not noticeably more encouraging. (I would be extremely interested in more recent replications.) ^

Learned Folly

Posted at September 18, 2018 23:25 | permanent link

Data over Space and Time, Lecture 6: Optimal Linear Prediction

In which we see how to use linear models without assuming that they are correct, or that anything at all is even remotely Gaussian.


(R Markdown source file)
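
As a minimal sketch of the lecture's theme (my own toy example, not taken from the slides; the simulated data and variable names here are made up for illustration): the mean-square-optimal linear predictor of Y from X has slope Cov(X,Y)/Var(X) and intercept E[Y] - slope*E[X], whether or not the true regression function is linear or anything is Gaussian, and ordinary least squares estimates exactly those quantities.

    ## Optimal linear prediction without linearity or Gaussianity:
    ## lm() recovers the plug-in covariance formula for the best
    ## linear approximation to the (nonlinear) regression function.
    set.seed(6)
    n <- 1e4
    x <- rexp(n)                  # decidedly non-Gaussian predictor
    y <- sqrt(x) + rt(n, df = 3)  # nonlinear signal, heavy-tailed noise
    slope <- cov(x, y) / var(x)   # plug-in version of Cov(X,Y)/Var(X)
    intercept <- mean(y) - slope * mean(x)
    c(intercept, slope)
    coef(lm(y ~ x))               # same numbers, up to floating-point error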

Corrupting the Young; Enigmas of Chance

Posted at September 18, 2018 22:50 | permanent link

Three-Toed Sloth