- Brexit is depressing, but the #LiteraryBrexit tag on twitter is hilarious
- There are scientists who use the scientific method to study how science journals work. And their work is actually relevant.
- Interesting, yet depressing reading: a fact-file on right-wing terrorism in Germany
- Today’s AI is not intelligent. Thought so
Received September 5, online first June 5, and at least six more months until the piece is assigned to an issue and is paginated. A neat illustration of (some of) the problems with the current system. And no, I don’t have an easy solution.
Being part of the peer review system has a sadomasochistic quality. Nate Jensen’s story about how he had to submit a certain manuscript again and again to different journals to get it published eventually is all too familiar. I don’t keep records as exact as his (would be too depressing), but I remember a single straight accept. I also remember one supposed quick hit (not out in print yet – hope they don’t change their minds) that involved a two-year email conversation between us, a very diligent reviewer, and a baffled editor.
And then there is the cursed manuscript, for which the first analyses were run almost exactly nine years ago. The saga involves one journal that took extraordinarily long to reject and one that gave us an R&R after more than a year – one reviewer had died and so understandably failed to respond to the ever more urgent automated mails from the manuscript submission system. In the meantime, my co-author and I had temporarily lost the will to live and so just ignored the chance of an R&R, only to come back to the wreck of the manuscript three years later (make a wild guess: reject). We have now just submitted a second R&R to journal number four (there may have been another in between which I cannot remember), 14 months after our initial submission to this one.
I used to think that this strategy (affectionately known as “doing a Budge” in some circles, for certain reasons) increases the overall likelihood of getting published even for mediocre manuscripts: If your chance of rejection at any single journal is 0.9, four submissions should bring this down to a more agreeable 0.9⁴ ≈ 0.66. And if you lower your sights and begin to target outlets further down the academic food chain, your chances should be even better.
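The arithmetic behind the “Budge” strategy is a one-liner under the (dubious) independence assumption. A quick sketch in Python, using the 0.9 per-journal rejection rate from above:

```python
# Probability that a manuscript is still unpublished after n independent
# submissions, each with the same per-journal rejection probability.
def p_all_rejected(p_reject: float, n: int) -> float:
    return p_reject ** n

# With a 0.9 chance of rejection at any single journal:
print(round(p_all_rejected(0.9, 1), 2))  # 0.9  -> one submission
print(round(p_all_rejected(0.9, 4), 2))  # 0.66 -> four submissions
```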
But this, of course, assumes that reviews/decisions are independent draws. They are not, as I have learned from my own reviewing: In a reasonably specialised subfield, the number of potential reviewers is small, and the number of people actually doing the bloody business is even smaller. In more than one instance, editors seem to have googled (or otherwise consulted databases or their email records) and contacted both me and my co-author, on the strength of a sufficiently obscure and specialised piece of work, to judge something even more obscure and specialised. If a reviewer does not like your work and rejects it, chances are that the same person and their friends will review its next iteration for another journal. And reviewers are over-burdened: Recognising that you have already seen a manuscript before is like getting a get-out-of-jail-free card. There is a serious temptation not to look for any improvements (and often, there are none).
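Just how quickly the independence assumption breaks down can be illustrated with a toy simulation: if each journal draws a couple of reviewers from the same small pool of specialists, the chance that a resubmission lands on at least one previous reviewer is substantial. The pool size (20) and reviewers-per-round (2) below are made-up illustrative numbers, not estimates for any real subfield:

```python
import random

def p_repeat_reviewer(pool_size: int, per_round: int, trials: int = 100_000) -> float:
    """Estimate the chance that a second submission is seen by at least
    one of the reviewers who handled the first one."""
    pool = range(pool_size)
    hits = 0
    for _ in range(trials):
        first = set(random.sample(pool, per_round))
        second = set(random.sample(pool, per_round))
        if first & second:
            hits += 1
    return hits / trials

# With a pool of 20 potential reviewers and 2 drawn per submission,
# roughly one resubmission in five meets a familiar face.
print(round(p_repeat_reviewer(20, 2), 2))
```

The exact figure here is 1 − C(18,2)/C(20,2) ≈ 0.19, and it climbs steeply as the pool shrinks or the number of reviewers per round grows.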
Thankfully, there is a flip side to it. With the cursed manuscript, there is a kind, recognisable, approving voice who had the bad fortune to review at least two and possibly three versions of the manuscript (not counting the R&Rs). As a reviewer, I recently had the chance to look at a manuscript that I had given a “minor revisions” verdict before, but that the journal had rejected anyway. The author had dealt with my suggestions, shortened and streamlined the manuscript in the most admirable way, and then submitted it to a much better journal, to which I could now recommend publication without revisions. This much more audacious strategy shall be known in some circles, for certain reasons, as doing an inverted Budge.
One particularly annoying aspect of doing reviews for learned journals is that assignments tend to arrive in clusters. Six months ago, I found myself in a bit of a pickle, with loads and loads of requests arriving within a short time. And just five weeks ago, another volley of invitations to review hit my mailbox within the space of hours, in one instance within minutes, which looked suspiciously like a glitch in the matrix. As these systems are fully computerised, automated and increasingly urgent reminders are now clustering in my mailbox, too.
This morning, I got around to reading the first two of them, only to realise that I had already read and rejected them during the last campaign, when they had been submitted to other journals. As I would have done in their stead, the authors had addressed some minor points but left the basic structure as it was, meaning that I could basically cut and paste my old reviews. Something like this has happened to me in the past, but with a single manuscript and a two-year hiatus between the incarnations I reviewed. Getting this twice in a single morning is a little creepy. Now I’m looking forward to my afternoon reading.
Sixteen months ago, we started the Political Science Peer-Review Survey. This week, the input form was shut down. That is about three quarters of a year later than expected, but then again, I underestimated the fallout of my move back to Germany. Moreover, until a few weeks ago there was still a tiny trickle of replies coming in. So far, we have found few major problems with the data. The RA has spotted two instances where the respondent somehow managed to save the data at various stages of the interview, thereby inflating the number of respondents. Moreover, it’s amazing how many political scientists read ‘percent’ and give absolute numbers 😉
Right now, the RA is enjoying his well-deserved holiday. He’ll be back in four weeks’ time, and we hope to have a data set ready for distribution by June.
On Monday, the Political Science Peer-Review Survey had 506 respondents. Between Tuesday and Friday, we sent out 1,100 new invitations. Five days and many contacts with helpful colleagues later the number stands at 626. Feel free to join them.
If you edit, review or author manuscripts for political science journals, the peer-review process is at the centre of your professional life. Unfortunately, for most of us the process is largely a black box. While everyone has heard (or lived through) tales from the trenches, there is very little hard evidence on how the process actually works. This is why a number of colleagues and I started the peer-review survey project that aims at collecting information on the experience of authors, reviewers and editors of political science journals.
If you are an active political scientist, this survey is for you: we need your expertise, and your input is greatly appreciated. Filling in the form is fun and will typically take less than ten minutes of your time. It is also a great way to release some steam 🙂
Ready? Then proceed to the Political Science Peer-Review Survey.
We also put some (very) preliminary results of the political science peer-review survey online and will release further findings and eventually the data set in the future.
If you think this is worthwhile (and who wouldn’t?), please spread the word. To make this easier, we have created short URLs for the survey (http://tinyurl.com/peer-review-survey) and the results (http://tinyurl.com/peer-review-results) that you can forward to your colleagues. Thanks again for your support. It is greatly appreciated.