Is the peer review process all it’s cracked up to be?

Recently I have been going through the laborious process of responding to peer reviewer comments on papers developed as part of my PhD. Earlier this year I was also asked to provide peer review feedback on a number of other papers, and it's had me thinking about the peer review process…

I’m fully in support of the peer review process. I’ve often received far more useful and comprehensive feedback from my peer reviewers than I have from my supervisors, who aren’t as familiar with my subject matter. These comments have seen me try out different statistical procedures for analysing my survey data, reflect more on the literature in my area and improve the presentation of my papers.

However, I’m not convinced that the peer review process does as much as the scientific literature suggests it does. There is a perception that peer review is sufficiently rigorous to prevent substandard science from being performed, but this depends on a number of factors aligning. Firstly, it depends on scientists accurately describing their methods and analysis. Secondly, it depends on peer reviewers being familiar enough with those methods to judge whether they are appropriate. Thirdly, it depends on the science actually being performed to a standard that reflects the stated methods and analysis. Finally, there is simply no real opportunity to determine whether the methods chosen are the most appropriate ones or merely the ones that produce a statistically significant ‘p’ value – i.e. it’s often impossible to tell whether ‘p hacking’ has been undertaken, or what the implications for the research are if it has.
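To make the ‘p hacking’ point concrete, here is a toy sketch (entirely made up, not drawn from any paper I’ve written or reviewed): if you test enough unrelated outcomes, one of them will often cross the p < 0.05 line by chance, and the manuscript only needs to report that one. A reviewer looking at a single clean t-test has no way of knowing the other nineteen tests ever existed.

```python
# Toy illustration of p-hacking: none of these "outcomes" have any real effect,
# but testing enough of them will often turn up p < 0.05 somewhere by chance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(size=(50, 20))   # 50 respondents, 20 unrelated outcome measures
group_b = rng.normal(size=(50, 20))   # drawn from the same distribution: no true difference

for outcome in range(20):
    t, p = stats.ttest_ind(group_a[:, outcome], group_b[:, outcome])
    if p < 0.05:
        print(f"outcome {outcome}: p = {p:.3f}  <- the only test that gets written up")
```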

In my own experience, some of my peer reviewers have provided excellent feedback on my methods. But in most cases the feedback I receive is about the presentation of my results. And maybe this is OK. In the past I’ve been happy to comment only on presentation where I have been confident in the papers I reviewed, so perhaps comments restricted to presentation indicate a level of satisfaction with the methods and analysis.

However, I understand that peer reviewing for publications is increasingly treated as another employment metric, one that can be used to support annual report targets and promotion applications. Using peer review statistics as an employment metric would push academics into reviewing as many papers as they can get their hands on, including papers that are outside their research area or that they don’t have time to consider fully. I have been asked to peer review a number of research articles that are completely unrelated to my field of expertise – from complex economic models to a microbiology methods paper. I duly return all of these to the editor as I’m unable to provide useful advice – but how many other people accept the opportunity to peer review, even if they shouldn’t? From the growing number of people who say they receive advice on presentation alone, rather than any useful advice on theory, methods or analysis, I would say that instances of inappropriate peer review are on the rise.

On the other hand, I know all the dirty little truths in my research that peer review could never uncover. Luckily for me my research was relatively straightforward and there aren’t many of these. In fact, the only one I can think of is that I worked through three similar datasets, all indicating roughly the same thing, before deciding which of the three gave me the most robust findings to support my research. But I do worry about the replicability of my research – not just whether someone else would come to the same conclusions I have, but whether I could reproduce my own statistical analysis if push came to shove. Between running the statistics on mock data, running them again on real data, and then changing the analysis in response to peer review comments, I’m just not 100% sure of what I did. I could definitely reverse engineer my methods from my notes, but is that good enough? I don’t think so.
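One habit that would answer my own “could I reproduce it?” worry is keeping the whole analysis in a single script that re-runs end to end from the raw data, instead of living in notes. A minimal sketch of what I mean – the file name, column names and the particular test are placeholders, not my actual data or methods:

```python
# analysis.py - the whole pipeline in one re-runnable place.
# File name, columns and the test are illustrative placeholders only.
import pandas as pd
from scipy import stats

def run_analysis(path="survey_responses.csv"):
    df = pd.read_csv(path)
    df = df.dropna(subset=["group", "score"])   # record every cleaning step in code
    a = df.loc[df["group"] == "A", "score"]
    b = df.loc[df["group"] == "B", "score"]
    t, p = stats.ttest_ind(a, b)
    print(f"n = {len(df)}, t = {t:.2f}, p = {p:.3f}")
    return t, p

if __name__ == "__main__":
    run_analysis()
```

Rerunning one script after a reviewer asks for a different test is a much smaller gamble than reverse engineering a trail of notes.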

So how valuable are peer reviews? Well, I think they are incredibly useful, but probably not the vital step in ensuring confidence in research outcomes that many would believe them to be. What do you think?

2 comments

  1. Nicely said, Gen. I’m also concerned by the absence of double-blind peer review. Unfortunately academia is inherently competitive; I think double-blind reviews would take some of the ego out of it (although of course in small fields like mine (and yours?) it becomes pretty clear who the authors are).

    1. I was shocked to find that not all of the journals I review for use double-blind review processes! As someone who doesn’t know everyone in my field (I think my field is much bigger than yours), this has led me to discover some interesting academics, but I think that should be happening through conferences and academic social media, rather than through a single-blind (or completely open) review process.
