Wednesday 7 January 2015

Peer reviewers: Does the study have the potential to advance our understanding of this area of practice?

I think practising dentistry in an evidence-based way is tough. There's a lot we have to do about which there's barely any research evidence. Then, when we do find research, it's often so poor we wish we hadn't spent a few more precious minutes of our lives searching for and then reading it.

Don't you wish editors were less inclined to publish studies that add nothing to our understanding of a clinical problem?

Shortly before Christmas I was sent a request from an open access publisher to peer review an article. Peer reviewing is a necessary part of an academic's life and I take the responsibility seriously. There are upsides to peer reviewing, like having privileged access to some really interesting articles and being able to contribute to improving them before publication. And, of course, the theory is that we all help to keep out the stuff that's not going to help anyone very much.

But peer reviewing an article and providing thoughtful, constructive feedback takes time, so it's not something I want to take on unless it seems worthwhile. What do I mean by worthwhile?
I consider a study worthwhile if it has the potential to contribute to the evidence base in the area it relates to. If the study has no potential, I don't see why I and other colleagues should spend precious time reviewing it. We should reject such studies outright and move on to those that do have the potential to contribute in some way to our understanding of a given area.

So what, you might ask, does a study have to do to have that potential?

For some areas of practice there is scant research evidence. Let's say nobody has even postulated the idea that a particular way of managing a problem could improve patient outcomes. In that situation, a case series showing a benefit from the approach in a handful of patients helps build the case for a prospective comparative study - even a pilot non-randomised one - which in turn could build the case for a fully randomised controlled trial. After all, trials take time and money, and we need at least some research to suggest that, when the trial is finished, there is a plausible chance the experimental intervention will bring greater benefit to patients than the control.

But where there are already randomised controlled trials that suggest a benefit - or not - a meaningful study has to at least build on these. It has to improve on the quality of the existing research and add to the size of the meta-sample for those conducting meta-analyses. We don't want more studies of the same quality if that quality has been low up until now. And we certainly don't want a study that doesn't even reach the level of what has gone before: we don't want non-randomised trials at greater risk of bias.

The article I was asked to review was a one-year follow-up study of just 69 patients who had had their class IV dental cavities restored using the atraumatic restorative treatment (ART) technique. This isn't a new area of research, and this wasn't a randomised controlled trial that would help build the evidence base, but one using historical controls.
So I responded to the request from the editor by recommending that the article not be considered for publication, having read no more than the abstract. We don't need more non-randomised controlled trials clogging up the medical databases. We already have to search through stacks and stacks of low-quality studies to find good evidence. Why add another one?

I have no idea whether the editor of that particular journal will heed my advice and return the manuscript to the authors, suggesting that they wait until they have done a well-designed randomised controlled trial before resubmitting. I certainly hope that another peer reviewer hasn't been coaxed into spending valuable time reviewing a study that will add nothing to what we already know. And even if the editor does reject it, there's no saying whether another editor would do the same.
My request is that editors, and all those who peer review on their behalf, ask themselves a filtering question before spending precious time reviewing an article: does this study have the potential to advance our understanding of this area of practice?

If a few more of us start rejecting from the outset, then perhaps we'll have a little less to read through that adds nothing to our understanding of how best to manage a particular clinical problem. Long live rejection.