Monday, 19 August 2013

Qualitative enhancement of Quantitative Systematic Reviews

Some questions need qualitative methods to answer them - even in EBM


In my metamorphosis from EBM quantitative ideologue to a more nuanced appreciator of mixed research methods, I am learning to re-interpret the value of much of the evidence-based practice I have spent the last several years learning and trying to practise. As many have identified before me (e.g. Black 1994, Popay & Williams 1998), the quantitative research methods upon which clinical and population epidemiology are built can tell only part of the story we need to improve patient care. Here is an extract from the Popay and Williams (1998) paper:
"... depending on the question, qualitative research may be the only appropriate method to be used in finding a valid and useful answer. It is congruent with the philosophy of EBM."
That quote is from David Sackett, one of the godfathers of EBM. Qualitative research itself may be defined as "...research that helps us to understand the nature, strengths, and interactions of variables." (Black 1994)

Two different approaches to using qualitative research are described: in a mixed-methods way, where qualitative and quantitative research are used alongside each other (or one after the other) in a single research programme; or on its own, to answer a question that cannot be answered by quantitative methods. In this post I will discuss the first - what some authors call using qualitative research as an enhancement.

Flops and failure: what happened and why?


Say that in a trial to change clinicians' behaviour we teach them all the 5-stage approach to using evidence (ask, search, appraise, apply, evaluate). Half the clinicians are randomly assigned to be helped to do this, in addition, by a trained facilitator. The other half are taught and then just have to get on with it alone. Six months later we see whether practice has changed around a set of previously determined clinical outcomes that the clinicians hoped to make more research-informed. We see no difference in outcome. Facilitators are a flop. What a waste of time, money and research passion. Months of learning, planning, dreaming and hoping come to nothing.

Unless, alongside the trial, the researchers had also done some qualitative research - perhaps observing how the facilitators worked with clinicians, and how the clinicians engaged (or didn't) with each other, the research and their patients; or perhaps interviewing the clinicians to find out what they thought of the facilitation. There may have been a load of information that the researchers never envisaged could affect the success or otherwise of their perfectly designed randomised controlled trial. Perhaps the clinicians didn't like the facilitators - found them too abrupt or aloof or "clever". Perhaps the facilitators "didn't get" how to put the research into practice in that setting. Perhaps there were colleagues who gave off negative vibes about all this "research stuff" and put them off. If the study were repeated, this knowledge might well make the new trial more successful; or the trial could be tweaked to take it into consideration as it proceeds, preventing wasted time and resources.

Enhancing systematic reviews with qualitative summaries


So today I was thinking about systematic reviews, primarily ones that come up with conclusions along the lines of 'no / little difference between the interventions'. To me this always seems like a disappointing conclusion to what was likely a long piece of work for the authors. The problem, it seems to me, is that because we pay attention only to the quantitative studies in a review, we cannot enhance the reader's comprehension of the review with qualitative data. What if there are qualitative studies around the systematic review topic that could indicate why a given intervention might not have worked?

But even for reviews that show a positive effect of, say, a particular behavioural intervention, might it not be more useful to us as clinicians if alongside this there were a summary of qualitative studies looking into what made the intervention successful, why some patients accepted it and others didn't, or what patients and clinicians felt about using it?

Having done one systematic review, and currently working on another, I will grant that, as someone steeped in quantitative methodology, incorporating qualitative data as well looks like hard work. But if those with quantitative and qualitative expertise were instead to work together, might we not enrich the evidence summaries that clinicians and patients consume?

Fortunately, it seems that the Cochrane Qualitative and Implementation Methods Group has begun work on helping reviewers do just this, and the Centre for Reviews and Dissemination devote a whole chapter to it in their guidance on conducting a systematic review (see chapter 6), where they write:
"This chapter focuses on the identification, assessment and synthesis of qualitative studies to help explain, interpret and implement the findings from effectiveness reviews."
I haven't yet found a review that does this, and I don't know how I'd go about identifying one efficiently (perhaps a filter for combined qualitative/quantitative reviews might be created in PubMed?), but I look forward to someone pointing me to one soon, and to more reviewers looking to incorporate the qualitative with the quantitative.
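To make the PubMed idea a little more concrete: as a very rough, untested starting point (the terms and field tags below are purely my own guess and would need proper refinement and validation before anyone relied on them), such a filter might simply combine a systematic review limit with some qualitative and mixed-methods terms, along these lines:

("systematic review"[Title/Abstract] OR "meta-analysis"[Publication Type])
AND ("qualitative research"[MeSH Terms] OR qualitative[Title/Abstract] OR "mixed methods"[Title/Abstract])

A properly developed and validated filter from an information specialist would of course be preferable to anything cobbled together here, but even a crude string like this might surface the handful of reviews that already attempt a qualitative synthesis alongside the quantitative one.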

References


Black N. Why we need qualitative research. Journal of Epidemiology and Community Health. 1994;48:425–6.
Popay J, Williams G. Qualitative research and evidence-based healthcare. Journal of the Royal Society of Medicine. 1998 Jan;91 Suppl 35:32–7. 
