Sunday, 8 January 2017

Narrative and knowledge sharing

As a consequence of my research at the University of Oxford, I have become intrigued by the role that stories play in spreading knowledge among dentists. 

We spend a lot of time discussing cases, reformulating ideas of how best to diagnose and manage our patients' problems and recounting past events to ourselves and others. Storytelling is endemic and essential to our development as professionals.
Because of this, my hapless current MSc student is going to use a narrative (or story) I have prepared with a general dental practitioner (GDP) colleague in interviews with other GDPs. The narrative tells the GDP's story of starting to do blood tests on patients with periodontal disease.

Along with other ways of communicating research evidence (e.g. journals, blogs, podcasts), she'll explore how the GDPs perceive the narrative. Is it more helpful / interesting / communicative when the evidence has been personalised by an individual? Does it help that they explain some of the uncertainties they had in making this change? And how they actually went about putting the change into practice?

My long-term idea is to collect many authentic stories of dentists taking research evidence and putting it into practice. So I am keen to engage with other dentists who have taken something that the research suggests would be good to do and tried to put it into practice, even if it wasn't successful.

The likelihood is that even when people do put research into practice, it gets adapted in ways that the researchers probably never envisaged. But this adaptation is what will make it work in a particular context.

My contact details are available at the Queen Mary website. Please do get in touch if you are a GDP anywhere in the world with a story to tell.

Wednesday, 7 January 2015

Peer reviewers: Does the study have the potential to advance our understanding of this area of practice?

I think practising dentistry in an evidence-based way is tough. There's a lot we have to do about which there's barely any research evidence. Then, when we do find research, it's often so poor that we wish we hadn't spent a few more precious minutes of our lives searching for and then reading it.

Don't you wish editors were less inclined to publish studies that add nothing to our understanding of a clinical problem?

Shortly before Christmas I was sent a request from an open access publisher to peer review an article. Peer reviewing is a necessary part of an academic's life and I take the responsibility seriously. There are upsides to peer reviewing, like having privileged access to some really interesting articles and being able to contribute to improving them before publication. And, of course, the theory is that we all help to keep out the stuff that's not going to help anyone very much.

But peer reviewing an article and giving thoughtful and constructive feedback takes time, so it's not something I want to begin doing unless it seems worthwhile. What do I mean by worthwhile?
I consider a study worthwhile if it has the potential to contribute to the evidence base in the area it relates to. If a study has no such potential, I don't see why I and other colleagues should spend precious time reviewing it. We should reject it outright and move on to studies that do have the potential to contribute in some way to our understanding of a given area.

So what, you might ask, does a study have to do to have that potential?

For some areas of practice there is scant research evidence. Let's say nobody has even postulated the idea that a particular way of managing a problem could help improve patient outcomes. In such a situation a case series that shows a benefit from the approach in a handful of patients helps build the case for a prospective comparative study, even a pilot non-randomised one, which in turn could build the case for a fully randomised controlled trial. After all, trials take time and money, and we need at least some research to suggest that, when the trial is finished, there is a plausible chance that the experimental intervention will bring greater benefit to patients than the control.

But where there are already randomised controlled trials that suggest a benefit, or a lack of one, anything meaningful has to at least build on these. It has to improve on the quality of the existing research and add to the pooled sample available to those conducting meta-analyses. We don't want more studies of the same quality if that quality has been low up to now. And we certainly don't want a study that doesn't even come up to the level of what has gone before. We don't want non-randomised trials at greater risk of bias.

The article I was asked to review was a one-year follow-up of just 69 patients whose class IV dental cavities had been restored using the atraumatic restorative treatment (ART) technique. This isn't a new area of research, and this wasn't a randomised controlled trial that would help build the evidence base, but one using historical controls.
So I responded to the request from the editor by recommending that the article not be considered for publication, before I had read more than the abstract. We don't need more non-randomised controlled trials clogging up the medical databases. We already have to search through stacks and stacks of low-quality studies to find good evidence. Why add another one?

I have no idea whether the editor of that particular journal will heed my advice and return the manuscript to the authors suggesting that they wait until they have done a well designed randomised controlled trial before resubmitting. I certainly hope that another peer reviewer hasn't been coaxed into spending valuable time reviewing a study that will add nothing over what we already know. Even if the editor does reject it there's no saying whether another editor would do the same.
My request is that editors, and all those involved in peer reviewing on behalf of editors, ask themselves a filtering question before spending precious time reviewing an article: does this study have the potential to advance our understanding of this area of practice?

If a few more of us start rejecting from the outset then perhaps we'll have a little less to read through that adds nothing to our understanding of how best to manage a particular clinical problem. Long live rejection.

Tuesday, 30 December 2014

Communicating oral cancer risk due to alcohol

A little while ago I tweeted about a systematic review on alcohol and oral cancer risk that The Dental Elf had flagged up.
But how can we communicate risk in a balanced, non-alarmist, but hopefully behaviour-changing way?

In keeping with my previous blog about shared decision-making, patients are entitled to make choices about how they live their lives. In order to do that they may want to have access to some numbers that help them to take a proportionate approach.

In the review, alcohol intake was classified as follows: light drinking was ≤12.5 g of alcohol per day, moderate drinking was over 12.5 g and up to 50 g per day, and heavy drinking was more than 50 g per day.
1 unit of alcohol is the equivalent of 8 g, or 10 ml, of pure alcohol.


So, roughly (a quick check of the conversion is sketched after the list):

  • light intake was up to about 1.5 units per day
  • moderate was up to about 6 units per day
  • heavy was over 6 units per day
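
For anyone who wants to check the conversion, here is a minimal sketch in Python. It assumes the UK definition of 1 unit as 8 g (10 ml) of pure alcohol, as above, and uses the daily thresholds from the review:

```python
# Quick check: convert the review's thresholds (grams of pure alcohol
# per day) into UK units, assuming 1 unit = 8 g (10 ml) of pure alcohol.
GRAMS_PER_UNIT = 8

thresholds_g_per_day = {
    "light (upper bound)": 12.5,
    "moderate (upper bound)": 50,
}

for label, grams in thresholds_g_per_day.items():
    units = grams / GRAMS_PER_UNIT
    print(f"{label}: {grams} g/day = {units} units/day")

# Prints:
# light (upper bound): 12.5 g/day = 1.5625 units/day
# moderate (upper bound): 50 g/day = 6.25 units/day
```

Hence the rough figures of 1.5 and 6 units per day above.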

The Dental Elf drew attention to the different types of studies included in the review. When only cohort studies, which are at lower risk of bias, were included in the meta-analysis, the risk of developing oral cancer was:
  • no greater for the light drinkers, 
  • about one quarter higher for the moderate drinkers and 
  • about 3 times greater for the heavy drinkers. 

The Elf's table shows that if case-control studies alone are considered the risks appear much greater. As there is a higher risk of bias in these studies, I am inclined to concentrate on the cohort studies alone.


As those of you who are familiar with the work of Gerd Gigerenzer, among others, will know, relative risks are not great at communicating risk in a way that allows a proportionate response from a patient, let alone a clinician.

To say that there is a three-times greater risk of oral cancer if you drink more than 6 units of alcohol a day may sound like a lot. But if the lifetime risk of getting oral cancer were, say, 1 in 100,000, then having a risk of 3 in 100,000 may not seem terribly significant (assuming there were no other health or social risks to drinking that much).

What we really need, in order to better understand the implications of the increased risk, is the absolute risk of getting oral cancer.

The closest I could come to this is an overall lifetime cancer risk calculated by the Statistical Information Team at Cancer Research UK. They reckon that 1 in 84 people will develop oral cancer. The proportion caused by smoking is estimated to be about a half, with about a third due to alcohol (stats from Cancer Research UK). So this number will not be a true reflection of the risk for a non-smoker and non-drinker, which would be expected to be much lower. Can we make use of it, though?

My view is that it is not the detailed number that matters when communicating the risk but an indication to a patient of the scale of the impact of their behaviour. So I would recommend making an assumption based on the 1 in 84 figure.

Let's assume that a third of oral cancers are caused by alcohol, as Cancer Research UK suggests, and that half are caused by smoking. As a few more are caused by smoking, let's assume the average of 1 in 84 is a little lower for drinkers, say 1 in 100. Let's then assume that, as about a sixth of the cancers are in non-drinkers and non-smokers, the lifetime risk in this group is about 1 in 200.

Now we can try to give a sense of the impact of drinking more than 6 units of alcohol by saying:
In non-drinkers we would expect approximately 1 in 200 to get oral cancer. For those who drink more than 6 units per day the number would be 3 in 200.
Now, as we would like to communicate the 25% increased risk for the moderate drinkers, we need to multiply the denominator by 4, so that we can say:
In non-drinkers we would expect approximately 4 in 800 to get oral cancer. For those who drink between 1.5 and 6 units a day we'd expect 5 in 800 to get oral cancer. For those who drink more than 6 units a day we'd expect 12 in 800 to get oral cancer.
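
For anyone who wants to reproduce those figures, here is a minimal sketch of the arithmetic in Python. It assumes, as above, a lifetime oral cancer risk of roughly 1 in 200 for non-drinkers, a 25% relative increase for moderate drinkers and a three-fold increase for heavy drinkers (the cohort-study estimates), and simply scales everything up to a common denominator of 800:

```python
# Sketch of the "x in 800" figures: scale an assumed baseline lifetime
# risk (1 in 200 for non-drinkers) by the relative risks from the
# cohort studies, using a denominator large enough to avoid fractions.
from fractions import Fraction

baseline = Fraction(1, 200)  # assumed lifetime oral cancer risk, non-drinkers

relative_risks = {
    "non-drinkers": Fraction(1),           # reference group
    "moderate drinkers": Fraction(5, 4),   # about 25% higher
    "heavy drinkers": Fraction(3),         # about 3 times higher
}

# 200 x 4 = 800 makes every group a whole number of people
# (a 25% increase on 4 is 5).
denominator = 800

for group, rr in relative_risks.items():
    cases = baseline * rr * denominator
    print(f"{group}: about {cases} in {denominator}")

# Prints:
# non-drinkers: about 4 in 800
# moderate drinkers: about 5 in 800
# heavy drinkers: about 12 in 800
```

These are, of course, illustrative numbers built on the assumptions above rather than precise estimates.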
I hope that this will help communicate oral cancer risk in a balanced way.

Tuesday, 23 December 2014

Health demotion interventions

Oral health promotion 

I am in the process of writing a piece about how to engage general dental practitioners in oral health promotion, drawing on theories and empirical research about how to change professionals' practice. The Ottawa Charter defined health promotion as "the process of enabling people to increase control over, and to improve, their health". The scope of those responsible for health promotion is much broader than just the healthcare system and includes governments, social and educational sectors as well as non-governmental organisations.

However, as a clinician with an interest in promoting evidence-based practice that involves shared decision-making, I look at the "enabling" and "control" bits with fondness. In my previous blog I discussed the use of decision aids and option grids to try and help convey the risks of good and bad outcomes to patients in ways they are more likely to understand. One of the key findings from the Cochrane review on decision aids was that patients were less likely to opt for invasive procedures when they used one.


Oral health demotion  

As I was thinking about all the health-promoting things we would like to be able to do (and be effective at: smoking cessation, sugar reduction, oral hygiene improvement) and how we identify the barriers to doing these things, I had in the back of my mind the problem of things we do that not only don't promote health but actually increase the risk of harm. I thought an apt term for this would be health demotion. The sorts of things I am thinking about are:
  • antibiotic prescribing when there is known to be no benefit (e.g. irreversible pulpitis), which leads to short-term side effects and long-term resistance that potentially demotes our long-term health
  • crowns or veneers provided for cosmetic reasons only on vital teeth, which increase the risk of demoting a healthy tooth to a carious or, worse, non-vital tooth
As with health promotion, health demotion will not occur in every case, but the risk of it increases. Of course, particularly with the cosmetic treatments, patients have a right to choose a treatment they think may be beneficial. But I wonder whether, if they were given the risks of harm as part of a decision aid, they might choose the less health-demoting route and, as a consequence, promote their chance of retaining healthy teeth for a lifetime. If we don't enable patients by communicating evidence in impartial ways, they won't be able to take control over their oral health and will be at greater risk of health demotion.

Thursday, 4 December 2014

Decision aids and option grids in undergraduate teaching

Communicating evidence to patients

There are always obstacles to using best research evidence with our patients to help them make decisions that are right for them. One of these is the actual conversation and decision-making process we engage in with patients. Assuming we have identified research to help quantify the relative benefits and harms of different treatments how do we communicate these? Given that dentists themselves often struggle to use the data they encounter, what hope is there for our patients?

Integrate evidence into practice

My driving force in the programme of evidence-based dentistry that I have developed and sought to integrate into the five-year undergraduate dental programme at Queen Mary University of London is that evidence has to be taken beyond critical appraisal and incorporated into the clinical decision-making process. The third year, when students find themselves on clinic and engaging with patients in discussions about treatment options, therefore seemed like the ideal time in our new curriculum for them to move their learning about EBD from years one and two onto the clinic.

Shared decision-making

The concept of evidence-based practice, for me, is one that integrates our own clinical experience, the research and the values of our patients. Decisions are made within a particular context that either facilitates or prevents the use of best evidence (limited resources on the part of patients or the healthcare system will always limit what is actually available). Shared decision-making is an approach to care that involves communicating the probabilities of positive and negative outcomes associated with different management options, including doing nothing (Ref). So how do we communicate the risks and benefits of different treatment options in a relatively objective way?

Decision aids and option grids

There is now quite a large body of research looking at the use of decision aids (Ref) that suggests they can improve several aspects of the decision-making process for patients. From this recent Cochrane review:
"There is high-quality evidence that decision aids compared to usual care improve people's knowledge regarding options, and reduce their decisional conflict related to feeling uninformed and unclear about their personal values. 
There is moderate-quality evidence that decision aids compared to usual care stimulate people to take a more active role in decision making, and improve accurate risk perceptions when probabilities are included in decision aids, compared to not being included.
There is low-quality evidence that decision aids improve congruence between the chosen option and the patient's values."

Engaging undergraduates with decision aids

So we got our third-year undergraduates to create their own decision aids relating to various clinical scenarios using the best available research they could find. Sometimes this meant there was systematic-review-level evidence to help; for other scenarios it was at the level of primary studies. We then had them role-play using the decision aids with "patients" (this year it was fellow dental students, but we plan to employ actors next year). An example of an option grid (see a brief video describing these below) is here:



The process of using these aids in their discussions with the patients seems to have really caught on. The ease of use, and the confidence it gave them in being able to demonstrate the evidence, were seen as very positive.

In one seminar we had the students not give the decision aids they had created to their patients beforehand; in the others they did. Having the "patients" review the aids beforehand created a very different dynamic, led much more by the patients' concerns: they picked up the issues they felt were most important to them, and the dental student could then explore these in greater detail.

There is a really nice video showing how option grids (very similar to some decision aids and what most of the students created) can be used here:



What is great is that the day after the seminar students were using their option grids with patients they saw on clinic. Now that's getting evidence into practice!

Tuesday, 18 March 2014

UK EBD Teachers' Group inaugural meeting

UK EBD Teachers' Group

Seven representatives of UK dental schools who are responsible for the delivery of Evidence-Based Dentistry learning met online for the first time yesterday.

Nicola Innes from Dundee and I initiated the formation of the group out of a recognition that there was probably good practice, learning materials and other resources that could be shared among us and others trying to bring EBD into undergraduate curricula.

The hangout, which included representatives from Liverpool, Manchester, Cardiff, Glasgow, Dundee, and Queen Mary U. London, allowed us to briefly introduce ourselves and to begin sharing common issues about delivering the EBD curriculum. These included: 
  • how much EBD to teach and when
  • how to facilitate or encourage the use of evidence by clinical staff with students
  • sharing online learning resources
Colleagues from Belfast, King's College London, Peninsula, and Sheffield were unable to join the hangout on this occasion but another hangout is planned soon.

If there are any colleagues responsible for delivering EBD that Nicola and I have not contacted please do get in touch. My contact details are available here.

Sunday, 23 February 2014

Educational prescription Prezi

Time moves on and finally our new cohort of undergraduate dental students is preparing for clinical practice. I have thoroughly pummelled them with evidence-based thinking over the past two years and hope that we can carry it on to the clinic, where it belongs.


A while ago I blogged about the educational prescription. Now we get to see how it works with students. To prepare them, in my usual "flipped" teaching style, I have made a brief Prezi with voice-over and thought I'd share it in the interests of furthering the translation of research into practice.

You can see it here.