Thursday 19 December 2013

Flip teaching critical appraisal using Prezi

Flipping critical appraisal for dental undergraduates

Flip teaching

I am designing and implementing a new Evidence-Based Dentistry (EBD) curriculum for dental undergraduates at Barts and The London School of Medicine and Dentistry, QMUL, and am using blended learning or "flip teaching".

The idea is to give students learning material and tasks to do in their own time and to come to seminars full of questions and problems that we then work through. I am trying this with critical appraisal and thought I would exploit Prezi's ability to use voice recordings to make a step-by-step presentation that students can follow at their own pace.

Creating a prezi with voice recordings


I have to say that this was time-consuming - it took 5 days of annual leave to put together - and of course I await the students' response. But I hope to be able to use the material with staff as well, as I try to help them feel comfortable and confident with EBD, so the time spent may become relatively insignificant.

The plan is that students watch this presentation, which uses a report of a randomised controlled trial, and then - on our virtual learning environment - there is a paper to critically appraise themselves, ready to present and discuss in the seminar.

Last year with the same cohort of students I used pre-recorded screencasts that lasted no more than 10 minutes, but I like the idea of students being able to go backwards and forwards at their own pace, which is what I have been able to do here.

Given the subjectivity of critical appraisal I wouldn't expect everyone to agree with my appraisal. The intention is to teach the students the mechanism or process of critical appraisal - not to present this particular appraisal as an ideal :-)

You can make the presentation below full screen by clicking the icon in the bottom right hand corner after you have clicked "Start Prezi".


Wednesday 27 November 2013

Knowledge exchange in dental practice


My DPhil research out here at Oxford has taken me into (for me) some fascinating literature about the nature of knowledge and research. Having begun with what I now realise was a naive question - how to help clinicians practice more evidence-based dentistry - I have moved towards trying to understand better how research knowledge does or could fit in alongside (or blend with) the many other sources of knowledge that we all employ in our daily decision-making.

What is knowledge?

But first, what do I mean by knowledge? Here are a few definitions that inform my broad concept of knowledge:
  • Collation of information for a particular purpose, intended to be useful (Sensky 2002)
  • Facts, information, and skills acquired through experience or education (Oxford Dictionaries)
  • Awareness or familiarity gained by experience of a fact or situation (Oxford Dictionaries)
Research is one form of explicit or codified knowledge. The research may be quantitative or qualitative and be assessed as robust or not, and relevant or not. But what Polanyi and Nonaka (and others) suggested is that this explicit knowledge first requires tacit knowledge. We don't consciously sit down and appraise what we know or don't know unless, perhaps, we're sitting an exam and realise we can or can't answer a question. Instead we tacitly sense that we need to find out about something before we can articulate explicitly that we need to know about X.

Nonaka suggested that we externalise our tacit knowledge using metaphors or stories, combine that explicit knowledge with other sources of information (perhaps a colleague's experiences or some research), then we internalise it, making our own sense of the combined information and using it in a way that we may find difficult to describe. 

This is in keeping with other theories - particularly adult learning theories - that recognise we are not empty buckets waiting to be filled with knowledge but individuals with rich experiences who absorb, adapt, accept or reject new knowledge.

Gabbay and Le May found that general medical practitioners in a "good" practice used many sources of knowledge other than research to make patient-centred decisions. Those other sources included colleagues, opinion leaders, their own experience, sales reps and patients. Estabrooks et al. (2005) found that nurses learnt most of their clinical practice knowledge from social interactions and experience, with only a very small component coming from consulting research.

What influences knowledge use?

From personal experience this is not surprising. But if we think that research use is likely to be beneficial to patients then we may need to understand better the existing ways in which knowledge is taken up and exchanged. My reading has now included several traditions outside of the healthcare literature. Here are some examples of what they think influences explicit or research knowledge use:
From healthcare literature:
  • Context
  • Nature of the knowledge
  • Process of knowledge exchange
From management literature:
  • Professional boundaries
  • Organisational learning (including tacit learning)
From economics:
  • Competition for the potential user’s time / attention
From psychology:
  • Individual behaviour and motivation

What to do?

The way in which knowledge is used is probably a lot more chaotic than many in the EBD/EBM world would like to think. This is where I began my DPhil - thinking that I could develop some kind of intervention to help dentists use research more, following the traditional linear approach of ask, search, appraise... But now I realise we don't understand enough about how knowledge is exchanged within the primary dental care world, and how the mechanisms that allow other sorts of knowledge to move about might be used to help research knowledge do the same.

And so I move on to study how and why given contexts and mechanisms influence dentists' uptake of knowledge before trying to help them use research with all their other knowledge sources.

References

Oxford Dictionaries: http://www.oxforddictionaries.com/definition/english/knowledge

Estabrooks, C. A., Rutakumwa, W., O’Leary, K. A., Profetto-McGrath, J., Milner, M., Levers, M. J., & Scott-Findlay, S. (2005). Sources of practice knowledge among nurses. Qualitative Health Research, 15(4), 460-476.

Polanyi, M., & Sen, A. (1983). The tacit dimension (pp. 21-25). Gloucester, MA: Peter Smith.

Sensky, T. (2002). Advances in Psychiatric Treatment, 8(5), 387-395.

Monday 16 September 2013

Presenting Evidence on Clinical Topics

How to do a presentation on EBD and Endodontics

A former student contacted me yesterday to ask if I could help provide some guidance on how to go about doing a presentation on EBD and endodontics. Where should he start, he asked, and how should he go about it?

So I thought I'd blog a response in case anyone else has to do an EBD presentation on some aspect of dental care.

A little bit like doing endo itself, preparation is key to a successful outcome. So it is with a presentation on EBD. We may only have 10 or 20 minutes to present what we have found but probably several hours will be needed to deliver something that is informative and, importantly, backed by the most up-to-date research.

My first question back to my new colleague was how broad he wanted to be in addressing the topic. Simply being asked to "present on EBD and Endo" unsurprisingly caused him to panic a little. The field is huge, so what would he present on?

Questions

So, as with a research topic or any literature search, developing a clear idea of a clinical question to present evidence on would be my first step. Using the PICO structure (see a blog explaining this here and a prezi here) I would think about whether I was interested in a question about:

  • diagnosis (e.g. how sensitive are tests for non-vitality?) 
  • prevention (e.g. how effective is partial caries removal compared to full caries removal in preventing irreversible pulpal damage?)
  • treatment or interventions (e.g. is one stage endo better than two stage?)
  • prognosis (e.g. what is the success of re-treatment over 10 years?)
  • patient or practitioner experience (e.g. how did clinicians get on with using a particular technique for obturation?)
Sometimes it takes a while to decide how broad / narrow you want to be and sometimes only after you have begun to 'scope' the literature do you get a sense of how much research there is likely to be to help answer your question.

At this stage it is also worth thinking about the best type of study or studies to answer the question. For non-complex interventions a systematic review of randomised controlled trials or the trials themselves may be most appropriate. For a prognosis question a cohort study that follows patients with a particular condition over a period of time could be suitable (or indeed one arm of a controlled trial). For a question about experience and values a qualitative study design could be best. The point is that we shouldn't concentrate only on RCTs when looking for evidence as they are not always the only or best way to answer certain questions. You can find some guidance on the best types of studies to answer questions here.
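To make this concrete, here is a minimal sketch in Python - purely my own illustration, not a standard EBD tool - showing one of the example endo questions above broken into its PICO components, together with the study designs that might best answer it. The class, field names and example values are all invented for demonstration.

```python
# A purely illustrative sketch: a clinical question broken into PICO
# components, with the study designs that might best answer it.
# The dataclass and its fields are my own invention, not a standard schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ClinicalQuestion:
    population: str                      # P - who are the patients?
    intervention: str                    # I - what are we considering doing?
    comparison: str                      # C - what is the alternative?
    outcome: str                         # O - which result matters?
    question_type: str                   # e.g. treatment, prognosis, diagnosis
    preferred_designs: List[str] = field(default_factory=list)

endo_question = ClinicalQuestion(
    population="adults with irreversible pulpitis in a permanent molar",
    intervention="single-visit root canal treatment",
    comparison="two-visit root canal treatment",
    outcome="periapical healing at 12 months",
    question_type="treatment",
    preferred_designs=["systematic review of RCTs", "randomised controlled trial"],
)

print(endo_question)
```

Writing the question down in this structured way also gives you the terms you will later combine in a literature search.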

Search

The next stage, then, is to look for the research evidence. There is a growing recognition that we need to get better at recognising and being critical of non-research evidence too - particularly our own experience and the views of our patients - and of combining these in an optimal way (we're still working on it...). 

But we need to search the various medical databases and search engines to find the research evidence first. An efficient way to do this is to look first of all for summaries of evidence, such as guidelines, and systematic reviews. If we can find one that is up to date and relevant to the question we asked then we need not look for primary studies. So I would recommend beginning with the Cochrane Library for reviews or the National Guideline Clearinghouse for guidelines. You might search the EBD Journal website too to see if there are any commentaries on research there or the ADA Center for EBD.

Given the time it takes to complete a review or guideline - often a couple of years - anything that is more than a couple of years old is probably out of date, since the most up-to-date research it includes may by then be 4 years old. So if the review seems old or irrelevant, or there simply isn't one, then we need to look for primary studies. PubMed is an open access medical database that allows this. There are a couple of helpful introduction videos by my colleagues at Oxford here and here.

As you become more familiar with PubMed you can limit the number of results by using filters for systematic reviews or randomised controlled trials. A video explains this here. The advantage is that we can cut the number of articles we have to look through from hundreds or thousands to dozens or fewer.
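If you prefer to script such a search rather than use the website, here is a rough sketch using Biopython's Entrez module. The query terms, the systematic-reviews subset filter and the email address are illustrative assumptions, not a recommended search strategy.

```python
# A rough sketch of a scripted PubMed search via Biopython's Entrez module.
# The query, the filter and the email address are illustrative assumptions.
from Bio import Entrez

Entrez.email = "you@example.com"  # NCBI asks for a contact email address

# PICO-style terms combined with AND, limited to the systematic reviews subset
query = "(root canal therapy) AND (retreatment) AND (systematic[sb])"

handle = Entrez.esearch(db="pubmed", term=query, retmax=20)
result = Entrez.read(handle)
handle.close()

print(f"{result['Count']} records found; first PMIDs: {result['IdList']}")
```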

Of course, you could also ask the excellent library staff at the British Dental Association to do a search for you. As with your own search it's best to have a clear question to give them or they may end up searching for things you're not interested in. This service is free to members of the association. The BDA also houses collections of papers on over 500 topics at their London site and these can be posted out to members at no charge.

Accessing the research you find

One of the biggest problems we face is that much of the research out there sits behind a paywall and few of us are willing to fork out $25 to read a paper that may be irrelevant or of poor quality. I have blogged here about this problem. Again, the BDA can help out, but at a cost of £2.50 per article. Unfortunately, my experience with Athens is that it provides minimal access to relevant journals. This is why up-to-date Cochrane Reviews are so valuable to us as they're free to anyone in the UK and several other countries (see here if you're not sure).

Get critical

Not all research is equal in terms of its validity. If you manage to find a systematic review there should be an indication of the quality of the primary studies included. There are various schemes for this and Cochrane now uses the GRADE criteria, which rate the evidence from very low to high quality. But if you're reading the primary studies yourself, a checklist such as those produced by the CASP organisation is helpful for quickly getting a sense of the methodological quality of a study and its usefulness to you.

My personal view is that any CPD presentation ought to indicate the quality of the evidence being presented. Normally on a course there is a mix of personal experience and research evidence, and I think that we are entitled to know which is being used. Likewise, if we are to do a presentation on the evidence base for different topics around endo then the audience should be given a summary of how strong the evidence is. After all, why go through the cost and time of changing one's practice if the only research suggesting you should is of very low quality? The quality of the evidence should determine whether or not we consider implementing it.

Implementation

One of the areas in EBD that is most complex is how to go about implementing change based on high quality evidence. It is recognised that most of us transform research findings rather than implementing them as reported in the research. I think that in a presentation it would be helpful to discuss what the barriers are to changing practice and how we might go about reducing them. Perhaps we need to compromise on some element of the protocols suggested by the research to make it practical and cost-effective in our practice. Perhaps we need to think about forming a group to keep each other motivated as we seek to change practice, since most of us are very poor at changing what we do on our own.

I would include these issues in any presentation, as EBD is useless unless this final step is achieved.

I won't talk about presentation skills here - there are many people much more gifted in them than me - but I take inspiration from Steve Jobs, who rarely used a script and stuck to simple messages with plenty of graphics to enthuse the Apple-lovers out there.

Happy EBD presenting :)

Wednesday 21 August 2013

Why quantitative studies cannot deliver evidence-based practice alone

Qualitative methodology is pants and has no role in evidence-based practice


It's not uncommon to share a room with a colleague who is repelled by the idea that qualitative research could contribute to improving patient care. There are many more (and I was one) who just don't get where qualitative research fits in and it seems to me that the evidence-based practice (EBP) movement, in some cases deliberately, in others not, has fostered an ideal. And that ideal is quantitative.

For studying the efficacy (how well it worked in the study) and, indeed, the effectiveness (how well it worked in practice) of a drug, the randomised controlled trial - with its quantitative output of numerical data on success or otherwise in treating a given condition - is the ideal, and I would not argue otherwise. For the limited context and the restricted set of patients in which the trial is conducted, if well done, it will allow some estimate of the "truth" of the efficacy of the treatment. At least for the outcomes being measured.

But evidence-based practice is about much more than a risk or odds ratio and p-values or confidence intervals. These numerical - quantitative - outputs are but a tiny element of what I understand as evidence-based practice (EBP).

Rather than look at qualitative studies per se, e.g. "how dentists use or don't use evidence in their practices" (rather than "how many use evidence"), for this blog I just want to draw attention to the way we use qualitative methods to deal with quantitative data in EBP.

Or perhaps it does...


Since the early days of EBP there has been 1) a need to consider the patient's values and aspirations, 2) a need to consider our own experience and expertise, and 3) a requirement to critically appraise the literature we read. Let's not forget, of course, that there has also been a requirement to use the best available research to inform the discussion.

So if I have a study that tells me that putting a stainless steel crown on a carious deciduous tooth rather than filling it with a composite will result in 12% fewer such teeth needing to be extracted I am grateful for this quantitative information on the efficacy of the intervention. I need this to understand what the potential benefit of using it in my patient could be from the point of view of losing a tooth.

Qualitative critical appraisal



However, in order to evaluate the risk of bias - that is, the risk that the result is not the true reduction in tooth loss due to some systematic error in the design of the study - I would critically appraise it. The thing is that there don't seem to be reliable quantitative ways of doing this. We can score whether the two groups were "randomised" or not - perhaps with a 1 for yes and a 0 for no - but very quickly we ask - how were they randomised and what effect does it have if they don't tell us? We might see a table of baseline characteristics and there's a difference in the baseline amount of caries in the average child in each group - but what does that mean for the results? Perhaps the p-value is 0.04 or perhaps it is 0.004 - how do these different confidences in the estimate of truth affect the way we think about the results?

These are not questions that can be answered reliably quantitatively. In a sense we are analysing the text - the report of a study - to try and construct some idea of what it means. Does this explanation mean this is likely to be a reliable study or not? And this, I would argue, is a qualitative process: we are constructing an idea in our head of whether we think the story the report tells is likely to be the truth or not. Someone else could well construct a different opinion that is contrary to ours. How many times have you read in systematic reviews that disagreements were resolved through consensus or by a third reviewer?

Qualitative understanding of patient values and clinical experience and expertise


What about the other two essential elements of evidence-based practice - the patient's values and our experience and expertise? Here again it is hard to see how we can avoid using qualitative methods, and this is where quantitative methods fail.

Contrary to the positivist "truth" from the study, the truth for a patient of what is - for want of a better term - in their interests and meets their values and aspirations could be very different. Perhaps the outcome of the study is not the outcome that interests them. Or perhaps, even if it is, they ascribe a different value to a tooth only lasting 1 year rather than 5.

Likewise, the truth for the clinician about the effectiveness of the treatment may be vastly at odds with the researchers' results as they try to run a small business, manage a clinic, decide which hands-on courses to attend (and which not to), and make sense of their colleagues' opinions about the research, its value, their experience using the treatment...

The questions of why people do things, and what drives them to act or not, are inherently qualitative, and as clinicians trying to practice in an evidence-based way we make decisions of this kind every day.

So I guess my conclusion here is that, as we teach and train colleagues and students to practice EBP, we should not forget the essential role that qualitative methods play in making sense of quantitative data and in helping us use it where it is appropriate. As we move forward we may want to think about how we develop some rigour in this process, as the various tools for critical appraisal have sought to do.

Monday 19 August 2013

Qualitative enhancement of Quantitative Systematic Reviews

Some questions need qualitative methods to answer them - even in EBM


In my metamorphosis from EBM quantitative ideologue to a more nuanced appreciator of mixed research methods, I am learning to re-interpret the value of much of the evidence-based practice I have spent the last several years learning and trying to practice. As many have identified before me (e.g. Black 1994, Popay & Williams 1998), the quantitative research methods upon which clinical and population epidemiology are built can tell only part of the story we need to improve patient care. Here is an extract from the Popay and Williams (1998) paper:
"... depending on the question, qualitative research may be the only appropriate method to be used in finding a valid and useful answer. It is congruent with the philosophy of EBM."
That was from David Sackett, one of the godfathers of EBM, and qualitative research may be defined as "...research that helps us to understand the nature, strengths, and interactions of variables." (Black 1994)

Two different approaches to using qualitative research are described: in a mixed methods way that sees qualitative and quantitative research used alongside each other (or one after the other) in a single research programme; or on its own to answer a question that cannot be answered by quantitative methods. For this blog I will discuss the first - what some authors call using qualitative research as an enhancement.

Flops and failure: what happened and why?


Say that in a trial to change clinicians' behaviour we teach them all the 5-stage approach to using evidence (ask, search, appraise, apply, evaluate). Half the clinicians are randomly assigned, in addition, to be helped to do this by a trained facilitator. The other half are taught and then just have to get on with it alone. Six months later we see whether practice has changed around a set of previously determined clinical outcomes that the clinicians hoped to make more research-informed. We see no difference in outcome. Facilitators are a flop. What a waste of time, money and research passion. Months of learning, planning, dreaming and hoping come to nothing.

Unless, alongside the trial, the researchers had also done some qualitative research - perhaps observed how the facilitators worked with clinicians and how the clinicians engaged (or didn't) with each other, the research and their patients; or perhaps they could have interviewed the clinicians to find out what they thought of the facilitation. There may have been a load of information that the researchers never envisaged could affect the success or otherwise of their perfectly-designed randomised controlled trial. Perhaps the clinicians didn't like the facilitators - found them too abrupt or aloof or "clever". Perhaps the facilitators "didn't get" how to put the research into practice in their setting. Perhaps there were colleagues who gave negative vibes about all this "research stuff" and put them off. If the study were repeated this knowledge may well mean that the new trial is more successful, or perhaps the original trial could be tweaked to take it into consideration as it proceeds, thus preventing wasted time and resources.

Enhancing systematic reviews with qualitative summaries


So today I was thinking about systematic reviews, primarily ones that come up with conclusions that say something like there was 'no / little difference between the interventions'. To me this always seems like a disappointing conclusion to what was likely to have been a long piece of work for the authors. The problem, it seems to me, is that because we only pay attention to the quantitative studies in a review we can't enhance the reader's comprehension of the review using qualitative data. What if there are qualitative studies around the systematic review topic that could indicate why a given intervention might not have worked?

But even for reviews that show a positive effect of, say, a particular behavioural intervention, might it not be more useful to us as clinicians if alongside this were a summary of qualitative studies that looked into what made the intervention successful, why some patients accepted it and others didn't, or what patients and clinicians felt about using it?

Having done one systematic review, and working currently on another, I will grant that as someone steeped in quantitative methodology incorporating qualitative data too would seem to be hard work. But if instead those with quantitative and qualitative expertise were to work together might we not enrich the evidence summaries that clinicians and patients consume?

Fortunately, it seems that the Cochrane Qualitative and Implementation Methods Group has begun work on helping reviewers do just this, and the Centre for Reviews and Dissemination includes a whole chapter on this in their guidance on conducting a systematic review (see chapter 6), where they write:
"This chapter focuses on the identification, assessment and synthesis of qualitative studies to help explain, interpret and implement the findings from effectiveness reviews."
I haven't found a review that does this yet and don't know how I'd go about identifying one in an efficient way (perhaps a filter for combined Qualitative/Quantitative reviews might be created in PubMed?) but I look forward to someone pointing me to one soon and to more reviewers looking to incorporate the qualitative with the quantitative.

References


Black N. Why we need qualitative research. Journal of Epidemiology and Community Health. 1994;48:425–6.
Popay J, Williams G. Qualitative research and evidence-based healthcare. Journal of the Royal Society of Medicine. 1998 Jan;91 Suppl 35:32–7. 

Thursday 27 June 2013

Time to stop wasting data: The Clinical Practice Research Datalink

Let's stop wasting data

I was speaking at an event last night for general dental practitioners who are involved with, or interested in joining, studies in dental practice. One of my fellow speakers was John Parkinson, a very engaging presenter and director of the Clinical Practice Research Datalink (CPRD), about which I will say a little more later.

Diagnostic codes

A little while ago I blogged about how useful it would be if dental electronic health records were to incorporate diagnostic codes to allow tracking of patient outcomes over time. The potential to observe the success or otherwise of patient management could be huge. Not only could we observe outcomes to routine treatment but potentially also conduct trials in practice using the codes to track what problems patients arrive with, any interventions we randomise them to, and the outcome.

So, for example, one of the audience last night asked: in a time-restricted environment, is it better to provide oral hygiene advice or to do a quick scale and polish, presumably for a patient with gingivitis rather than destructive periodontal disease? Well, using the SNOMED codes that I blogged about, at the first encounter a dentist would record the following from a drop-down box in the patient's record:

Code 66383009 Gingivitis (disorder)

Then the clinician records how it was managed. This could be as part of a randomised controlled trial - half the patients are randomly allocated to scale and polish, half to oral hygiene instruction:

Code 234696006 Scale and polish teeth (procedure)

or: 

Code 58707002 Oral hygiene education (procedure)

Then, at the next visit the dentist, hygienist or therapist records whether gingivitis is present using the same code:

Code 66383009 Gingivitis (disorder)

If these data - which we probably record for thousands of patients every day anyway - were collected in this organised way, then there would be a massive amount of potentially very useful information that we could use to answer the question asked by our colleague last night.

Clinical Practice Research Datalink

This is where the Clinical Practice Research Datalink comes in. John Parkinson already churns the numbers for general medical practices using data from the last 25 years. What they can now do is identify potential participants in research based on their medical conditions and create a pop-up that asks the clinician to ask the patient if they would be happy to take part in a trial. If the patient consents then they are randomised automatically to one treatment or another.

So imagine this as a dentist participating in a trial to find out whether you should give oral hygiene advice or a scale and polish in your short appointment: 
  • A patient attends for a dental examination. You diagnose gingivitis and click the appropriate code (as above).
  • A pop-up screen asks you to ask the patient if they would like to participate in the trial. They consent and you click the appropriate button.
  • The pop-up then tells you that for this patient you should give oral hygiene advice. So that's what you do.
  • Then you see the patient again in 6 weeks / 6 months / 12 months and without even having to think about the trial you just record what you see. If there's gingivitis then you record it. If there isn't you don't.
  • The number crunchers process the data and in a year or two we have a massive data set that allows us with some confidence to say whether one or the other is more effective when done alone.
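As a toy illustration - my own sketch, not how CPRD actually stores or analyses its data - records coded in this way could be tallied in a few lines to compare the two arms. The patient IDs and review outcomes below are invented; the codes are the SNOMED codes quoted above.

```python
# A toy sketch (not CPRD's actual data model): coded visit records for a few
# invented patients, tallied to compare how many in each arm still have the
# gingivitis code recorded at the review visit.
GINGIVITIS = "66383009"          # Gingivitis (disorder)
SCALE_AND_POLISH = "234696006"   # Scale and polish teeth (procedure)
OHI = "58707002"                 # Oral hygiene education (procedure)

# (patient id, allocated procedure code, gingivitis code present at review?)
records = [
    ("p1", SCALE_AND_POLISH, True),
    ("p2", OHI, False),
    ("p3", SCALE_AND_POLISH, False),
    ("p4", OHI, True),
]

def gingivitis_rate(arm_code: str) -> float:
    """Proportion of patients in one arm with gingivitis recorded at review."""
    arm = [r for r in records if r[1] == arm_code]
    return sum(r[2] for r in arm) / len(arm)

print("Scale and polish:", gingivitis_rate(SCALE_AND_POLISH))
print("Oral hygiene education:", gingivitis_rate(OHI))
```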

So why aren't we doing this?

This effortless utilisation of information for the betterment of our patients is only likely to happen if we can get dental electronic health record systems to adopt the coding that enables us to track the problems patients present with, the interventions that we use, and the outcomes some time afterwards. John Parkinson wants CPRD to facilitate this - he just needs access to the data we have on our computers. And whilst this has generally been an NHS programme because GPs are almost all in it, that doesn't mean private colleagues without NHS contracts couldn't also join the network.

The CPRD could also link our records to general health records. So potentially we could research whether, say, giving dietary advice in the dental clinic results in wider benefits to the patient - like reduction in obesity-related diseases, for example. And we could also track the outcomes of treatment completed in, for example, dental hospitals once they have been discharged back into the care of a general practitioner so allowing us to understand better the long term impact of hospital dental care.

As a dental profession I think we really need to join our medical colleagues and engage with this great opportunity and we need to start by persuading the dental systems to play ball. I don't know who the best people are to help with this...but perhaps you do.


Sunday 9 June 2013

Respecting tacit knowledge in knowledge transformation

I used the term knowledge transformation in the title of this blog instead of the common terms knowledge translation or knowledge implementation because, as Morgan Meyer and others have argued, knowledge is rarely used in its original form.

In a previous blog I discussed Gabbay and Le May's work in general medical practices that led them to understand that doctors create their own "mindlines". I have also read more recently about the SECI model proposed by two Japanese researchers in the 1990s, Nonaka and Takeuchi. They were studying the creation of knowledge within commercial organisations and sought to include the tacit knowledge we all gain through doing and observing but which may be hard to explain, with the explicit knowledge shared through discussion and reading.

This latter model reinforces what Gabbay and Le May found: that doctors' knowledge developed not only through the use of explicit information in the form of guidelines but also through the tacit knowledge gained from observing how each other worked and from their own interactions with patients and with people or organisations outside their practice.

The SECI model stands for: socialisation, externalisation, combination and internalisation. [The diagram here is reproduced from the Gabbay and Le May book].

The idea is that we learn tacit knowledge through a process of socialisation. This involves us observing others working. In an organisation where knowledge creation takes place, they argue, this tacit knowledge needs to be made explicit. This is done by individuals using metaphor and analogy to explain their experiential learning. In turn the externalised knowledge is combined with other information sources.

From my point of view, looking at how to help clinicians use research in decision-making, this includes using research but also using other information sources. The combined knowledge might then be expressed as a lecture, a practice protocol or a guideline generated from this process. Finally, we internalise the combined knowledge as we make sense of it and, if we use it, introduce it in some form into our practice.

Much of the knowledge translation world has ignored, it seems to me, not only the other sources of information that go into making a decision but also the tacit knowledge we all have that may never be expressible. If we take account of pre-existing knowledge - and value this as much as the scientific knowledge we gain - we take a constructivist philosophical view. If we take the view that only scientific knowledge gives us true knowledge then we take a positivist view.

I have moved progressively from the positivist to the constructivist over the last few months as I have learnt more about the way we actually function. It seems to me that even if we wanted to perform every activity according to a positivist stance that everything can be explained by some physical or social law, we would never succeed because we inherently value our own knowledge and experience.

Moving forward, my feeling is that if we - those interested in helping useful and appropriate research to be incorporated into clinical decision-making - are to serve any function then we need to have more respect for the complex decision-making and knowledge creation that all clinicians (and human beings generally) engage in daily.

Monday 22 April 2013

Mindlines and Evidence-Based Practice

When I began my DPhil at Oxford I proposed a research idea that would look at the system within which dentists work and develop an intervention to help increase the use of evidence-based practice. I was thinking of the now ubiquitous 5 stages of ask, search, appraise, apply, evaluate.

However useful these skills, though, I realised pretty early on in my reading that helping dentists to do this regularly would be extremely challenging - the barriers to using evidence-based practice have been well documented (e.g. Cabana, M. D. et al. 1999, Zwolsman, S. et al. 2012) and if one thinks at the organisational or systems level, the complexity of helping useful innovations get into daily practice can be immense (Greenhalgh, T. et al. 2004).

The implementation science research field tries to get knowledge into practice, often using psychological and behavioural theory to inform its interventions (e.g. Michie, S. et al. 2005, Grol, R. P. et al. 2007). But something always felt a little as though we as clinicians were being required to do something by a higher authority, and somehow this didn't feel very democratic. To me, someone trying to implement a set of externally-justified guidelines that I had little part in designing felt inappropriate. I realised that this might be framed not so much as an issue of democracy but perhaps as an affront to our individual clinical decision-making, which, according to evidence-based practice principles, requires that the best evidence be used alongside our own clinical expertise and the values, wishes and aspirations of our patients (Sackett, D. L. et al. 1996).

I then came across John Gabbay and Andree le May's work on what they termed mindlines (Gabbay, J. & le May, A. 2004). They published a report of a study in a couple of general medical practices in the UK where they observed how well-reputed clinicians made decisions. Very few of them referred to evidence-based guidelines in consultations with patients; rather, the researchers observed, they built up their own flexible and constantly-moderated internal guidelines or "mindlines". These took into account not only the research evidence but the practical issues of applying it, their own experience of it, the views of trusted colleagues, the local services available, time, etc. The observed reality was that decision-making was largely influenced by tacit knowledge rather than the explicit, codified form delivered in guidelines.

The authors of that study published a book in 2011 (Gabbay, J. & le May, A. E. 2011) that describes the original and related work in detail and deepens the discussion by looking at other research and theories. Fundamentally they conclude that "the assumptions of the EBP movement are misguided" in assuming that if we somehow put research knowledge into the clinical situation it will be transferred into practice. They argue that one has to be aware of the complex social and psychological processes that need to be altered if one is to use more research in practice. They argue that the uptake of knowledge needs to be understood more at the social and organisational level rather than concentrating on the individual use of research evidence.

This book makes sense to me as a clinician and as a researcher. I recommend it as a great read for anyone involved in trying to facilitate the increased use of research in clinical decision-making. If you do read the book don't leave out the last chapter - it contains some excellent pointers on how we might further the research on helping clinicians and patients benefit from using research and non-research knowledge to improve care.

References

CABANA, M. D. et al. 1999. Why don't physicians follow clinical practice guidelines? A framework for improvement. JAMA, 282, 1458-65.
GABBAY, J. & LE MAY, A. 2004. Evidence based guidelines or collectively constructed "mindlines?" Ethnographic study of knowledge management in primary care. BMJ, 329, 1013.
GABBAY, J. & LE MAY, A. E. 2011. Practice-based evidence for healthcare : clinical mindlines, Abingdon, Routledge.
GREENHALGH, T. et al. 2004. Diffusion of innovations in service organizations: systematic review and recommendations. Milbank Q, 82, 581-629.
GROL, R. P. et al. 2007. Planning and studying improvement in patient care: the use of theoretical perspectives. Milbank Q, 85, 93-138.
MICHIE, S. et al. 2005. Making psychological theory useful for implementing evidence based practice: a consensus approach. Qual Saf Health Care, 14, 26-33.
SACKETT, D. L. et al. 1996. Evidence based medicine: what it is and what it isn't. BMJ, 312, 71-2.
ZWOLSMAN, S. et al. 2012. Barriers to GPs' use of evidence-based medicine: a systematic review. Br J Gen Pract, 62, e511-21.

Sunday 7 April 2013

MOCOLS: Massive Online Communities of Learners

Many of you will be aware of the term MOOCS - Massive Open Online Courses - and depending on who you listen to they're either going to sweep away university education as we know it or pass on by like many other fads.

But what I have become aware of is how many dentists and students I communicate with online, via this and other blogs, through the videos I've done on YouTube, my Prezis, LinkedIn and Twitter. Very quickly we engage with others we've never encountered before and can in a flash share some knowledge or learning. For many dentists working on their own in dental practice this exchange may be uncommon, but there is evidence that clinicians who interact with others tend to be more up to date.

One common way of doing this is through something called a "community of practice". This may be just a few people who share a common goal and try to find some better way to achieve it.

So I was just struck that we could be moving toward developing the idea of communities of practice in combination with that ease of access to each other provided by the internet. The time may well be ripe for MOCOLS: Massive Online Communities of Learners.

It needs some development, though...

Sunday 31 March 2013

A PICO Venn Diagram

PICO as a Venn Diagram

I created this Venn diagram to demonstrate how the different components of PICO overlap. As we look for studies we aim to find those that have as many of the components we are interested in as possible.

In this Prezi you can control where you go by clicking on different overlapping parts.
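For anyone who prefers symbols to pictures, the same overlap can be expressed as set intersection. This tiny sketch is my own illustration with invented study identifiers, just to show the idea.

```python
# A tiny sketch of the Venn idea with invented study IDs: each PICO component
# matches a set of studies, and the studies we most want sit in the overlap.
population   = {"s1", "s2", "s3", "s5"}
intervention = {"s1", "s2", "s4"}
comparison   = {"s1", "s2", "s5"}
outcome      = {"s1", "s3", "s4"}

best_matches = population & intervention & comparison & outcome
print(best_matches)  # {'s1'} - the only study matching all four components
```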


Saturday 16 February 2013

Diagnostic codes for GDPs in the UK

Diagnostic codes

Diagnostic codes are a means to record the findings and diagnoses for patients in a standardised manner and have been used in electronic health records for some time in the medical world. But dentistry has lagged behind, with the only items recorded being those related to treatment.

Why record diagnostic codes? 


If we want to assess the suitability of patient management decisions - whether we are talking about diagnostic tests, prevention regimes, treatments or prognoses - we need to know what the decisions relate to.

Say a dentist writes in their notes that they have prescribed Amoxicillin 500mg TDS for 5 days and writes nothing else. We have no idea whether this is the appropriate way to manage whatever it was the patient walked in with. We need to know what diagnosis led to this prescription. Some dentists prescribe only when there is a risk of spreading infection or when incision and drainage of an abscess is impossible. Others prescribe when a patient has an irreversible pulpitis, without opening the pulp chamber, despite the lack of evidence to support this approach.

So we generally expect that a dentist will record a presenting complaint, the results of any investigations and their diagnosis. This is better note-keeping, but because we each come up with different terms and have different shorthand ways of writing our notes it can be difficult to gather comparable information from many dentists.

So diagnostic codes have developed as a means to standardise the diagnoses given by clinicians to a limited set of terms. International organisations have come together to map these terms in different languages too, so that international comparisons can be made.

So what?



Well, it means that one can audit the practice of many hundreds of clinicians using computer records rather than wading through text trying to understand what people have done. QResearch now gathers data from 600 general medical practices in the UK, anonymises the data and allows researchers to assess, for example, the number of patients being prescribed the best medication for a given condition. This is possible because all the practices use the same set of terms for their diagnoses and for the treatment they prescribe. My understanding is that all the medical practice management systems use one terminology or another.
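To show the sort of audit that coded records make trivial - and free-text notes make painful - here is a small sketch picking up the antibiotic example above. The diagnosis labels and the set of "indicated" diagnoses are invented placeholders, not real SNOMED CT terms or clinical guidance.

```python
# A rough sketch of auditing coded records: in what proportion of antibiotic
# prescriptions was a diagnosis recorded for which antibiotics are generally
# indicated? All labels below are invented placeholders.
records = [
    {"diagnosis": "acute_periapical_abscess", "prescribed_antibiotic": True},
    {"diagnosis": "irreversible_pulpitis",    "prescribed_antibiotic": True},
    {"diagnosis": "irreversible_pulpitis",    "prescribed_antibiotic": False},
    {"diagnosis": "gingivitis",               "prescribed_antibiotic": False},
]

indicated = {"acute_periapical_abscess", "spreading_odontogenic_infection"}

prescriptions = [r for r in records if r["prescribed_antibiotic"]]
appropriate = [r for r in prescriptions if r["diagnosis"] in indicated]

print(f"{len(appropriate)}/{len(prescriptions)} antibiotic prescriptions "
      "had an indicated diagnosis recorded")
```

With free-text notes the same audit would mean reading every record by hand; with shared codes it is a one-line query.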

Unfortunately, though, the dental software companies in the UK don't seem to be doing this apart from EMIS Dental. (If I'm wrong please tell me as I'd love to know.) A group of US dental schools are developing and using a terminology set, and a recent conversation with a visiting American dental student suggests this is a routine part of his practice.

SNOMED CT is the international terminology that last year incorporated the American Dental Association's dental terminology. This presentation identifies some others being developed in the US. If you want to have a look at how SNOMED CT works there are various free browsers that let you explore it, such as SNOFLAKE. The terminology can be incorporated into any software with a licence which, I believe, is free in the UK as a member of the International Health Terminology Standards Development Organisation.

I find this quite exciting - if we could persuade dental software companies in the UK to use the codes. It would mean that our dental electronic health records could be used to easily audit our own practice and to potentially create a massive research data set from general dental practice. This in turn could form the basis of large studies to inform us about the prognosis of teeth, the success or otherwise of common interventions and the efficacy of prevention when delivered in primary dental care.

A call to action


NHS Connecting for Health will require SNOMED CT to be part of any electronic health record used in NHS practice. I believe we should go further in dentistry as much of our work is conducted outside the NHS. So I call on UK dentists to begin pestering software developers to incorporate SNOMED CT into their systems and allow general dental practitioners to contribute not only to their own practice improvement but to more meaningful research conducted in primary dental care.


Sunday 13 January 2013

Number Needed to Educate (NNE)

As someone who teaches both general clinical dentistry and, increasingly, evidence-based dentistry to postgraduates and undergraduates, I have been interested in how we actually measure the effectiveness of our teaching. But I have recently become very interested in how we as clinicians are encouraged to move toward more evidence-based practice and how as postgraduate educators we facilitate that in the people who attend our courses.

There's a group within the Cochrane Collaboration called the Effective Practice and Organisation of Care Group (EPOC) and these people have produced a number of fascinating reviews about how effective (or ineffective) different interventions are in producing a change in the professional practice of healthcare workers. Here's a biased sample of the ones that have interested me:


Now, the meta-analyses that have been conducted in these reviews result in the authors giving a summary risk difference. This is the absolute difference in the risk of a professional changing behaviour when they are subjected to, say, a CPD course compared to the risk of them changing their behaviour if, say, they didn't attend the CPD course. Granted, it's a bit odd to describe the chance of converting to a positive behaviour as a risk but that's the way it is. 

So what's interesting to me is that in the systematic review on educational meetings above there was a risk difference of 6% overall of a positive change in practice if people attended educational meetings. But when they looked at studies that tried to change complex behaviour this came down to 0%.

Now I've been thinking that if we were to treat the risk difference (which is absolute) like an absolute risk reduction we'd be able to determine a 'Number Needed to Educate' (NNE): the number of professionals who would need to be educated for a single change in practice to result.

You may be familiar with NNT (number needed to treat), which I have blogged about previously. Basically it describes for interventions the number of patients we would need to treat with a given treatment before we'd expect one positive result. The lower the NNT the more effective the treatment. 
"17 people sit for a whole meeting about a best practice and yet only one of them changes their behaviour."

The NNT is determined by dividing 1 by the absolute risk reduction. So if the absolute risk reduction is 0.1 (or 10%) then we'd need to treat 1/0.1=10 patients to see a benefit.

So could we start to do the same with different ways of changing our behaviour to one that is more evidence-based? If educational meetings have a risk difference of 6% overall, then the NNE (Number Needed to Educate) would be 1/0.06=17 (rounded to nearest whole number). So, get this, 17 professionals would have to sit through an educational meeting for one of them to change their practice to what is considered best practice. That's astounding. 17 people sit for a whole meeting about a best practice and yet only one of them changes their behaviour.
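Here is the same arithmetic as a small sketch, using the 6% overall risk difference quoted above and the zero difference reported for complex behaviours; the function name is my own shorthand, not an established measure.

```python
# A quick sketch of the arithmetic above: treat the (absolute) risk difference
# like an absolute risk reduction and invert it to get a "Number Needed to
# Educate". The 6% and 0% figures are the ones quoted from the EPOC review.
import math

def number_needed_to_educate(risk_difference: float) -> float:
    """Return 1 / risk difference, or infinity when the difference is zero."""
    if risk_difference == 0:
        return math.inf
    return 1 / risk_difference

print(number_needed_to_educate(0.06))  # ~16.7, i.e. the 17 professionals above
print(number_needed_to_educate(0.0))   # inf - complex behaviours in the review
```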

This has totally changed the way I see the group of professionals who attend my EBD courses. If I simply stick to a single educational meeting, I have a 1 in 17 chance of changing their behaviour - or it could be worse, since the risk difference was 0 for complex behaviours (i.e. NNE = 1/0 = infinity: I'd have to educate all the world's dentists in EBD and still I wouldn't expect one to change their behaviour...)

I think it's time to change the way we think about what we aim to do as educators. Whether we're teaching periodontics, endodontics, restorative dentistry - whatever - we need to look at those EPOC reviews and think about how best we facilitate changes in our own and others' behaviour so that as a professional group our practice becomes more evidence-based. It may be that we have to think outside the box and engage in other practices such as finding 'opinion leaders', developing outreach programmes, and encouraging the use of audit while providing feedback.

Anything to make the chances of success better than 1 in 17.