Monday, 16 September 2013

Presenting Evidence on Clinical Topics

How to do a presentation on EBD and Endodontics

A former student contacted me yesterday to ask if I could help provide some guidance on how to go about doing a presentation on EBD and endodontics. Where should he start, he asked, and how should he go about it?

So I thought I'd blog a response in case anyone else has to do an EBD presentation on some aspect of dental care.

A little bit like doing endo itself, preparation is key to a successful outcome, and so it is with a presentation on EBD. We may only have 10 or 20 minutes to present what we have found, but several hours will probably be needed to deliver something that is informative and, importantly, backed by the most up-to-date research.

My first question back to my new colleague was how broad he wanted to be in addressing the topic. Simply being asked to "present on EBD and Endo" unsurprisingly caused him to panic a little. The field is huge, so what should he present on?

Questions

So, as with a research topic or any literature search, developing a clear idea of a clinical question to present evidence on would be my first step. Using the PICO structure (see a blog explaining this here and a prezi here) I would think about whether I was interested in a question about (there's a short sketch after the list showing one way to make this concrete):

  • diagnosis (e.g. how sensitive are tests for non-vitality?) 
  • prevention (e.g. how effective is partial caries removal compared to full caries removal in preventing irreversible pulpal damage?)
  • treatment or interventions (e.g. is one stage endo better than two stage?)
  • prognosis (e.g. what is the success of re-treatment over 10 years?)
  • patient or practitioner experience (e.g. how did clinicians get on with using a particular technique for obturation?)
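To make that structure concrete, here's a minimal sketch in Python - the question and search terms are entirely hypothetical examples of my own, not from any real search - capturing the four PICO elements as data and stringing them together into a crude boolean search:

```python
# Hypothetical PICO question: single-visit vs two-visit root canal treatment.
pico = {
    "population": "patients requiring root canal treatment",
    "intervention": "single-visit root canal treatment",
    "comparison": "two-visit root canal treatment",
    "outcome": "periapical healing",
}

# A real search would expand each concept with synonyms and MeSH terms
# joined by OR; here each concept is just a quoted phrase joined by AND.
search_string = " AND ".join(f'"{term}"' for term in pico.values())
print(search_string)
```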
Sometimes it takes a while to decide how broad or narrow you want to be, and often it is only after you have begun to 'scope' the literature that you get a sense of how much research there is likely to be to help answer your question.

At this stage it is also worth thinking about the best type of study or studies to answer the question. For non-complex interventions a systematic review of randomised controlled trials, or the trials themselves, may be most appropriate. For a prognosis question a cohort study that follows patients with a particular condition over a period of time could be suitable (or indeed one arm of a controlled trial). For a question about experience and values a qualitative study design could be best. The point is that we shouldn't concentrate only on RCTs when looking for evidence, as they are not always the best way to answer certain questions. You can find some guidance on the best types of studies to answer questions here.

Search

The next stage, then, is to look for the research evidence. There is a growing recognition that we need to get better at recognising and being critical of non-research evidence too - particularly our own experience and the views of our patients - and at combining these in an optimal way (we're still working on it...).

But we need to search the various medical databases and search engines to find the research evidence first. An efficient way to do this is to look first of all for summaries of evidence, such as guidelines and systematic reviews. If we can find one that is up to date and relevant to the question we asked then we need not look for primary studies. So I would recommend beginning with the Cochrane Library for reviews or the National Guideline Clearinghouse for guidelines. You might also search the EBD Journal website or the ADA Center for EBD to see if there are any commentaries on relevant research.

Given the time it takes to complete a review or guideline - often a couple of years - anything that is more than a couple of years old is probably out of date, since the most recent research it includes may by then be four years old. So if the review seems old or irrelevant, or there simply isn't one, then we need to look for primary studies. PubMed is a free medical database that allows this. There are a couple of helpful introduction videos by my colleagues at Oxford here and here.

As you become more familiar with PubMed you can cut down the number of results by applying filters for systematic reviews or randomised controlled trials. A video explaining this is here. The advantage is that we can reduce the number of articles we have to look through from hundreds or thousands to maybe dozens or fewer.
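If you're comfortable with a little code, you can run the same kind of filtered search against PubMed programmatically via the NCBI E-utilities API. Here's a minimal sketch - the query terms are only an example of mine, and it's worth double-checking PubMed's current filter syntax before leaning on it:

```python
# Sketch: search PubMed via NCBI E-utilities for RCTs on an example topic.
import requests

ESEARCH_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

params = {
    "db": "pubmed",
    # Free-text terms plus a publication-type filter to restrict the
    # results to randomised controlled trials.
    "term": "root canal therapy AND single visit AND randomized controlled trial[pt]",
    "retmax": 20,
    "retmode": "json",
}

response = requests.get(ESEARCH_URL, params=params, timeout=30)
response.raise_for_status()
result = response.json()["esearchresult"]

print(f"{result['count']} records found; first {len(result['idlist'])} PMIDs:")
for pmid in result["idlist"]:
    print(pmid)
```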

Of course, you could also ask the excellent library staff at the British Dental Association to do a search for you. As with your own search, it's best to have a clear question to give them, or they may end up searching for things you're not interested in. This service is free to members of the association. The BDA also houses collections of papers on over 500 topics at their London site, and these can be posted out to members at no charge.

Accessing the research you find

One of the biggest problems we face is that much of the research out there sits behind a paywall, and few of us are willing to fork out $25 to read a paper that may be irrelevant or of poor quality. I have blogged here about this problem. Again, the BDA can help out, but at a cost of £2.50 per article. Unfortunately, my experience with Athens is that it provides minimal access to relevant journals. This is why up-to-date Cochrane Reviews are so valuable to us, as they're free to anyone in the UK and several other countries (see here if you're not sure).

Get critical

Not all research is equal in terms of its validity. If you manage to find a systematic review there should be an indication of the quality of the primary studies included. There are various schemes for this, and Cochrane now use the GRADE criteria, which rate the evidence from very low to high quality. But if you're reading the primary studies yourself, checklists such as those produced by the CASP organisation are helpful for quickly getting a sense of the methodological quality of a study and its usefulness to you.

My personal view is that any CPD presentation ought to indicate the quality of the evidence being presented. Normally on a course there is a mix of personal experience and research evidence, and I think that we are entitled to know which is being used. Likewise, if we are to do a presentation on the evidence base for different topics around endo, then the audience should be given a summary of how strong the evidence is. After all, why go through the cost and time of changing one's practice if the only research suggesting you should is of very low quality? The quality of the evidence should determine whether or not we consider implementing it.

Implementation

One of the most complex areas in EBD is how to go about implementing change based on high-quality evidence. It is recognised that most of us transform research findings rather than implementing them exactly as reported in the research. I think that in a presentation it would be helpful to discuss what the barriers to changing practice are and how we might go about reducing them. Perhaps we need to compromise on some element of the protocols suggested by the research to make them practical and cost-effective in our practice. Perhaps we need to think about forming a group to keep each other motivated as we seek to change practice, since most of us are very poor at changing what we do on our own.

I would include these points in any presentation, as EBD is useless unless this final step is achieved.

I won't talk about presentation skills here - there are many people far more gifted at those than me - but I take inspiration from Steve Jobs, who rarely used a script and stuck to simple messages with plenty of graphics to enthuse the Apple-lovers out there.

Happy EBD presenting :)

Wednesday, 21 August 2013

Why quantitative studies cannot deliver evidence-based practice alone

Qualitative methodology is pants and has no role in evidence-based practice


It's not uncommon to share a room with a colleague who is repelled by the idea that qualitative research could contribute to improving patient care. There are many more (and I was one) who just don't get where qualitative research fits in and it seems to me that the evidence-based practice (EBP) movement, in some cases deliberately, in others not, has fostered an ideal. And that ideal is quantitative.

For the study of the efficacy (how well it worked in the study) and, indeed, the effectiveness (how well it worked in practice) of a drug, the randomised controlled trial - with its quantitative output of numerical data on success or otherwise in treating a given condition - is the ideal, and I would not argue otherwise. For the limited context and the restricted set of patients in which the trial is conducted, if well done, it will allow some estimate of the "truth" of the efficacy of the treatment. At least for the outcomes being measured.

But evidence-based practice is about much more than a risk or odds ratio and p-values or confidence intervals. These numerical - quantitative - outputs are but a tiny element of what I understand as evidence-based practice (EBP).
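For anyone hazy on what those numbers actually are, here's a quick illustration - with entirely invented figures - of the kind of quantitative output a trial report boils down to:

```python
# Invented example: 40/200 failures with treatment A, 60/200 with treatment B.
risk_a = 40 / 200   # 0.20
risk_b = 60 / 200   # 0.30

risk_ratio = risk_a / risk_b                                     # ~0.67
odds_ratio = (risk_a / (1 - risk_a)) / (risk_b / (1 - risk_b))   # ~0.58

print(f"risk ratio = {risk_ratio:.2f}, odds ratio = {odds_ratio:.2f}")
```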

Rather than look at qualitative studies per se - e.g. "how do dentists use, or not use, evidence in their practices?" (as opposed to "how many use evidence?") - for this post I just want to draw attention to the way we use qualitative methods to deal with quantitative data in EBP.

Or perhaps it does...


Since the early days of EBP there has been 1) a need to consider the patient's values and aspirations, 2) a need to consider our own experience and expertise, and 3) a requirement to critically appraise the literature we read. Let's not forget, of course, that there's also been a requirement to use the best available research to inform the discussion.

So if I have a study that tells me that putting a stainless steel crown on a carious deciduous tooth, rather than filling it with a composite, will result in 12% fewer such teeth needing to be extracted, I am grateful for this quantitative information on the efficacy of the intervention. I need this to understand what the potential benefit of using it in my patient could be from the point of view of losing a tooth.
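To put a quick number on that - reading the hypothetical 12% as an absolute risk reduction, which is my assumption here, not the study's - the number needed to treat follows directly:

```python
# Assumption: the 12% is an absolute risk reduction (ARR).
arr = 0.12
nnt = 1 / arr   # number needed to treat
print(f"NNT = {nnt:.1f}")   # ~8.3: crown roughly 9 teeth to prevent one extraction
```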

Qualitative critical appraisal



However, in order to evaluate the risk of bias - that is, the risk that the result is not the true reduction in tooth loss because of some systematic error in the design of the study - I would critically appraise it. The thing is that there don't seem to be reliable quantitative ways of doing this. We can score whether the two groups were "randomised" or not - perhaps with a 1 for yes and a 0 for no - but very quickly we ask: how were they randomised, and what effect does it have if they don't tell us? We might see a table of baseline characteristics showing a difference in the baseline amount of caries in the average child in each group - but what does that mean for the results? Perhaps the p-value is 0.04 or perhaps it is 0.004 - how do these different levels of confidence in the estimate of the truth affect the way we think about the results?

These are not questions that can be answered reliably in quantitative terms. In a sense we are analysing a text - the report of a study - to try and construct some idea of what it means. Does this explanation mean this is likely to be a reliable study or not? And this, I would argue, is a qualitative process: we are constructing an idea in our heads of whether we think the story the report tells is likely to be the truth or not. Someone else could well construct a different opinion that is contrary to ours. How many times have you read in systematic reviews that disagreements were resolved through consensus or by a third reviewer?

Qualitative understanding of patient values and clinical experience and expertise


What about the other two essential elements of evidence-based practice - the patient's values and our own experience and expertise? Here again it is hard to see how we can avoid using qualitative methods, and it is here that quantitative methods fail.

Contrary to the positivist "truth" from the study, for a patient the truth of what is - for want of a better term - in their interests, and of what meets their values and aspirations, could be very different. Perhaps the outcome of the study is not the outcome that interests them. Or perhaps, even if it is, they ascribe a different value to a tooth lasting only one year rather than five.

Likewise, the truth for the clinician about the effectiveness of the treatment may be vastly at odds with the researchers' results, as they try to run a small business, manage a clinic, decide which hands-on courses to attend (and which not to), and make sense of their colleagues' opinions about the research, its value, and their own experience of using the treatment...

The questions of why people do things, and what drives them to act or not, are inherently qualitative, and as clinicians trying to practice in an evidence-based way we make decisions of this kind every day.

So I guess my conclusion here is that, as we teach and train colleagues and students to practice EBP, we should not forget the essential role that qualitative methods play in making sense of quantitative data and helping us use it where it is appropriate. As we move forward we may want to think about how we develop some rigour in this process, as the various tools for critical appraisal have sought to do.

Monday, 18 July 2011

Critical appraisal tools


We're running the introduction to EBD course for 2nd years. A couple of students asked for more information about assessing the papers they retrieve.

It is useful to have a framework to guide us when appraising a paper. There are several resources out there that can help, but here's a link to a website with a number of critical appraisal tools for different types of study:

http://www.sph.nhs.uk/what-we-do/public-health-workforce/resources/critical-appraisals-skills-programme

Hope that helps.