Admissible evidence
PETER CROFT
University of Keele, School of Postgraduate Medicine, Industrial and Community Health Research Centre, Thornburrow Drive, Hartshill, Stoke on Trent ST4 7QB


An article appeared recently in a journal for general practitioners, in which the author (a general practitioner) told the story of his own shoulder pain.1 He had had some physiotherapy that made it feel better, and then had read a meta-analysis that said that physiotherapy was of no proven benefit for shoulder pain. His shoulder pain had recurred and he had had an injection that made it feel a lot better, and then had read a meta-analysis that concluded that there was little scientific basis for injecting the shoulder. “Are we using the right sort of evidence?” he pleaded in desperation.

Musculoskeletal doctors will recognise that plea. The GP had admittedly overinterpreted the meta-analyses, because each one would have ended with the observation that “there have been too few randomised controlled trials of sufficient quality on which to base a judgement of effectiveness”. This does not mean that physiotherapy or shoulder injections are useless. The absence of evidence is a very different affair from clear evidence that an intervention does not work. Why then do so many clinicians feel sympathy with the perplexed GP?

If it moves, randomise it

The proponents of evidence-based medicine are careful to emphasise that all types of evidence are admissible in judging the most effective and efficient way to treat an individual patient.2 However, the main manifestation of evidence-based medicine worldwide (the Cochrane Collaboration) gives primacy to evidence from randomised controlled trials. At a strictly scientific level this is fine: it is the closest we can come in the dirty reality of daily life to the paradigm of the laboratory experiment. By randomising we create groups of patients who do not differ in any systematic way except for the one in which we are interested (the intervention). The comparison is thus focused on the intervention, uncluttered by the confounding factors that influence choice of treatment in practice. Cochrane’s notion was that we could “RCT” anything.3 This leaves us with a few problems, however.

Firstly, in many fields of health care we are not very good at randomised controlled trials. That is the real conclusion of those meta-analyses of shoulder trials. An overview of systematic reviews in the whole field of soft tissue rheumatism would conclude that there are very rarely sufficient trials of good quality to produce a definite result. It is no coincidence that the great achievements of the randomised controlled trial have been in more clearcut diagnostic areas than musculoskeletal pain, when mortality or well defined morbidity are the outcomes, and when pharmaceutical or technological interventions are under scrutiny. The use of aspirin after heart attacks to prevent death and recurrence is a classic example.4 Things are more difficult in the rheumatological field where diagnosis is uncertain, outcome non-specific, understanding poor, and the mechanism of many interventions unexplained. The sheer complexity of symptoms weighs against easy answers, and the “availability of evidence will vary enormously from specialty to specialty”.5

Secondly, the strongest influences on outcome in the field of musculoskeletal pain are often likely to be non-specific aspects of the consultation. The reduction in pain observed in placebo groups during trials of treatments is one example. At the moment many RCTs go forward on the basis that “we have these two treatments which have been around for a long time, let’s randomise patients and observe which intervention is better”. We have to accept, however, that much of the improvement will occur because of factors that we neither understand nor have the ability to control for. The arts of the healer are still important, as is patient choice: witness the finding in a Dutch trial of shoulder treatment that patients did better when randomised to the treatment they had indicated in advance they would prefer to receive.6

Such concerns make clinicians nervous about evidence-based medicine and have spawned vituperative exchanges in the literature about its place and authority. Furthermore, the proposed practice of evidence-based medicine—the idea that clinicians should search the literature around an individual consultation to reach an informed judgement on the best supported treatment—has itself been challenged. (“We deplore attempts to foist [evidence-based medicine] on the profession as a discipline in itself,” thundered the Lancet.7) Indeed, the obvious question is: where is the evidence that this method of practice is better than others?

However, we need to avoid throwing the baby out with the bathwater. The RCT can shed light on areas of ignorance, and the systematic reviewing of all available trials does help us to define where those areas of ignorance lie. As long as we see the RCT as one method among many, as one piece of evidence about one aspect of practice, we can value it without expecting it to provide the answer to everything.

Questions can also be raised about the current scientific status of the systematic reviewing and meta-analysis of RCT evidence. Take a recent publication on placebo controlled trials of topical non-steroidal anti-inflammatory drugs.8 The authors highlighted the scepticism among doctors about the value of such preparations, which have been blacklisted by some purchasers because of an apparent lack of evidence for their efficacy. The review found overall evidence of their effectiveness in relieving pain in acute and chronic conditions, and this result may very reasonably influence practice. Contrast this with a recent and similarly impressive systematic review of trials of homeopathy.9 Despite the article’s conclusion of overall effectiveness in a variety of conditions, two editorialists found this difficult to swallow because there is no rational mechanism for homeopathy. Mutterings about small trials, wide variation in treatment preparations, and biased selection of studies have since been heard in relation to this homeopathy review, but not about the NSAID paper. For a prominent newspaper columnist, the homeopathy review has “been demolished”.10 Science is fine, it seems, so long as it fits our beliefs.

Who pays the purchaser?

Perhaps the most troublesome aspect of evidence-based medicine has been the eagerness with which health care purchasers have embraced the concept that only care of proven effectiveness should be purchased. There is a danger of inequity here. A policy that good RCT evidence must support purchasing decisions means that those topic areas in which RCTs are easier to do, and are more clearcut in their results, will get preferential support. Preventive medicine goes by the board, as does care of the elderly and disabled—the groups in whom musculoskeletal conditions dominate. Such topics do not provide the same level of hard evidence for purchasing that drugs and technology can.

What is more, evidence-based purchasing might mean that responsibility for decisions that are heavily value laden is off-loaded on to some seemingly neutral science. Hunter (a professor of health policy) has voiced concern that evidence-based purchasing may become an instrument of control, stifling debate under the illusory banner of scientific certainty.11 Klein points more gently to the likelihood that the current purchaser enthusiasm for scientific decision making will end in disillusionment with science, because of excessive expectations as to what it can deliver.12 In particular, there is no guarantee that purchasing based on best evidence will inevitably reduce the costs of health care.

Other types of evidence

Leading evidence-based medicine clinicians have always emphasised the range of evidence that can be used in clinical decision making,2 although the practice of evidence-based medicine has focused on the RCT. Given the absence of RCT evidence for much activity in the field of soft tissue rheumatism for example, are there other types of evidence we should consider?

Firstly, “no evidence” does not mean “no action”. Sensible clinical experience, for a start, can provide clear insights into under-researched areas of activity, such as the management of neck pain in primary care (see acknowledgements).

Secondly, observational studies of prognosis can be hugely informative. A prospective study of low back pain in America followed up different groups of patients who had variously presented their acute problem to surgeons, family practitioners, chiropractors, physical therapists, or insurance doctors.13 Six months on, the improvement was remarkably similar in all groups—chiropractic cost more because it involved more visits, but satisfaction levels among the chiropractic patients were higher. Such a study raises important issues: for example, is the initial patient freedom (to choose whom to attend) therapeutically important? It also provides insights into the “natural history” of acute low back pain.

Thirdly, studies of diagnosis and referral indications can inform clinical decision making. Many diagnostic labels in musculoskeletal medicine remain “romantic”—derived from authority rather than from scientific study. For example, radiographic studies have questioned the idea that cervical spine degeneration bears any strong relation to neck symptoms.14 So why use the term “cervical spondylosis”? Indeed, are labels or details from a radiographic report actually harmful to patients? In weighing the evidence for and against radiography of the spine, many clinicians will have found themselves explaining the evidence about radiation levels and the poor predictive value of radiographs in detecting serious disease, while listening to GPs and patients pointing to the “reassurance” of a normal radiograph.

All such evidence—from the epidemiological to the biological to the anthropological—is surely admissible. Appropriately chosen RCTs can inform decisions; good prognostic and diagnostic studies will do likewise. A major proponent of evidence-based medicine advises that evidence-based medicine should “build upon . . . the evidence gained from good clinical skills and sound clinical experience”.15 Many musculoskeletal clinicians will agree with a group of GP authors who made “no apologies for the inevitable and difficult process of interpreting and integrating scientific evidence with personal experience and knowledge of our patients”.16

Making choices: adding the context

The crucial step for clinician or patient in using scientific evidence is one that is understated in talk of randomisation, bias, and validity. It is the step from the science to the real world, and in particular from the group to the individual patient. Any study is carried out on a select population, at a unique moment in time, entailing specific types of intervention. That does not weaken the science, but it does mean that the science can only partially inform the discussion about what to do in your own patch. The science has to be given a context to be interpreted. If it is not, it leads to what Grimley Evans has called “evidence-biased medicine”.17

When the Medical Research Council published its follow up study comparing chiropractic with usual NHS physiotherapy in the treatment of low back pain,18 the first question the morning radio interviewers asked was “does this mean that GPs should send all their back pain patients for chiropractic?” This very practical question could not be answered: the subjects in the trial were volunteers, they all had radiography, there had been many exclusions, and the physios had shouted “foul” because they felt that it was not their discipline but the circumstances under which they worked in general hospital settings that had been under trial. In other words, the “context” was at issue. But the overall improvement observed in the disability score in the chiropractic group was superior to that in the physiotherapy group. This at least can help planners to think of chiropractic as an option. However, chiropractic worked “on average”, and the GP contemplating the individual patient is assailed with averages—on average a radiograph will be of no help, on average chiropractic might provide some benefit. And of course, on average, whatever is done, the patient is likely to feel better in a week or two. Putting a range of evidence into the context of the individual patient’s history and circumstances is all part of the clinician’s daily work, but it does not describe the main job of purchasers or public health doctors.

Conclusion

The speed with which the phrase “evidence-based medicine” has entered our language has been astonishing. It has carried undoubted benefits in its wake, not least the realisation that a literature search will never be the same again and that the narrative review has a strong rival. There is a duty in health care to bring the best that science can offer to the patient, but the individual consultation may contain far more than can be guided by the RCT. The breadth of evidence that is admissible in considering the individual patient must be re-emphasised. In doing so it becomes clear that much that appears new is as old as the healer’s art itself.

Acknowledgments

Thanks to the Primary Care Rheumatology Society members with whom I worked on the problem of neck pain in primary care and to Heiner Raspe who introduced me to the literature on evidence-based medicine. Jan Cohen typed the manuscript.

References