Traditional approaches to disseminating research findings have failed to deliver optimal quality of care.
In a systematic review of 235 studies of guideline dissemination and implementation strategies, we observed the following:
- there was a median 10% improvement across studies, suggesting that it is possible to change healthcare provider behaviour and improve quality of care;
- most dissemination and implementation strategies resulted in small to moderate improvements in care; and
- multifaceted interventions did not appear more effective than single interventions.
The interpretation of our systematic review is hindered by the lack of a robust theoretical base for understanding healthcare provider and organisational behaviour.
Future research is required to develop a better theoretical base and to evaluate further guideline dissemination and implementation strategies.
The dissemination of new research knowledge into healthcare has largely depended on publication of research in peer-reviewed journals and on continuing medical education programs. However, the effectiveness of these approaches has been questioned. Studies in the United States and the Netherlands suggest that 30%–40% of patients do not receive care complying with current scientific evidence and 20%–25% of the care provided is not needed or potentially harmful.1,2
Over the past decade, the consistent evidence that these dissemination methods do not result in optimal levels of care has led to increased efforts by policymakers and professionals to identify more effective implementation strategies. The Clinical Research Roundtable at the US Institute of Medicine recently suggested that failure to translate new knowledge into clinical practice and healthcare decision making was one of the two major barriers preventing human benefit from advances in biomedical sciences.3 In 1997, Grol observed that many current approaches to implementation are based on participants’ beliefs rather than evidence about the likely effectiveness of different approaches.4 He challenged healthcare systems to develop and use a robust evidence base to support the choice of implementation strategies, arguing that “evidence-based medicine should be complemented by evidence-based implementation”.4 How far are we from meeting this challenge?
National implementation research programs have been conducted in the Netherlands, the United Kingdom and the United States.5,6 We have recently completed a systematic review of 235 rigorous evaluations of different guideline dissemination and implementation strategies published up to 1998.7 The good news is that our review suggests that it is possible to change healthcare provider behaviour. Improvements in process-of-care indicators (eg, percentage compliance with guidelines) were observed in 86% of studies, with the median effect size across all studies being an absolute improvement of about 10% in process-of-care indicators. While these effect sizes may be considered modest, from a population-health perspective they are likely to be clinically important.
Most dissemination and implementation strategies resulted in small to moderate improvements in care. For example, the median absolute improvement in performance across interventions was 14.1% in 14 cluster-randomised controlled trials (C-RCTs) of reminders, 8.1% in four C-RCTs of dissemination of educational materials, 7.0% in five C-RCTs of audit and feedback, and 6.0% in 13 C-RCTs of multifaceted interventions involving educational outreach. There was considerable variation in the observed effects within interventions: for example, the absolute improvements in performance across the C-RCTs of reminders ranged from –1.0% to +34.0%. Multifaceted interventions did not appear to be more effective than single interventions. Furthermore, we found the generalisability of the reported findings to other behaviours and settings to be uncertain, as most studies provided no rationale for their choice of intervention and gave only limited descriptions of the interventions and contextual data. Fewer than a third of studies reported any data on the resources required for the implementation strategy.
The UK Medical Research Council recently proposed a sequential framework for evaluating complex interventions such as implementation strategies.8 This scheme involves:
- development of the theoretical basis for an intervention;
- definition of the components of the intervention (using modelling or simulation techniques and qualitative methods);
- exploratory studies to further develop the intervention and plan a definitive evaluative study (using a variety of methods); and
- a definitive evaluative study (preferably an RCT).
The framework recognises the benefits of establishing the theoretical basis of interventions and conducting exploratory studies to choose and refine interventions in order to minimise the number of costly “definitive” RCTs.
Although most of the studies included in our systematic review of guideline dissemination and implementation strategies could be considered “definitive” evaluations, there was little evidence that the investigators had developed a theoretical model to guide their choice of intervention. As a result, in many of the studies it was unclear why investigators had chosen a particular intervention, and we were not sure how to interpret the study results or how to assess their generalisability to different targeted behaviours, providers and contexts.
Most of the theoretical research on implementation has attempted to develop broad frameworks that capture all factors that may influence behaviour. The resulting frameworks have usually been descriptive, identifying factors that have facilitated or hindered the adoption of evidence-based practice. However, these frameworks provide little information about which factors are most important in facilitating or hindering change, or which interventions may be useful in specific settings.
An important focus for future research should be to develop a better theoretical understanding of professional and organisational behaviour change. Ferlie and Shortell9 have suggested four levels at which interventions to improve the quality of healthcare might operate:
- the individual health professional;
- healthcare groups or teams;
- organisations providing healthcare; and
- the larger healthcare system or environment in which individual organisations are embedded.
To develop a full scientific rationale for interventions to produce behaviour change in healthcare, we need to consider educational, behavioural, social and organisational theories relevant to each of these four levels. There are many such theories, but their applicability to healthcare professional and organisational behaviour has yet to be established. Further research is needed to test the applicability of such theories in healthcare settings and to rigorously evaluate different dissemination and implementation strategies.
Thus, we are currently some way from meeting Grol’s challenge.4 Decision makers still need to use considerable judgement about which interventions are most likely to succeed, after considering the feasibility, costs and benefits that particular interventions are likely to yield. Nevertheless, there are grounds for optimism; it is possible to achieve clinically important practice changes with current interventions that appear to be largely based on the considered “gut instincts” of investigators.
We believe that establishing an empirically tested theoretical base for healthcare professional and organisational behaviour is likely to lead to incrementally more effective interventions. This task will require sustained investment and support from research funders, the development of interdisciplinary research teams, and the support of healthcare systems and professionals, but does not seem any more inherently difficult or problematic than other challenges facing the health research enterprise.
- 1. Schuster M, McGlynn E, Brook RH. How good is the quality of health care in the United States? Milbank Q 1998; 76: 517-563.
- 2. Grol R. Successes and failures in the implementation of evidence-based guidelines for clinical practice. Med Care 2001; 39(8 Suppl 2): II46-II54.
- 3. Sung NS, Crowley WF, Genel M, et al. Central challenges facing the national clinical research enterprise. JAMA 2003; 289: 1278-1287.
- 4. Grol R. Beliefs and evidence in changing clinical practice. BMJ 1997; 315: 418-421.
- 5. Hanney S, Soper B, Buxton M. Evaluation of the NHS R&D Implementation Methods Programme. London: Health Economics Research Group, Brunel University, 2003.
- 6. Agency for Healthcare Research and Quality. Translating research into practice II (TRIP II). Washington, DC: AHRQ, 2001.
- 7. Grimshaw JM, Thomas RE, MacLennan G, et al. Effectiveness and efficiency of guideline dissemination and implementation strategies. Health Technol Assess 2004. In press.
- 8. Medical Research Council. A framework for development and evaluation of RCTs for complex interventions to improve health. London: Medical Research Council, 2000. Available at: www.mrc.ac.uk/pdf-mrc_cpr.pdf (accessed Jan 2004).
- 9. Ferlie EB, Shortell SM. Improving the quality of health care in the United Kingdom and the United States: a framework for change. Milbank Q 2001; 79: 281-315.