
The Monash University Consortium: factors involved in the local implementation of clinical evidence into practice

Malcolm K Horne
Med J Aust 2004; 180 (10): S89. doi: 10.5694/j.1326-5377.2004.tb06077.x
Published online: 17 May 2004

Abstract

  • As part of the Clinical Support Systems Program, the Monash University Consortium conducted a project to identify factors influencing the implementation of clinical evidence into routine hospital practice.

  • Training was required in the process of clinical practice improvement (CPI) and the nature of evidence.

  • One of the most helpful instruments for change was to point to active models of quality assurance as exemplars.

  • Staff can be trained to be good managers, but leadership is less susceptible to training and is better obtained by selective recruitment.

  • CPI requires rapid feedback on the effectiveness of the implementation. Central to this are access to such information and the confluence of management skill, the ability to translate research evidence into routine clinical behaviour, and an understanding of the quality assurance process.

  • Effective CPI is only possible when the larger hospital administrative culture is committed to providing the necessary resources.

As part of the Clinical Support Systems Program (CSSP), the Monash University Consortium investigated the application of a clinical practice improvement (CPI) model. The Consortium set out to examine what factors impeded the implementation of clinical evidence into the routine practice of an acute hospital team.

Specifically, we compared and contrasted the approaches to stroke management at four different hospitals in Victoria: Monash Medical Centre (a tertiary teaching hospital), Frankston Hospital (a metropolitan general hospital), St Frances Xavier Cabrini Hospital (a large private hospital) and Warragul Hospital (a rural hospital).

Team leaders from each site (Paul Talman, Director of Stroke, Monash Medical Centre; Prakesh Nayagam, Senior Physician, Frankston Hospital; Judith Frayne, Senior Physician, Cabrini Hospital; and Bruce Maydom, Senior Physician, Warragul Hospital) reviewed the evidence for activities or treatments that represented “best practice” in managing acute stroke. This was done through a series of workshops in which indicators or surrogate markers that would indicate implementation of best practice were developed.

The aim was to gather a relatively small, manageable core data set to establish whether each patient received “best practice” according to the indicators agreed upon, rather than to accumulate comprehensive data about each of the decision points in the protocol. Initially, the data were collected on paper; subsequent computerisation of data collection allowed much more rapid and timely feedback.

Before undertaking the study, we developed a number of clinical and organisational strategies to promote the recording of the core data set and to encourage awareness of the importance of implementing evidence in routine clinical practice. These included newsletters, education programs on the clinical consequences of failing to implement best practice, and a “league table” comparing the performance of the four teams. These interventions were implemented locally and monitored through a cyclical process of planning, implementation, measurement (data collection) and review. However, the point of the project was not so much to collect and analyse the data as to observe and document the processes, procedures, obstacles and efficiencies of each team in trying to implement and measure the activities or treatments identified as best practice. In each cycle, the local sites had full responsibility for implementation and measurement. In the initial cycles, a group of representatives from each site carried out planning and review; in later cycles, each site was encouraged to develop local interventions and to take full responsibility for planning and review as well as implementation and measurement.
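
The measurement step of this cycle can be illustrated with a minimal sketch: a small core data set is recorded per patient, and compliance with each agreed “best practice” indicator is computed for feedback at review. The indicator names below are hypothetical (the article does not list the actual indicator set), and the code is an illustrative sketch, not the Consortium's actual data system.

```python
# Hypothetical stroke-care indicators; the real indicator set agreed by the
# four teams is not listed in the article.
INDICATORS = ["ct_within_24h", "swallow_screen", "aspirin_given"]

def compliance(patients):
    """Return the proportion of patients meeting each indicator.

    Each patient is a dict mapping indicator name -> bool; a missing key
    counts as the indicator not being met.
    """
    rates = {}
    for ind in INDICATORS:
        met = sum(1 for p in patients if p.get(ind))
        rates[ind] = met / len(patients) if patients else 0.0
    return rates

# One measurement pass over a (toy) core data set, feeding the review step.
patients = [
    {"ct_within_24h": True, "swallow_screen": True, "aspirin_given": False},
    {"ct_within_24h": True, "swallow_screen": False, "aspirin_given": True},
]
rates = compliance(patients)
for ind, rate in sorted(rates.items()):
    print(f"{ind}: {rate:.0%}")
```

Keeping the data set this small is the point: the team can analyse it immediately, adjust the questions or its practice, and repeat the cycle, in contrast to the end-of-study analysis of an RCT.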

Project outcomes

In the two years of the CSSP (2000–2002) we identified some important factors that influence whether evidence can be put into practice. In the main, these factors are qualitative and subjective. While this does not minimise their importance, it makes it more difficult to be categorical about ensuring that routine practice is based on evidence. Nevertheless, we contend that, unless these factors are addressed and resolved in a way that is locally relevant, it is unlikely that evidence will be implemented into practice in a sustainable way.

Training and education

We were surprised at the degree of training and education required to enable staff to participate effectively in the CPI project. Education was needed to overcome the mindset of the randomised controlled trial (RCT), which is deeply entrenched in the clinical psyche and was a major hurdle to implementing CPI. In the RCT model, professional data collectors gather often voluminous, detailed and all-inclusive data to be analysed at the end of a study, while trying to minimise the impact on day-to-day clinical practice. In contrast, CPI is a cyclical process carried out by the clinical team, who themselves design a small study to collect only the data relevant to a utilitarian question about practice. The clinical team collects the data, immediately analyses them for their utility, modifies the questions, the data collection process or clinical behaviour, and repeats the cycle.1,2 Thus, while the RCT is central to the acquisition of evidence, it can distract from the introduction of CPI. To firmly entrench CPI in clinical practice, training needs to be directed at instilling the principles outlined in Box 1.

Training improved the skill of the team in recognising and understanding the nature of evidence and in identifying and demonstrating that evidence had been translated into practice. While this seems simple enough, it belies the underlying complexities and steps required to bring the staff to this level. Establishing this skill level took much of the two years of the CSSP at Monash — indeed, it was not possible to examine some of the other important structural, organisational and attitudinal factors that affect the implementation of best practice until this was achieved.

Leadership

The distinction between management (which implies preserving the status quo) and leadership (which calls for responsiveness to change, creative problem-solving and the ability to motivate others) has been made before,3-5 and the differences were clearly observable in our study (Box 2). The hospital context, which bolsters clinical safety and efficiency through routine and reiteration, emphasises management. Management processes of this kind are well developed, and consequently educating staff about management was not the principal issue. Instead, management needed to be harnessed and redirected to the task of day-to-day collecting and reviewing of data for evidence of best practice. On the other hand, the process of creating change and of carving out administrative “space” and resources requires leadership. At some of our sites, both leadership and management were apparent by the end of the project. While desirable management characteristics were readily apparent and have been well described in the past, the qualities of good leadership are less tangible (although leadership was usually recognisable when present). We also suspect that leadership may be less susceptible to training and that good leaders may be best obtained by selective recruitment.

Organisational support for cultural change

For the CPI process to become embedded in clinical practice, cultural change must occur. Staff require confidence to question their own practice and to experiment in ways that require long-term personal and institutional investment. This is only possible when the larger hospital administrative culture places sufficient store on implementing best practice to provide the necessary resources. These may include computer support or funded time for staff to devote to education and data collection. Perhaps most importantly, institutions must provide buffers against demoralising structural changes that cut across hard-won improvements in practice. Regrettably, we found little evidence that the commitment of institutional administrators to best practice matched the commitment of their clinical staff, despite the potential benefits to the institution of adopting a CSSP approach (Box 4).

  • Malcolm K Horne

  • Howard Florey Institute, University of Melbourne, Parkville, VIC.



  • 1. Deming WE. Quality, productivity, and competitive position. Cambridge, Mass: Massachusetts Institute of Technology Center for Advanced Engineering Study, 1982.
  • 2. Joss R, Kogan M. Advancing quality: total quality management in the National Health Service. Buckingham, UK: Open University Press, 1995.
  • 3. Schein E. Organizational culture and leadership. 2nd ed. San Francisco: Jossey Bass, 1992.
  • 4. The leaderlab model. Center for Creative Leadership, 2000. Available at: www.ccl.org/CCLCommerce/index.aspx (accessed Apr 2004).
  • 5. Senge PM. The fifth discipline: the art and practice of the learning organization. London: Random House, 1990.
