Can clinical governance deliver quality improvement in Australian general practice and primary care? A systematic review of the evidence

Christine B Phillips, Christopher M Pearce, Sally Hall, Joanne Travaglia, Simon de Lusignan, Tom Love and Marjan Kljakovic
Med J Aust 2010; 193 (10): 602-607. doi: 10.5694/j.1326-5377.2010.tb04071.x
Published online: 15 November 2010


Objectives: To review the literature on different models of clinical governance, to explore their relevance to Australian primary health care, and to assess their potential contributions to quality and safety.

Data sources: We searched 25 electronic databases, scanned reference lists of articles and consulted experts in the field. The search was largely limited to publications in English after 1999, but a search of the German-language literature for a specific model type was also undertaken. The grey literature was explored through a hand search of the medical trade press and the websites of relevant national and international clearing houses and professional or industry bodies. Eleven software packages commonly used in Australian general practice were reviewed for any potential contribution to clinical governance.

Study selection: 19 high-quality studies that assessed outcomes were included.

Data extraction: All abstracts were screened by one researcher, and 10% were screened by a second researcher to crosscheck screening quality. Studies were reviewed and coded by four reviewers, with all studies being rated using standard critical appraisal tools such as the Strengthening the Reporting of Observational Studies in Epidemiology checklist. Two researchers reviewed the Australian general practice software. Interviews were conducted with 16 informants representing service, regional primary health care, national and international perspectives.

Data synthesis: Most evidence supports governance models which use targeted, peer-led feedback on the clinician’s own practice. Strategies most used in clinical governance models were audit, performance against indicators, and peer-led reflection on evidence or performance.

Conclusions: The evidence base for clinical governance is fragmented, and focuses mainly on process rather than outcomes. Few publications address models that enhance safety, efficiency, sustainability and the economics of primary health care. Locally relevant clinical indicators, the use of computerised medical record systems, regional primary health care organisations that have the capacity to support the uptake of clinical governance at the practice level, and learning from the Aboriginal community-controlled sector will help integrate clinical governance into primary care.

Improving the quality of primary health care is a complex undertaking. Over the past decade, clinical governance has been promoted as a systematic, integrated approach to assuring safe, good quality health care.1 Its intent is to move beyond the organisational “magic bullet” of single strategies (eg, professional education, audit, or risk management) to a systematic, multifaceted approach to quality improvement using a range of locally implemented strategies.

In Australia, most work on clinical governance has been in hospitals.2 In the past 5 years, however, there has been an emerging interest in clinical governance in primary health care. Failures in health care delivery occur frequently in primary health care, just as they do in hospitals. In a study using anonymous reporting of adverse outcomes in Australian general practice, errors were estimated to occur in one in 1000 consultations;3 of 433 errors, nearly 70% were attributed to failures in the processes of health care, and 30% to the failure of individual general practitioners’ knowledge and skills.4 Partly in response to this, the Royal Australian College of General Practitioners (RACGP) recently introduced clinical governance into its accreditation standards.5 The Victorian Healthcare Association has developed a range of resources and guidelines for clinical governance in community health.6 Improving quality, safety, accountability and performance is one of the four priorities in the draft national primary care strategy.7 The final report of the National Health and Hospitals Reform Commission (NHHRC) advocated strategies and infrastructure for quality improvement, and these are integral elements of the NHHRC’s recommendations on continuous learning, monitoring and interprofessional and intersectoral collaboration.8

Given the policy development in this area, there is a need to clarify models of clinical governance and the evidence relating them to quality improvement. Here, we review the literature on different models of clinical governance, exploring their relevance to the Australian primary care sector, and their potential impacts on quality and safety.


The key question we examined was: What models of clinical governance deliver quality of care? We undertook a systematic review and realist synthesis of the literature.9 Realist synthesis is particularly relevant for complex social interventions, as it aims to develop explanatory analyses of why and how these interventions may work in particular settings and contexts. It is therefore relevant for policymakers who are keen to explore whether or not a model may work in a given context.10 Realist synthesis involves developing theories and models of how mechanisms work, using careful analysis of the literature and interviews with stakeholders.

We developed an operational definition of clinical governance by reviewing health policy on clinical governance in Australia, the United Kingdom and New Zealand (Box 1). We also reviewed the seminal papers on clinical governance referenced in those policy documents,11-16 and in the position documents on systematised quality approaches in health care in the United States.17 Our definition of quality used the nine dimensions from the 2001 National Health Performance Framework18 (Box 2).

Eligible studies

Studies were included if they were identified through the search term “clinical governance” or used a combination of strategies outlined in the definition to improve quality or accountability, and were either conducted in or applicable to the primary care sector. Studies were excluded if they focused on single strategies, or did not aim to improve quality or accountability. Search terms and strategy, and data analysis methods can be viewed in the complete report at

Information sources

We searched 25 electronic databases, scanned reference lists of articles and consulted experts in the field. Our search was largely limited to publications in English after 1999. A search of the German-language literature for a specific model type was also undertaken on the recommendation of a member of our international reference group. The grey literature was explored through a hand search of the medical trade press and the websites of relevant national and international clearing houses and professional or industry bodies.

Data extraction

All abstracts were screened by one researcher, and 10% of abstracts were screened by a second researcher to crosscheck screening quality. Studies were reviewed and coded by four reviewers using standardised data extraction sheets and a data dictionary to record study type and quality, health service type, elements of quality in health care and relevance to the Australian context. Indicators that a study might be relevant for the Australian context were: (i) heterogeneous services with different systems of funding and governance; (ii) cultural separation of medical professionals from bureaucracy; and (iii) loose networks of primary care services. We included well-theorised and contextualised case studies and descriptive studies. All studies were rated using standard critical appraisal tools such as the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE)19 checklist. Two authors (C M P, S de L) also undertook a review of Australian general practice software for its potential contribution to clinical governance.

Development of models of clinical governance and their relevance to Australia

An iterative process was used to develop models of clinical governance suited to Australia, in which the mechanisms underpinning successful models were subjected to further review. Interviews were held with 16 key informants to review the relevance of the models and interrogate their feasibility. Interviewees were selected to illuminate clinical governance processes at:

  • the service level (managers and clinicians in four key clinical services identified by their peers as being experienced in clinical governance);

  • the regional level (representatives of regional health bodies — the Victorian Healthcare Association, a state-level Aboriginal service, the National Aboriginal Community Controlled Health Organisation, Australian Primary Care Collaboratives, and Pegasus Health, a New Zealand not-for-profit organisation that supports 95 practices in the Christchurch/Canterbury area); and

  • the national level (representatives of accreditation bodies, GP education bodies, Kaiser Permanente [a US integrated managed care consortium], the UK General Medical Council, and the UK National Primary Care Research and Development Centre).

Interviews (mean length, 57 minutes) were transcribed and coded using an emergent coding framework in NVivo, version 8.0 (QSR International, Melbourne, VIC).

This study was approved by the Australian National University Human Research Ethics Committee.


Eighty-seven studies were assessed as being of high quality (Box 3) and, of these, 19 explored the outcomes of clinical governance. These 19 articles are the focus of this review. Seven were randomised controlled studies, 11 were longitudinal observational studies, and one was a well-theorised and constructed case study. Six studies were from England,20-25 two from the US,26,27 four from Australia,28-31 two from New Zealand,32,33 and one each from Spain,34 the Philippines,35 Belgium,36 and Germany.37 Only one study compared clinical governance models in two countries, the Netherlands and the US.38

Most studies addressed capability.20-23,25-32,35-37 There were no studies that addressed continuity or appropriateness of care, and few that addressed responsiveness20,24,25,27,34,35 and accessibility.20,26,30,33,35 Only four studies addressed safety.24,25,31,37 Although many studies explored incentive systems, very few described a clinical governance process, but rather posited a kind of black box between the incentive and the outcome. As a result, only four studies25,30-32 explored the relationship between clinical governance and efficiency.

The models that emerged from the literature and interviews are summarised in Box 4. The strategies most used were audit, performance against indicators, and peer-led reflection on evidence or performance. Assessment of clinical competence was described relatively infrequently in the studies as a governance strategy; there are more articles describing the “enticement” of practitioners into clinical governance than “hard governance” measures like assessment of professional competence.39

Box 5 outlines the evidence for each model’s contribution to the quality domains. Interventions at different levels can improve capability of care, but this improvement appears to be related to the type of care and how easy it is to systematise care processes. Improvements in capability are more easily achieved in prescribing practice than in chronic disease management, where the evidence for improvement is varied and context-dependent. There are fewer data on effectiveness (ie, whether the intervention results in a measurable health improvement) than on capability (ie, whether the intervention results in an improvement in skills or practice), but in two uncontrolled studies, there was an improvement in the outcomes of chronic disease (angina,20 asthma20 and diabetes29) without an improvement in capability. Both these studies suggest that trends and/or unmeasured contextual effects lead to changes in outcomes irrespective of clinical governance measures.

Most evidence supports governance models which use targeted, peer-led feedback on the clinician’s own practice. There is less evidence supporting interventions across entire health systems, such as national-level performance indicators, unless these are accompanied by appropriate levels of support, including infrastructure support. Top-down models to drive quality improvement (external accreditation, or economic incentives tied to quality measures for doctors) resulted in no improvements in a range of quality domains, and in one country with limited resources (the Philippines),35 led to a decline in accessibility. Sustainability may also deteriorate under a top-down model of governance without support and infrastructure, a risk also noted at interview by workers within the Aboriginal health sector.

Of the 11 software packages most commonly used in Australian general practice, only four enable the user to participate in regional or aggregated data quality activities. No system had inbuilt data quality checks (eg, “How often are angiotensin-modulating drugs prescribed without renal function being checked?”). Most have some inbuilt searches for items that relate to funding initiatives or chronic disease management. The ability to do other searches was quite variable and often required significant computer and database knowledge. Only one package allowed patient access through a web portal, to which both the practice and the patient must have subscribed. Achieving some degree of standardisation may be an essential step in the implementation of clinical governance.40

An emerging theme in the interviews was the lack of clarity around the definition and purpose of clinical governance. Concerns were expressed that the concept might denote governance of or by clinicians, rather than strategic approaches to improving clinical care. There was resistance among professional groups to the idea of externally set performance indicators, which were seen as liable to proliferate beyond those that would improve practice. The Aboriginal sector in Australia has pioneered the development of locally relevant performance indicators, and has also reported on the organisational costs of reporting against multiple disconnected sets of performance indicators. Many interviewees commented on the difficulty of using information systems to improve practice in Australia, with a great deal of energy expended in extracting data from systems that did not intuitively allow this. This was in contrast to other settings (eg, Kaiser Permanente, and Pegasus Health’s data on prescribing practice), where regionalised aggregated data were available and could be compared with individual practitioners’ data.


The evidence in favour of clinical governance in general practice and primary care is fragmented. The largest body of evidence relates to the areas that are most easily measured, such as the quality of prescribing practices. There is less evidence for quality improvement in chronic disease management and complex areas like health care for older people and mental health care. However, the development of outcome indicators in chronic and complex care is challenging, and the lack of evidence of the impact of clinical governance may reflect measurement difficulties rather than a genuine lack of impact. More research is needed into interventions to improve safety, sustainability, efficiency and responsiveness in primary care.

The most acceptable mechanisms to drive clinical governance are those that recognise professional leadership and are perceived as being locally relevant and allowing reflection on one’s own professional practice. There is a reasonable evidence base to support the role of well resourced peer support networks to facilitate clinical governance. Clinical governance strategies that rely on guidelines alone have not proven effective among GPs. Performance indicators, clinical standards and guidelines — advocated in the Australian Government’s response to the NHHRC report,41 and in the draft National Primary Health Care Strategy7 — have a role in guided reflection and quality improvement. However, exclusive or excessive use of these top-down mechanisms can result in opportunity costs for the service,42 disruption of teamwork if the reporting task is delegated,43 and constructing clinical activities around indicators rather than need.44 An increase in the measurement of common chronic conditions that were the subject of incentives, over others that were not, was noted after the introduction of the Quality and Outcomes Framework in the UK.45

The Australian Government’s response to the NHHRC report proposed combining GPs and state-funded community health systems under the one regional structure, Primary Health Care Organisations (or Medicare Locals).41 This would provide a platform for regional networks to support clinical governance,46 and would presumably build on structures such as the Divisions of General Practice network or Victoria’s Primary Care Partnerships. The UK experience suggests that networks of practices interact to seek corroboration of the value of new information and evidence within local networks, before these practices establish patterns of clinical governance.47 Trust may be undermined if regional-level involvement in clinical governance is seen to be only the collection of data for reporting purposes.

A central difficulty in introducing clinical governance is a lack of consensus among primary care workers about its meaning. Clinical governance remains a poorly understood term, often equated with bureaucratic control or medical dominance.48 There is a need for regional or national organisations to display leadership in explaining and demonstrating what clinical governance actually involves. Examples of leaders in this area are the RACGP, in its standards work, and the Victorian Healthcare Association.

The Aboriginal community-controlled sector is in the vanguard of clinical governance in Australia. The development of clinical indicators in some services (eg, Kimberley Aboriginal Medical Services49) preceded mainstream work in this area. Across the sector, there has been a concerted and long-term development of capacity and personnel to drive clinical governance activities. The opportunity cost for resource-poor services of directing personnel to data collection for reporting purposes, rather than to translating data into service improvement, is also well understood within the sector. Input from this sector should be sought by others in Australia to inform the implementation of clinical governance across all primary health care.

The lack of good information on practice in Australia is a critical constraint for clinical governance activities. In some organisations in the US and New Zealand, regional networks have access to detailed service-use data. In Australia, this level of feedback is provided only through the National Prescribing Service. In all other areas, routine data on individual practices must be generated by interrogating clinical software, which is frequently too difficult for service staff to use. As a minimum, service providers should have access to their own data on prescribing, tests ordered, referral patterns, and the demographic patterns and illnesses of their patients. Ready extraction of these data, with a unified coding system, should be built into the standards developed for clinical software.

For clinical governance to become second nature in Australian primary care, health care practitioners need to be actively engaged as partners in quality improvement. The evidence is strongest for improvements that are driven by health professionals at the practice level, with support from regional networks. Such activity at regional and service levels needs support through structural changes initiated at the national level. This should include funding of time for clinical governance, supported by information systems which provide ready access for practitioners to their own clinical data.

1 Definition of clinical governance used in this review

  • Clinical governance is a systematic and integrated approach to ensuring services are accountable for delivering quality health care.

  • Clinical governance is delivered through a combination of strategies including: ensuring clinical competence, clinical audit, patient involvement, education and training, risk management, use of information, and staff management.

2 Quality dimensions (adapted from National Health Performance Framework, Australia 2001)18

  • Efficiency: An individual or service can achieve desired results with the most cost-effective use of resources

  • Effectiveness: An individual or service can achieve desired health outcomes

  • Capability: An individual or service has the skills and knowledge to provide a health service

  • Accessibility: A service allows people to obtain health care irrespective of income, geography or cultural background

  • Safety: An individual or service avoids or reduces to acceptable limits the actual or potential harm from health care management

  • Appropriateness: An individual or service provides care, intervention or action that is relevant to the client’s needs and based on established standards

  • Continuity: An individual or service can provide uninterrupted, coordinated care or service over time

  • Responsiveness: An individual or service provides respect for persons and is client orientated

  • Sustainability: A service can provide infrastructure such as workforce, facilities and equipment to respond to current and emerging needs
3 Data searching and inclusion flowchart

4 Description of clinical governance models

National level
  • Model: National benchmarking with or without regional-level development support
  • Example: United Kingdom Quality and Outcomes Framework
  • Strategies used: Reporting against national performance indicators

Regional level
  • Model: Collaboration with other clinicians or with community, with targeted feedback on practice
  • Examples: Australian Primary Care Collaboratives; the National Aboriginal Community Controlled Health Organisation’s quality use of medicines program; New Zealand Pegasus Health’s peer-led networks
  • Strategies used: Comparing practice or practitioner outcomes against one another; peer-to-peer education exercises

Service level
  • Model: Practice-determined organisation of quality management, using targeted feedback to health care workers with supported reflection
  • Examples: Audit and Best Practice for Chronic Disease (ABCD) project; Breakthrough Series methods
  • Strategies used: Audit; small-group reflection on practice

Multilevel approaches
  • Model: National-level benchmarking and incentive-setting, regional network support, and support for practice-level organisation using targeted feedback
  • Example: National Prescribing Service
  • Strategies used: Audit; small-group reflection; comparing data on individual practice with regional averages; national performance indicators

5 Impact of models of clinical governance on quality

Evidence for impact on quality: may improve; conflicting or no evidence of impact; may worsen

National level
  • National benchmarking with regional-level development support — accessibility; capability: Campbell et al 200320*
  • National external benchmarking with no regional-level support — sustainability; responsiveness: Gene-Badia et al 200734; responsiveness; capability: Catacutan 200635‡

Regional level
  • Collaboration with other GPs, with targeted feedback to improve practice — capability; safety: Wensing et al 200437§; effectiveness; capability; accessibility: Landon et al 200426*
  • Collaboration with other GPs to improve practice, without targeted feedback — capability; accessibility; effectiveness: Scott and Coote 200730; Van Driel 200736§
  • Collaboration with community to set clinical priorities and/or monitor services — Crampton et al 200533

Service level
  • Practice-determined organisation of quality management, using targeted feedback to health care workers with supported reflection — Bailie et al 200729*; effectiveness; capability: Valk et al 200438*; safety; responsiveness: Fraser et al 200224§; efficiency; safety; responsiveness; capability: McKinnon 200125§; Cranney et al 199922*; Cheater et al 200623*; Si et al 200728*; effectiveness; responsiveness; capability: Ornstein et al 200827; effectiveness; capability: Baker 200321*

Multilevel approaches
  • National-level benchmarking and incentive-setting, regional network support, and support for practice-level organisation using targeted feedback — efficiency; capability: Malcolm et al 200132§; efficiency; capability; effectiveness; safety: Beilby et al 200631§

* Chronic disease management. † Complex care (eg, care of older people, mental health). ‡ Curative services in developing countries. § Prescribing practice.

  • Christine B Phillips1
  • Christopher M Pearce2
  • Sally Hall1
  • Joanne Travaglia3
  • Simon de Lusignan4
  • Tom Love5
  • Marjan Kljakovic1

  • 1 Academic Unit of General Practice and Community Health, Medical School, Australian National University, Canberra, ACT.
  • 2 Melbourne East General Practice Network, Melbourne, VIC.
  • 3 Centre for Clinical Governance and Health, Faculty of Medicine, University of New South Wales, Sydney, NSW.
  • 4 Division of Community Health Sciences, St George’s — University of London, London, UK.
  • 5 Department of General Practice, Wellington School of Medicine, University of Otago, Wellington, New Zealand.


Our research was funded by a grant from the Australian Government Department of Health and Ageing through the Australian Primary Health Care Research Institute (APHCRI). We are grateful to members of the national and international reference groups for their advice, and to Stefanie Webb and Friedrich Dengel for research support. The opinions expressed in this article are not the opinions of APHCRI or the Department of Health and Ageing.

Competing interests:

Sally Hall is a member of the Board of APHCRI.

  • 1. Government of Western Australia. Department of Health. Office of Safety and Quality in Healthcare. Introduction to clinical governance — a background paper. Information Series No. 11. Perth: OSQH, 2001. (accessed Oct 2010).
  • 2. Braithwaite J, Travaglia J. An overview of clinical governance policies, practices and procedures. Aust Health Rev 2008; 32: 10-22.
  • 3. Makeham MAB, Kidd MR, Saltman DC, et al. The Threats to Australian Patient Safety (TAPS) study: incidence of reported errors in general practice. Med J Aust 2006; 185: 95-98. <MJA full text>
  • 4. Makeham MA, Stromer S, Bridges-Webb C, et al. Patient safety events reported in general practice: a taxonomy. Qual Saf Health Care 2008; 17: 53-57.
  • 5. Royal Australian College of General Practitioners. RACGP standards for general practice. Draft 4th edition for consultation. Melbourne: RACGP, 2010. (accessed Apr 2010).
  • 6. Victorian Healthcare Association. Clinical governance in community health project. (accessed Apr 2010).
  • 7. Australian Government Department of Health and Ageing. Towards a 21st century primary health care system: a draft of Australia’s first national primary health care strategy. Canberra: DoHA, 2009. (accessed 27 Apr 2010).
  • 8. National Health and Hospitals Reform Commission. A healthier future for all Australians — final report June 2009. (accessed Sep 2010).
  • 9. Pawson R, Tilley N. Evidence-based policy: a realist perspective. London: Sage, 2006.
  • 10. Shepperd S, Lewin S, Strauss S, et al. Can we systematically review studies that research complex health interventions? PLoS Med 2009; 6: e1000086.
  • 11. Wright J, Hill P. Clinical governance. London: Churchill Livingstone, 2003.
  • 12. Scally G, Donaldson LJ. Clinical governance and the drive for quality improvement in the new NHS in England. BMJ 1998; 317: 61-65.
  • 13. Baker R, Lakhani M, Fraser R, Cheater F. A model for clinical governance in primary care groups. BMJ 1999; 318: 779-783.
  • 14. Malcolm L, Mays N. New Zealand’s Independent Practitioner Associations: a working model of clinical governance in primary care? BMJ 1999; 319: 1340-1342.
  • 15. Braine ME. Clinical governance: applying theory to practice. Nurs Stand 2006; 20: 56-65.
  • 16. Shapiro J, Smith S. Lessons for the NHS from Kaiser Permanente. BMJ 2003; 327: 1241-1242.
  • 17. Institute of Medicine. Crossing the quality chasm: a new health system for the 21st century. Washington, DC: National Academies Press, 2001.
  • 18. National Health Performance Committee. National Health Performance Framework Report. Brisbane: Queensland Health, 2001.
  • 19. von Elm E, Altman DG, Egger M, et al, STROBE Initiative. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies. Bull World Health Organ 2007; 85: 867-872.
  • 20. Campbell S, Steiner A. Is the quality of care in general medical practice improving? Results of a longitudinal observational study. Br J Gen Pract 2003; 53: 298-304.
  • 21. Baker R, Fraser RC, Stone M, et al. Randomised controlled trial of the impact of guidelines, prioritized review criteria and feedback on implementation of recommendations for angina and asthma. Br J Gen Pract 2003; 53: 284-291.
  • 22. Cranney M, Barton S, Walley T. Addressing barriers to change: an RCT of practice-based education to improve the management of hypertension in the elderly. Br J Gen Pract 1999; 49: 522-526.
  • 23. Cheater FM, Baker R, Reddish S, et al. Cluster randomized controlled trial of the effectiveness of audit and feedback and educational outreach on improving nursing practice and patient outcomes. Med Care 2006; 44: 542-551.
  • 24. Fraser S, Wilson T, Burch K, et al. Using collaborative improvement in a single organisation: improving anti-coagulant care. Int J Health Care Qual Assur 2002; 15: 152-158.
  • 25. McKinnon M, Townsend J, Cooper J, et al. Systematic review and clinical governance in repeat prescribing in general practice. Prim Health Care Res Dev 2001; 2: 235-240.
  • 26. Landon BE. Effects of a quality improvement collaborative on the outcome of care of patients with HIV infection: the EQHIV study. Ann Intern Med 2004; 140: 887-896.
  • 27. Ornstein S, Nietert PJ, Jenkins RG, et al. Improving the translation of research into primary care practice: results of a national quality improvement demonstration project. Jt Comm J Qual Patient Saf 2008; 34: 379-390.
  • 28. Si D, Bailie RS, Dowden M, et al. Delivery of preventive health services to Indigenous adults: response to a systems-oriented primary care quality improvement intervention. Med J Aust 2007; 187: 453-457. <MJA full text>
  • 29. Bailie R, Si D, Dowden M, et al. Improving organizational systems for diabetes care in Australian Indigenous communities. BMC Health Serv Res 2007; 7: 67.
  • 30. Scott A, Coote W. Whither Divisions of General Practice? An empirical and policy analysis of the impact of Divisions within the Australian health care system. Med J Aust 2007; 187: 95-99. <MJA full text>
  • 31. Beilby J, Wutzke SE, Bowman J, et al. Evaluation of a national quality use of medicines service in Australia: an evolving model. J Eval Clin Pract 2006; 12: 202-217.
  • 32. Malcolm L, Barry M, MacLean I. Pharmaceutical management in ProCare Health Limited. N Z Med J 2001; 114: 283-286.
  • 33. Crampton P, Davis P, Lay-Yee R, et al. Does community-governed nonprofit primary care improve access to services? Cross-sectional survey of practice characteristics. Int J Health Serv 2005; 35: 465-478.
  • 34. Gene-Badia J, Escaramis-Babiano G, Sans-Corrales M, et al. Impact of economic incentives on quality of professional life and on end-user satisfaction in primary care. Health Policy 2007; 80: 2-10.
  • 35. Catacutan AR. The health service coverage of quality-certified primary health care units in metro-Manila, the Philippines. Health Policy Plan 2006; 21: 65-74.
  • 36. Van Driel ML, Coenen S, Dirven K, et al. What is the role of quality circles in strategies to optimise antibiotic prescribing? A pragmatic cluster-randomised controlled trial in primary care. Qual Saf Health Care 2007; 16: 197-202.
  • 37. Wensing M, Broge B, Riens B, et al. Quality circles to improve prescribing patterns in primary medical care: what is their actual impact? J Eval Clin Pract 2004; 10: 457-466.
  • 38. Valk GD, Renders CM, Kriegsman DM, et al. Quality of care for patients with type 2 diabetes mellitus in the Netherlands and the United States: a comparison of two quality improvement programs. Health Serv Res 2004; 39: 709-726.
  • 39. Sheaff R, Rogers A, Pickard S, et al. A subtle governance: “soft” medical leadership in English primary care. Sociol Health Illn 2003; 25: 408-428.
  • 40. Thiru K, De Lusignan S, Sullivan F, et al. Three steps to data quality. Inform Prim Care 2003; 11: 95-102.
  • 41. Australian Government. A National Health and Hospitals Network for Australia’s future. Canberra: Australian Government, 2010. (accessed Oct 2010).
  • 42. Dwyer J, O’Donnell K, Lavoie J, et al. The overburden report: contracting for Indigenous health services. Darwin: Cooperative Research Centre for Aboriginal Health, 2009. (accessed Oct 2010).
  • 43. McGregor W, Jabareen H, O’Donnell CA, et al. Impact of the 2004 GMS contract on practice nurses: a qualitative study. Br J Gen Pract 2008; 58: 711-719.
  • 44. Campbell SM, McDonald R, Lester H. The experience of pay for performance in English general practice: a qualitative study. Ann Fam Med 2008; 6: 228-234.
  • 45. Steel N, Maisey S, Clark A, et al. Quality of clinical primary care and targeted incentive payments: an observational study. Br J Gen Pract 2007; 57: 449-454.
  • 46. Jackson CL, Nicholson C, McAteer EP. Fit for the future — a regional governance structure for a new age. Med J Aust 2010; 192: 284-287. <MJA full text>
  • 47. Fitzgerald L, Ferlie E. Innovation in health care: how does credible evidence influence professionals? Health Soc Care Community 2003; 11: 219-228.
  • 48. O’Connor N, Paton M. “Governance of” and “governance by”: implementing a clinical governance framework in an area mental health service. Australas Psychiatry 2008; 16: 69-73.
  • 49. Couzos S, Murray R. Aboriginal primary health care: an evidence-based approach. 3rd ed. Melbourne: Oxford University Press, 2008.

