Despite a lack of quantitative evidence, accreditation still works, although there is room for improvement
Accreditation, as a method of reviewing the quality of medical education, is more than 100 years old1 and is traditionally based on the episodic "biopsy" model, which involves periodic assessment, usually against defined standards.2 Despite acknowledgement of the importance of accreditation,3,4 there is limited hard evidence to support its effectiveness or impact, reflecting in particular the social constructs of the accreditation model and of the education setting in which it is applied. Accreditation in this context assesses to a significant degree the quality of human interactions, necessitating a research approach that is more qualitative than quantitative.5 Applicable social constructivist research methodologies are common in the education setting, as detailed in the recent MJA article by Durning and Schuwirth.6 Although there is some quantitative evidence for the effectiveness of accreditation, such as better clinical learning climate survey scores in accredited programs,7 critical outcomes, such as impacts on health care, have not been quantified, and a causative link between accreditation and educational quality has not yet been clearly established.4 Further, ongoing changes and innovations in health care and medical education call for a flexible approach to accreditation design,4,8 so even if hard evidence were available, its applicability might be limited to specific and perhaps outdated settings.