Procedures to select medical specialist trainees aim to predict which junior doctors will become the best specialists.
A 1998 review of Australian postgraduate selection processes recommended use of the principles of good assessment.
Australia has expertise in national procedures used by medical schools to select students for undergraduate and graduate courses, but little experience in national specialist training program selection.
A system for selection into postgraduate general practitioner training, based on a national “selection-centre” approach used in the United Kingdom, is being piloted in Australia.
Initial evaluation shows the piloted system to be feasible, but further evaluation is needed.
Any selection-centre approach must be adapted to the Australian health care context and have the confidence of the trainees, the professional colleges, the training providers and the public.
Internationally, medical postgraduate training programs aim to admit junior doctors who, with training, will become specialists capable of high-quality, safe and independent practice. Selection procedures aim to predict which candidates will become the best trainees or specialists and to reject those likely to perform poorly. Recently, lack of professionalism has become a basis for rejection, in addition to the traditional bases of inadequate knowledge or clinical skills.1 So far, the evidence that measures of professional behaviour in early medical school may predict which students will, once qualified, have problems with medical boards comes only from North American settings. More pragmatically, in a United Kingdom setting, the stated aim of the process of selecting postgraduate trainees is to predict trainability; that is, to identify individuals who will successfully complete training.2 Generally, the selection process aims to set a minimum entry standard and, where training places are limited, to rank candidates until the number of allocatable places is exhausted. Any selection system has to be precise enough to discriminate among candidates of similar ability and to prevent type 1 errors (excluding an individual who would be successful) and type 2 errors (including an individual who may not succeed).3 It must also provide a common, unbiased means of assessing candidates of diverse economic, personal, cultural and academic backgrounds. This is particularly important because the profile of international medical graduates (IMGs), who now comprise over 25% of Australia’s medical workforce,4 is changing. As of 2007, half of the IMGs working in rural positions had trained in non-Western medical schools and had previously worked in non-Western medical settings.5
In 1998, a review of postgraduate selection processes in Australia found considerable variation in the practices of the then professional colleges and emphasised that some graduates thought selection processes were unfair.6 The characteristics that the reviewers recommended for selection processes are recognisable as those of any good assessment system:7 clearly stated purpose and objectives; reliable, valid and feasible selection tools; transparent decision making; and incorporation of quality assurance procedures, including the training of selectors. There has been little progress so far in establishing such a framework nationally, but interesting pilots are underway. One such pilot is being run by General Practice Education and Training Limited (GPET), which manages the Australian General Practice Training program on behalf of the Australian Government. Since 2000, GPET has established a regionalised system of general practice education and training, delivered by 17 regional training providers across Australia. Having recognised the importance and complexity of deciding who qualifies for the limited number of general practice postgraduate training places, GPET determined that the selection process should be fair8 and based on both national and international best practice.
Australia has considerable expertise in its large-scale selection procedures for entry into both undergraduate and graduate medicine. These selection procedures commonly comprise “cognitive” and “non-cognitive” assessments; the former focus on prior academic achievement and written aptitude tests, the latter on determining, by means of interview and personal statement, the values and personal characteristics of applicants.9 These arrangements have evolved through continuous review, quality assurance evaluation and scientific scrutiny to ensure they are rigorous and fair. Written aptitude tests are designed to increase predictive capability incrementally when used with assessments of prior academic achievement and selection interviews. However, medical schools have reported disappointing predictive correlations between, for example, the Graduate Australian Medical School Admissions Test (GAMSAT), the written aptitude test used in Australia, Ireland and the UK, and in-course assessments.10
The “non-cognitive” characteristics of candidates for undergraduate and graduate courses are increasingly being measured in Australia by the multiple mini-interview (MMI),11 introduced from Canada12 and derived from the Objective Structured Clinical Examination (OSCE). The MMI avoids the problems of the traditional interview, in which much of the candidate’s mark reflects the limitations of the single interview format and the biases of members of the interview panel. In an MMI, more reliable generalisations about a candidate’s capability can be made because the interview content is broader and the candidate has several interviews, each undertaken by a different and independent interviewer. MMIs are designed to give due weight to the non-cognitive characteristics preferred by the medical (and dental) schools for entry-level students, such as integrity and aptitude for teamwork and lifelong learning.11 It has been shown that both clerkship and licensing examination performance can be predicted from MMI results.13
Of interest to GPET was that the MMI had been successfully adapted in medical postgraduate training overseas to, for example, assess the aptitude of IMG applicants for family medicine residency training in Canada14 and the performance of junior doctors selected for a paediatric regional training program in the UK.15
Australia has little expertise in national “programmatic” (multi-tooled) postgraduate selection and, therefore, GPET has looked overseas to the “selection-centre” approach for choosing entrants into its programs. This approach was developed by organisational psychologists and personnel experts, initially as a method of recruiting for managerial jobs and, more recently, for a wide range of non-managerial jobs. At its heart, a selection centre allows a candidate to participate in various simulations and be assessed by many trained assessors,16 two features that it shares with the OSCE. The purpose of using a selection centre for medical trainee selection is to assess not only clinical competency but also the candidate’s entry-level capability to perform as a trainee or registrar, including his or her ability to solve clinical problems and professional dilemmas.17 Of course, no single-test format such as the OSCE or MMI is able to assess all aspects of clinical competence, and similarly none would be expected to assess trainability on its own. Relevant knowledge, including aspects of clinical and professional decision making, can be more feasibly (and cheaply) tested with a multiple-choice question format, for example, the Situational Judgement Test (SJT). The SJT has been claimed to have predictive validity, as well as reliability and criterion-related validity, following a UK-wide pilot (2552 participants).18
Patterson and her team in the UK have developed a “selection centre” approach focusing on a combination of written SJTs and observed OSCE-style simulation stations; their research19 has been used to develop UK selection policy and to inform the Medical Specialty Training recruitment process. The system is highly developed for selecting candidates for GP training, as evidenced by the formation of the National Recruitment Office for General Practice Training,20 which uses a nationally approved job specification for GP registrars. Candidates assessed as eligible for training on their initial applications sit an SJT to test their ability to solve clinical problems and resolve professional dilemmas. Topics are taken from areas that should be familiar to junior doctors (postgraduate years 1 and 2). Candidates who pass the first assessment are invited to a selection assessment centre (SAC), usually operated by their first-choice training provider. At the SAC, they complete three exercises, observed and assessed by trained assessors: a patient-simulation exercise, a group exercise and a further written SJT.
In Australia, GPET’s pilot for a national selection system, based on the selection-centre approach, requires eligible candidates to take a 50-item written assessment (SJT) and an observed four-station MMI (five stations for those on a rural training pathway). Scores are combined with ratings from referee reports on the candidate’s prior experience. In developing this integrated assessment, GPET, in association with the Royal Australian College of General Practitioners and the Australian College of Rural and Remote Medicine, determined the domains of practice in which entry-level registrars should be competent and assessed. The practice domains included communication, clinical skills, population health, professionalism, organisational areas and personal attributes (eg, the capacity for self-reflection and awareness of the impact of culture on patients’ primary health needs). The pilot involved 344 of the 1200 candidates interviewed nationally for 900 training places and took place in two test centres, one in Sydney and the other in Melbourne.
Initial evaluation of the trial confirmed that the selection-centre approach is feasible. In the future, the procedures’ robustness will be investigated and a set of quality assurance standards developed. A particular challenge is to find a way to combine the various selection tools (SJT, MMI and referee reports) that makes the process credible for all involved and able to genuinely predict the performance of registrars and trainees in both rural and urban settings.
The assessment process must be fair, reliable and valid, with an acceptable demand on resources. While there are lessons to be learned from the UK approach, it is also important that any selection procedure is adapted to the Australian health care situation and has the confidence of the registrars and trainees, the professional colleges, the training providers and the public. This is a fruitful area for research and policy development across the professional colleges.
- 1. Papadakis MA, Teherani A, Banach MA, et al. Disciplinary action by medical boards and prior behaviour in medical school. N Engl J Med 2005; 353: 2673-2682.
- 2. Patterson F, Ferguson E, Norfolk T, Lane P. A new selection system to recruit general practice registrars: preliminary findings from a validation study. BMJ 2005; 330: 711-714.
- 3. Roberts C, Walton M, Rothnie I, et al. Factors affecting the utility of the Multi-Mini-Interview in selecting candidates for graduate entry medical school. Med Educ 2008; 42: 396-404.
- 4. Lennon B. International physician migration — the push for self-sufficiency. Proceedings of the 9th International Medical Workforce Collaborative Conference; 2005 Nov 15-19; Melbourne, Australia.
- 5. Birrell B, Schwartz A. Assessment of overseas-trained doctors — the latest chapter. People Place 2007; 15: 67-76.
- 6. Brennan P. Trainee selection in Australian medical colleges. Canberra: Medical Training Review Panel, Commonwealth Department of Health and Family Services, 1998.
- 7. Crossley J, Humphris G, Jolly B. Assessing health professionals. Med Educ 2002; 36: 800-804.
- 8. Australian General Practice Training. New applicants. http://www.agpt.com.au/ApplyforAGPT/NewApplicants/ (accessed Nov 2010).
- 9. Roberts C, Prideaux D. Selection for medical schools: re-imagining an international discourse. Med Educ 2010. In press.
- 10. Wilkinson D, Zhang J, Byrne GJ, et al. Medical school selection criteria and the prediction of academic performance. Evidence leading to change in policy and practice at the University of Queensland. Med J Aust 2008; 188: 349-354. <MJA full text>
- 11. Kumar K, Roberts C, Walton MC, et al. Candidate and interviewer experiences of the Multiple-Mini-Interview (MMI) process for entry into graduate medical school: a framework analysis. Med Educ 2009; 43: 360-367.
- 12. Eva KW, Rosenfeld J, Reiter HI, Norman GR. An admissions OSCE: the multiple mini-interview. Med Educ 2004; 38: 314-326.
- 13. Reiter HI, Eva KW, Rosenfeld J, Norman GR. Multiple mini-interviews predict clinical clerkship and licensing examination performance. Med Educ 2007; 41: 378-384.
- 14. Hofmeister M, Lockyer J, Crutcher R. The multiple mini-interview for selection of international medical graduates into family medicine residency education. Med Educ 2009; 43: 573-579.
- 15. Humphrey S, Dowson S, Wall D, et al. Multiple mini-interviews: opinions of candidates and interviewers. Med Educ 2008; 42: 207-213.
- 16. Lievens F, Thornton GC. Assessment centers: recent developments in practice and research. In: Evers A, Anderson N, Voskuijl O, editors. The Blackwell handbook of personnel selection. Malden: Blackwell Publishing, 2005: 243-263.
- 17. Patterson F, Ferguson E, Lane P, et al. A competency model for general practice: implications for selection, training, and development. Br J Gen Pract 2000; 50: 188-193.
- 18. Patterson P. A review of entry and selection mechanisms: research evidence, key concepts and opportunities. Proceedings of the 2nd Annual Scientific Meeting of the Irish Network of Medical Educators; 2009 Feb 4; Galway, Ireland.
- 19. Patterson F, Baron H, Carr V, et al. Evaluation of three short-listing methodologies for selection into postgraduate training in general practice. Med Educ 2009; 43: 50-57.
- 20. National Health Service [UK]. Medical speciality training (England). http://www.mmc.nhs.uk/medical_education.aspx (accessed Oct 2010).