
A systematic approach to workplace-based assessment for international medical graduates

Balakrishnan R Nair, Michael J Hensley, Mulavana S Parvathy, Deborah M Lloyd, Brooke Murphy, Kathy Ingham, Julie M Wein and Ian M Symonds
Med J Aust 2012; 196 (6): 399-402. doi: 10.5694/mja11.10709
Published online: 2 April 2012

Over the past few years, awareness of the need for workplace-based assessment (WBA) of junior doctors has increased. This has resulted in the development of many assessment tools. The most popular, researched and validated tools are mini-clinical evaluation exercises (mini-CEXs), case-based discussions (CBDs), in-training assessments (ITAs) and multisource feedback (MSF), such as 360° assessments.1-4

There are many reasons for the development of WBA.

WBA should be conducted using multiple tools, since no single tool can assess all domains, but there are many challenges to implementation. The teaching faculty may not engage in assessment, provide constructive feedback or monitor the impact of feedback on performance.7 WBA is also costly in time and resources.

Australia, like many other developed countries, relies on international medical graduates (IMGs) for service provision; in some locations, up to 30% of doctors are IMGs.8 To gain general medical registration in Australia, IMGs currently have to pass a multiple-choice examination followed by a clinical skills examination, both conducted by the Australian Medical Council (AMC). However, there is a long wait to undertake these exams. The AMC clinical exam is a 16-station, multidisciplinary, objective structured exam,9 with each station of 8 minutes’ duration. One or two stations may include real patients with clinical signs; the other stations use role players. A clear pass requires passing 12 or more stations, including at least one obstetrics and gynaecology station and one paediatrics station. Passing 10 or 11 stations is deemed marginal performance, and these candidates are offered a remedial examination. The average pass rate for the AMC clinical exam is around 50% (Ian Frank, Chief Executive Officer, AMC, personal communication, 24 January 2011). The AMC has been proactive in evaluating other assessment methods, and we were invited to submit an expression of interest to conduct WBA.
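The AMC scoring rule described above can be expressed as a short decision sketch. This is illustrative only: the per-station input format and discipline labels are assumptions, and only the thresholds (12 or more stations for a clear pass, 10 or 11 for marginal performance) come from the text.

```python
# Illustrative sketch of the AMC clinical exam decision rule described above.
# The input format (a list of per-station results) is an assumption; outcomes
# not detailed in the text are simply labelled "not passed".

def amc_clinical_result(stations):
    """stations: list of (discipline, passed) tuples, one per station (16 in total)."""
    passed = [discipline for discipline, ok in stations if ok]
    clear_pass = (
        len(passed) >= 12
        and "obstetrics and gynaecology" in passed
        and "paediatrics" in passed
    )
    if clear_pass:
        return "clear pass"
    if len(passed) in (10, 11):
        return "marginal performance: remedial examination offered"
    return "not passed"
```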

In 2010, the University of Newcastle School of Medicine and Public Health, in collaboration with Hunter New England Area Health Service, was accredited to conduct WBA for IMGs in place of the AMC clinical exam. The purpose was to test whether WBA is feasible in the workplace and whether it is acceptable to IMGs and assessors.

Methods

Ethics approval for the study was obtained from the Hunter New England Human Research Ethics Committee. All candidates gave written, informed consent before the study started.

WBAs were conducted over 6 months from June 2010 at four teaching hospitals in Newcastle. IMGs invited to participate were those who worked in these hospitals, had passed the English language requirements and the AMC multiple-choice exam, and were on the waiting list for the AMC clinical exam. The IMGs were working as junior medical officers in medicine, surgery, emergency, paediatrics, mental health, and obstetrics and gynaecology. They were from India, Pakistan, Bangladesh and China, and had been working in Australia for 1–10 years. Two calibration sessions, attended by 65 assessors, and an information evening, attended by all 27 candidates, were held to familiarise participants with the protocols, assessment methods and tools.

The assessment tools used were mini-CEXs, CBDs, MSF and ITAs (Box 1). The mini-CEXs and CBDs were “blueprinted” (assessment tasks mapped out to align with course content and desired learning outcomes, so that each prescribed discipline and domain is represented) as outlined in Box 1. All assessors used well validated, individually labelled assessment forms (assessment forms are available from the authors). Candidates received immediate feedback from examiners on the mini-CEX and CBD assessments. Pass criteria for each assessment tool were predetermined by an expert panel and are summarised in Box 2.

Candidates had to pass eight of 12 mini-CEX cases, with a minimum of one pass in each of the six clinical disciplines. Candidates who passed seven mini-CEXs were deemed “borderline” and were given a supplementary mini-CEX, in the same discipline and domain, by two assessors. Each mini-CEX lasted about 30 minutes, including time for feedback. The blueprinted items were marked “critical” on the assessment forms; this was emphasised to assessors and candidates.
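A minimal sketch of this mini-CEX pass rule is shown below. The result format and discipline labels are illustrative assumptions, and the handling of a candidate who passes eight cases without covering every discipline is not spelt out in the text.

```python
# Sketch of the mini-CEX decision rule described above (8 of 12 passes with at
# least one pass per discipline; 7 passes is "borderline"). The input format is
# an assumption for illustration only.

DISCIPLINES = {
    "medicine", "surgery", "emergency", "paediatrics",
    "mental health", "obstetrics and gynaecology",
}

def mini_cex_outcome(results):
    """results: list of (discipline, passed) tuples for the 12 mini-CEX cases."""
    passed = [discipline for discipline, ok in results if ok]
    if len(passed) >= 8 and DISCIPLINES <= set(passed):
        return "pass"
    if len(passed) == 7:
        return "borderline: supplementary mini-CEX with two assessors"
    return "unsatisfactory"
```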

Candidates participated in seven CBDs and had to pass a minimum of four. Two of the CBDs were based on real patients whose care the candidate had managed; for each of these, the candidate nominated three cases and the assessor chose one of the three for the assessment. Many candidates worked in only one specialty (eg, as a medical registrar or psychiatry registrar).

To provide sufficient breadth to the assessment, the five other CBDs were paper cases, developed by a clinical panel from real cases. The topics were selected from the AMC syllabus and were common clinical scenarios, pitched at the intern level. For the paper cases, candidates were given 10 minutes to read about the case before the assessment. Each CBD assessment, including the feedback, took 20–30 minutes.

For the MSF assessment, candidates were asked to nominate 12 colleagues from medical, nursing and other staff (eg, a ward clerk) who could provide feedback on the candidate’s communication, teamwork and professional skills. The project team chose 10 of the nominated colleagues at random from the list of 12 (to maintain anonymity) and these 10 were sent the MSF assessment form. Candidates also completed a self-assessment form. The feedback was reviewed by a committee of medical, nursing and allied health professionals.

The first MSF (360°) assessment was performed in month 1 and the second in month 6. The first was used formatively, and candidates were given guidance, remediation and mentoring as needed. To pass the second (summative) MSF assessment, a mean score of ≥ 3 was needed in all domains (Box 2).

The ITA form used is the regular end-of-term assessment tool used by postgraduate training bodies and colleges. Two ITAs were required during the 6-month assessment period, and candidates needed an overall performance score of at least “at expected” (or equivalent) on both assessments.
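Taken together, the predetermined criteria described above (and summarised in Box 2) amount to a simple conjunction across the four tools. The sketch below illustrates this; the argument names and data formats are assumptions for illustration, not the study’s actual records.

```python
# Hedged sketch combining the pass criteria described in the text: the mini-CEX
# component, at least 4 of the 7 CBDs, a mean score of >= 3 in every domain on
# the second (summative) MSF, and "at expected" (or equivalent) on both ITAs.

def overall_result(mini_cex_passed, cbds_passed, msf_domain_means, ita_at_expected):
    """
    mini_cex_passed  -- True if the mini-CEX component was passed (see earlier sketch)
    cbds_passed      -- number of the seven CBDs passed
    msf_domain_means -- dict of domain name -> mean score on the summative MSF
    ita_at_expected  -- list of two booleans, one per ITA
    """
    cbd_ok = cbds_passed >= 4
    msf_ok = all(mean >= 3 for mean in msf_domain_means.values())
    ita_ok = len(ita_at_expected) == 2 and all(ita_at_expected)
    return mini_cex_passed and cbd_ok and msf_ok and ita_ok
```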

Before WBA started, all candidates and assessors were given resource kits with assessment criteria, blueprinted forms and other relevant information. Each mini-CEX and CBD assessment stipulated the discipline and clinical skills to be assessed. Assessors were responsible for selecting an appropriate patient for each mini-CEX assessment. The schedule of assessments was centrally programmed to ensure compliance and proper sampling, so that history taking, physical examination, counselling and other skills were well represented across all disciplines.

The project was staffed by a full-time project officer, a part-time educational consultant and a part-time project manager, working with the academic team. Assessors were recruited by inviting interested clinicians to participate, with an honorarium offered per session. The project officer facilitated scheduling of assessments, and assessment tasks had a completion deadline that allowed for clinical emergencies. This flexibility enabled completion of all assessments within the designated timeframes.

Assessors provided feedback at a focus group and by email; candidates provided feedback at a focus group, with semi-structured verbal questions followed by a questionnaire.

Results

At the end of the assessment period, all 27 candidates were successful according to the predetermined criteria, even though some had been deemed “unsatisfactory” in individual assessments (Box 3).

Of the 27 candidates, 22 responded to the feedback questionnaire and indicated that WBA was more realistic than the current format of the AMC clinical examination. The candidates felt that the WBA did not interfere with their clinical duties. Most reported that they had received “constructive feedback”. One candidate wrote that the “program was really good in stimulating learning and appropriate as a tool of assessment”. Some candidates thought specialists should not be assessing in their field, but noted that “80% of the assessors were very appropriate”. “Better standardisation of examiners [needed]” and “discussion pitched at senior level” were two of the problems identified.

The feedback process was a recurring theme among assessors: “direct feedback on the spot ... and it is now recognised in management and health care that we need to begin to do that”. One assessor said, “we have to be prepared, in giving feedback, that it may be uncomfortable”. Another thought “it is more painful if you do not give feedback because the person does not get the chance to improve”. The process “did not under- or overassess someone’s abilities” because multiple assessors and multiple tools were used. Assessors felt the candidates had improved over the assessment period.

Some of the issues raised by the assessors included the need to widen the pool of assessors, medical emergencies interfering with assessments, and assessment fatigue. However, in a subsequent focus group, the assessors indicated they were prepared to do more assessments and appreciated the training received.

Discussion

This is the first time that WBA has been used for summative assessment in Australia, and the first use of four tools concurrently to assess performance. We have shown that summative WBA is feasible, provided there are committed project teams and assessors, but it is intensive in its use of time and resources. The assessment time for mini-CEXs and CBDs (the main clinical assessment tools), including feedback, was about 10 hours per candidate. It is difficult to quantify the exact time allocated to MSF and ITAs, as these occurred over time. The program had both educational and assessment components.

As discussed by Murphy and colleagues,11 multiple assessment formats provide a sound basis for assessment. We believe our assessment tools and program meet the criteria of validity, educational impact and acceptability. We used multiple validated tools to test the clinical skills, professionalism and clinical reasoning of the candidates. The educational impact was positive, judging by comments from both groups: all assessors felt that candidate performance had improved over the 6 months, and candidates thought the feedback helped their performance. The reliability of the individual and combined assessments, and their validity,12 will be presented in a future article. If one believes that “the most important factor in learning is usually the quality of the feedback on performance”,13 then this program can be judged successful. Most candidates reported that this was the first time they had received immediate constructive feedback on their performance.

Although the AMC clinical exam has a pass rate of about 50%, the 100% pass rate in this trial is not surprising. All the WBA candidates were employed as doctors, with access to the health system, enabling them to improve their knowledge, skills and performance; not all candidates undertaking the AMC clinical exam have this advantage.

Even though the original methodology of CBDs is based on real patients, we used only two real-patient CBDs for each candidate. The five paper-based CBDs added breadth to the assessment, since many of the candidates were working in a single specialty. Our CBDs differ from conventional CBDs for this reason.

A key lesson from this project is that assessor engagement is required for a successful WBA program. Ongoing assessment by a small cohort of willing assessors will be a challenge in the long term. Some assessors were not comfortable providing feedback, especially when the candidate’s performance was not satisfactory.

We do not yet know the exact financial and opportunity costs of this program; calculations are underway. We estimate an approximate cost of $6000 per candidate over the 6-month period.

Although previous researchers have provided data on the individual tools we used in our assessment, we do not know the minimum number of assessments necessary to obtain valid and reliable data when multiple tools are being used.

Overall, this has been a rewarding but challenging program for all involved. We learnt several important lessons about implementing WBA for IMGs. Feedback from the trainees and assessors indicates that this has been a worthwhile program from both an assessment and an educational point of view.

Received 10 June 2011, accepted 11 August 2011

  • Balakrishnan R Nair1,2
  • Michael J Hensley3
  • Mulavana S Parvathy2
  • Deborah M Lloyd2
  • Brooke Murphy2
  • Kathy Ingham2
  • Julie M Wein2
  • Ian M Symonds1

  • 1 University of Newcastle, Newcastle, NSW.
  • 2 Centre for Medical Professional Development, University of Newcastle, Newcastle, NSW.
  • 3 Department of Respiratory and Sleep Medicine, John Hunter Hospital, Newcastle, NSW.


Correspondence: kichu.nair@newcastle.edu.au

Acknowledgements: 

We thank Ian Frank, Heather Alexander and Gordon Page for support and advice. The project was funded by the AMC and the Commonwealth Department of Health and Ageing.

Competing interests:

No relevant disclosures.

  • 1. Norcini J, Burch V. Workplace-based assessment as an educational tool: AMEE Guide No. 31. Med Teach 2007; 29: 855-871.
  • 2. Nair BR, Alexander HG, McGrath BP, et al. The mini clinical examination exercise (mini-CEX) for assessing clinical performance of international medical graduates. Med J Aust 2008; 189: 159-161.
  • 3. Solomon DJ, Reinhart MA, Bridgham RG, et al. An assessment of an oral examination format for evaluating clinical competence in emergency medicine. Acad Med 1990; 65: S43-S44.
  • 4. Murphy DJ, Bruce DA, Mercer SW, Eva KW. The reliability of workplace-based assessment in postgraduate medical education and training: a national evaluation in general practice training in the United Kingdom. Adv Health Sci Educ Theory Pract 2009; 14: 219-232.
  • 5. Day SC, Grosso LJ, Norcini JJ Jr, et al. Residents’ perception of evaluation procedures used by their training program. J Gen Intern Med 1990; 5: 421-426.
  • 6. Veloski J, Boex JR, Grasberger MJ, et al. Systematic review of the literature on assessment, feedback and physicians’ clinical performance: BEME Guide No. 7. Med Teach 2006; 28: 117-128.
  • 7. Leinster SJ. Workplace-based assessment as an educational tool: guide supplement 31.2 — viewpoint. Med Teach 2009; 31: 1032.
  • 8. McLean R, Bennett J; Implementation and Technical Committees of the Australian Health Ministers’ Advisory Council. Nationally consistent assessment of international medical graduates. Med J Aust 2008; 188: 464-468.
  • 9. Australian Medical Council. AMC Clinical Examination. Canberra: AMC, 2011. http://www.amc.org.au/index.php/ass/clinex (accessed Mar 2012).
  • 10. Hamdy H. AMEE Guide Supplements: Workplace-based assessment as an educational tool. Guide supplement 31.1 — viewpoint. Med Teach 2009; 31: 59-60.
  • 11. Murphy DJ, Bruce D, Eva KW. Workplace-based assessment for general practitioners: using stakeholder perception to aid blueprinting of an assessment battery. Med Educ 2008; 42: 96-103.
  • 12. Van der Vleuten CPM. The assessment of professional competence: developments, research and practical implications. Adv Health Sci Educ 1996; 1: 41-67. doi: 10.1007/BF00596229.
  • 13. Eraut M. A wider perspective on assessment [comment]. Med Educ 2004; 38: 803-804.
