The comparability of emergency department waiting time performance data

Jessica Greene and Jane Hall
Med J Aust 2012; 197 (6): 345-348. doi: 10.5694/mja11.11246
Published online: 17 September 2012

Abstract

Objective: To examine whether the reported urgency mix of an emergency department’s (ED’s) patients is associated with its waiting time performance.

Design and setting: Cross-sectional analysis of data on patient urgency mix and hospital ED performance reported on the MyHospitals website for July 2009 – June 2010.

Main outcome measures: ED performance assessed as the proportion of patients whose care was initiated within the recommended time frame for each of four triage categories.

Results: Data for 158 hospitals showed that EDs with a higher proportion of patients assigned to the emergency category have poorer waiting time performance, after adjusting for hospital characteristics. Conversely, EDs with a higher proportion of patients assigned to the non-urgent category perform better. If performance scores were adjusted for reported patient urgency mix and hospital peer group, mean adjustments would be modest in size (3.7–7.1 percentage points, depending on the category), but for individual EDs the differences could be large (as large as 31 percentage points) and hospital waiting time performance rankings would be substantively impacted.

Conclusion: Since ED performance is related to reported patient urgency mix, adjusting for casemix in the ED may be warranted to ensure valid comparisons between hospitals. Further investigation of the validity of performance measures and appropriate adjustment for differences in hospital and patient characteristics is required if public reporting is to meet its goals.

One of the key goals of health reform is to drive hospital quality improvement by creating greater transparency in health services performance.1 With the launch of the MyHospitals website in December 2010, quality indicators for individual hospitals across Australia were publicly reported for the first time. The MyHospitals website began by detailing emergency department (ED) and elective surgery waiting time performance data for public hospitals, and has since added hospital rates of staphylococcal infection and waiting times for cancer surgery. Additional quality indicators will be added in the future. Our study focused on ED waiting times, which the website presents as the proportion of patients in each of five ED triage categories whose treatment began within recommended time frames. Previously, these statistics were only available at the state or territory level, and not for all jurisdictions.

Critics of the MyHospitals website have questioned the data quality and comparability.2,3 Concern over the comparability of ED waiting time data is not a new issue. Reports by the Productivity Commission and the Australian Institute of Health and Welfare (AIHW) have identified jurisdictional-level differences in the allocation of patients across the five ED triage categories, and have suggested that these differences may influence ED waiting time performance.4,5 If, in fact, reported ED patient urgency mix is associated with waiting time performance, then comparing performance of EDs that have different patient urgency mixes may be unfair.

We aimed to investigate the relationship between ED patient urgency mix and ED waiting time performance at the hospital level, using MyHospitals data. We also aimed to assess the variation among Australian hospitals in the assignment of ED patients to triage categories, and whether differences in the proportion of patients assigned to each category are associated with ED performance. Lastly, we aimed to determine the degree to which ED performance scores change when patient urgency mix and hospital size are taken into account.

Methods

We conducted a cross-sectional study using publicly reported Australian hospital-level data from the MyHospitals website. We recorded the number of patients assigned to each ED triage category and the proportion of patients in each category whose treatment began within the recommended time frame. The MyHospitals website uses five triage categories: resuscitation, emergency, urgent, semi-urgent, and non-urgent. These are the same categories used by the AIHW, and are based on the National Triage Scale (NTS).6,7 The recommended time frames used by MyHospitals (emergency, ≤ 10 min; urgent, ≤ 30 min; semi-urgent, ≤ 60 min; non-urgent, ≤ 2 h) are the same as those in the NTS and the updated Australasian Triage Scale,8 except for the resuscitation category, to which MyHospitals assigned a measurable time frame of “receipt of care within 2 minutes” rather than “immediate”.6 We obtained data covering July 2009 to June 2010.

Our analysis included all hospitals reporting ED performance data, with the exception of 25 hospitals with too few resuscitation cases to report full statistics.

Statistical analyses

We first used Pearson correlation tests to examine the bivariate associations between allocation of patients to the five triage groups and ED performance. We excluded performance for the resuscitation category because there was very little variation (97% of EDs reported meeting the guidelines 95% or more of the time), the distribution was skewed, and it was not linearly related to the allocation of patients.
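
The published analyses were conducted in Stata (noted at the end of this section); as a purely illustrative sketch, the bivariate step could look like the following Python, assuming a hospital-level table with hypothetical column names for the triage-mix and performance variables:

```python
# Illustrative sketch only; the file and column names are hypothetical
# stand-ins for the MyHospitals extract (percentage of patients assigned
# to each triage category, and percentage seen within the recommended
# time frame for each category).
import pandas as pd
from scipy.stats import pearsonr

hospitals = pd.read_csv("myhospitals_2009_10.csv")  # hypothetical file

triage_mix = ["pct_resuscitation", "pct_emergency", "pct_urgent",
              "pct_semi_urgent", "pct_non_urgent"]
performance = ["perf_emergency", "perf_urgent",
               "perf_semi_urgent", "perf_non_urgent"]  # resuscitation excluded

# Pairwise Pearson correlations between triage mix and performance
for mix_col in triage_mix:
    for perf_col in performance:
        r, p = pearsonr(hospitals[mix_col], hospitals[perf_col])
        print(f"{mix_col} vs {perf_col}: r = {r:.2f}, P = {p:.3f}")
```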

We then developed multivariate regression models that adjusted for differences in hospital characteristics. The dependent variables were ED performance (the proportion of patients whose care was initiated within the recommended time frame) for four of the five triage categories. The independent variables were the proportions of patients assessed as being in the emergency and non-urgent categories. We used only two of the five triage proportions because they are highly correlated with one another (by construction, they sum to 100%), which created multicollinearity problems. We chose the emergency and non-urgent categories because they showed consistently strong bivariate correlations with the dependent variables.

The control variables in the models were the socioeconomic status and accessibility of the community surrounding the hospital, and the size of the hospital. Socioeconomic status was measured using the Socio-Economic Indexes for Areas (SEIFA) Index of Relative Socio-economic Advantage and Disadvantage,9 based on the postcode of the hospital; higher values indicate greater advantage. Accessibility was measured using the Accessibility/Remoteness Index of Australia (ARIA).10 Hospital peer group was used to indicate the type of hospital: we compared combined peer groups A and B (principal referral, specialist women’s and children’s, and large hospitals) with smaller hospitals, since no differences in performance were detected between peer group A and B hospitals.
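
Continuing the illustrative sketch, one of the four models (emergency-category performance) might be specified as follows, with seifa, aria and peer_group_ab as hypothetical names for the control variables:

```python
# One of the four regression models: emergency-category performance
# regressed on the two triage-mix variables plus the hospital controls.
# peer_group_ab is assumed to be a 0/1 indicator for peer group A or B.
import statsmodels.formula.api as smf

model = smf.ols(
    "perf_emergency ~ pct_emergency + pct_non_urgent"
    " + seifa + aria + peer_group_ab",
    data=hospitals,
).fit()
print(model.summary())
```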

Regression model assumptions were checked and found acceptable. The relationships between the independent and dependent variables were assessed for linearity by inspection of scatter plots. The equality of variance of the residuals was assessed by inspection of residuals plotted against predicted values, and by White’s test. Multicollinearity was assessed with variance inflation factors, and model results were confirmed by re-estimating the models after excluding observations identified by Cook’s distance as potentially influential.
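
A sketch of equivalent diagnostics, continuing from the model fitted above (White’s test, variance inflation factors and Cook’s distance all have statsmodels counterparts):

```python
# Diagnostics for the fitted model: White's test for heteroscedasticity,
# variance inflation factors, and Cook's distance for influence.
from statsmodels.stats.diagnostic import het_white
from statsmodels.stats.outliers_influence import (
    OLSInfluence, variance_inflation_factor)

lm_stat, lm_p, f_stat, f_p = het_white(model.resid, model.model.exog)
print(f"White's test: P = {lm_p:.3f}")

exog = model.model.exog  # design matrix (column 0 is the intercept)
for i, name in enumerate(model.model.exog_names):
    print(f"VIF({name}) = {variance_inflation_factor(exog, i):.2f}")

# Re-estimate without potentially influential observations (4/n rule of
# thumb); assumes no rows were dropped for missing data, so the mask
# aligns with the original table.
cooks_d = OLSInfluence(model).cooks_distance[0]
refit = smf.ols(
    "perf_emergency ~ pct_emergency + pct_non_urgent"
    " + seifa + aria + peer_group_ab",
    data=hospitals[cooks_d < 4 / len(cooks_d)],
).fit()
```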

To evaluate the impact of adjusting performance scores, we computed an expected performance score for each hospital in each triage category, assuming the hospital had the median triage percentages and was a peer group A or B hospital (we retained each hospital’s actual SEIFA and ARIA values). Because actual performance equals the performance predicted by the regression equation plus the residual, we computed the expected scores using the model coefficients as well as the residuals; the difference between expected and observed performance was therefore due only to the change in triage percentages and hospital type, and not to the fit of the model. We then computed the absolute value of the difference between expected and observed performance. All analyses were conducted with Stata, version 11 (StataCorp), using a significance level of P < 0.05.
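
In the same illustrative terms, the adjustment step amounts to re-predicting each hospital’s score at the median triage mix and peer group A/B status, then adding back its residual so that only the change in triage mix and hospital type moves the score:

```python
# Expected score under the counterfactual: median triage mix, peer group
# A/B, actual SEIFA and ARIA retained. Adding the residual back means the
# expected-minus-observed difference reflects only the change in triage
# mix and hospital type, not model fit.
counterfactual = hospitals.copy()
counterfactual["pct_emergency"] = hospitals["pct_emergency"].median()
counterfactual["pct_non_urgent"] = hospitals["pct_non_urgent"].median()
counterfactual["peer_group_ab"] = 1  # treat every hospital as peer group A/B

expected = model.predict(counterfactual) + model.resid
adjustment = (expected - hospitals["perf_emergency"]).abs()
print(f"mean |adjustment| = {adjustment.mean():.1f} points; "
      f"max = {adjustment.max():.1f} points")
```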

Results

Our analysis included 158 hospitals. These were all public hospitals, except for two private hospitals in Queensland that provided services to public patients. Most hospitals (113) were peer group A or B hospitals, while the remainder (45) were smaller hospitals.

Variation in triage patterns and ED performance

On average, 0.6% of ED patients were assigned to the top urgency triage category, resuscitation (Box 1). There were only small differences between the hospitals in allocation of patients to this category (range, 0.1%–4.0%). In contrast, there was substantial variation between hospitals in allocating patients to the other four triage categories. While hospitals, on average, assessed 8% of ED patients as being in the emergency category, the proportions ranged over 22 percentage points. The ranges for the three less serious categories were more than twice as large, at 45, 50, and 62 percentage points for urgent, semi-urgent, and non-urgent, respectively.

There was also substantial variation in ED waiting time performance (Box 1). The range in performance was 59 percentage points for patients assigned to the emergency category. The range was even larger for the urgent and semi-urgent categories — 66 and 62 percentage points, respectively — and smaller for resuscitation and non-urgent, at 25 and 30 percentage points, respectively.

Associations between triage patterns and ED performance

The correlations in Box 2 show that EDs that allocated more patients to the three most urgent triage categories had poorer overall performance. Conversely, the greater the proportion of non-urgent patients, the better the ED’s emergency, urgent, and semi-urgent performance.

Multivariate regression models that controlled for differences in hospital characteristics showed an association between higher proportions of patients assigned to the emergency category and poorer waiting time performance in the emergency, urgent, and non-urgent triage categories (Box 3). For instance, for every increase of two percentage points in the proportion of patients assigned to the emergency category, performance for the emergency triage category was about one percentage point lower. In addition, the greater the proportion of patients triaged to the non-urgent category, the better the performance for the emergency, urgent, and semi-urgent categories.

The multivariate models also indicated that hospitals located in higher socioeconomic areas had better performance for the emergency triage category than those in lower socioeconomic areas, though there was no relationship between socioeconomic status and performance for the other triage categories (Box 3). Small hospitals had better performance for the urgent, semi-urgent, and non-urgent triage categories than did peer group A and B hospitals (5.9, 9.7, and 6.1 percentage points, respectively).

Adjusting for triage percentages and peer group

We investigated the impact on performance scores of standardising triage proportions and hospital type (Box 4). Based on the regression results, if each hospital were to have the median triage percentages and were a peer group A or B hospital, performance scores would be expected to change, on average, by 3.7, 7.1, and 6.2 percentage points for the emergency, urgent and semi-urgent categories, respectively. While the mean adjustments were modest in size (and were smaller for the non-urgent category), the ranges were wide (as large as 31 percentage points).

Discussion

The results of our study suggest that better performance by EDs in meeting waiting time criteria is related to the reported urgency mix of the EDs’ patients. The data for 158 Australian EDs in 2009–10 showed that those reporting a disproportionately large percentage of emergency patients had poorer performance than those reporting smaller proportions. We also found that EDs reporting proportionally more non-urgent patients had better performance than those reporting fewer non-urgent patients; and, related to this, smaller hospitals, which have more non-urgent patients, performed better than larger hospitals in the three less urgent triage categories. These results raise questions about the comparability of the current Australia-wide performance reporting methods.

The policy goals of publicly reporting hospital quality indicators are to provide greater public accountability, spark hospital quality improvement efforts that would lift the standards of hospitals across the country, and provide consumers with information for making informed choices about their health care.11,12 To achieve these goals, hospital performance data must be accurate and comparable.

One explanation for the patterns shown in the study is that it may be more difficult operationally to ensure that ED patients are treated within the recommended time frames when patients need treatment very quickly. Since patients allocated to higher urgency categories are more likely to be admitted to the hospital, one potential cause of lower waiting time performance among EDs treating highly urgent patients may be access block.13 Conversely, it may be easier for an ED to meet the recommended guidelines for initiating treatment when its patients need treatment to commence within a longer time frame. If true, then hospitals do not face a level playing field when being assessed on ED performance, and EDs with higher proportions of more urgent patients are disadvantaged under the current reporting system.

Another potential explanation for our findings relates to assigning patients to a less urgent triage category than is appropriate. Such “undertriaging” gives EDs a longer recommended time frame for initiating treatment, which would be likely to translate into better performance. If there are EDs that routinely allocate “true” emergency patients to the urgent category, their performance scores would probably be inflated. Our study does not provide any evidence that undertriaging is taking place in EDs. It is a possibility worthy of consideration, however, since there has been documented gaming of other hospital performance indicators in Australia, and of ED performance overseas.14,15

Performance scores adjusted for the urgency mix of patients and the size of the hospital allow fairer comparisons, regardless of which of the above explanations drives the observed relationship between urgency mix and performance. Our findings suggest that, while on average such adjustments would be modest in size, they could have a substantive impact on hospital rankings. For example, of the 10 hospitals with the highest performance scores for the emergency triage category, only six would remain in the top 10 if adjusted performance scores were used. Fewer than half (four and three, respectively) would remain in the top 10 for the urgent and semi-urgent triage categories. It is important to ensure, however, that any adjustment of scores is limited to accounting for factors that causally affect performance and are not the result of confounding. If, for example, EDs with more emergency patients had worse performance because they attracted managers who were less skilled in managing high-demand situations and accepted long patient waiting times as immutable, then fully adjusting performance scores would in essence excuse poorer management. More investigation of the factors that affect ED performance is required.
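
As a rough illustration, using the hypothetical variables from the Methods sketches, the top-10 comparison reduces to intersecting the sets of hospitals ranked highest on observed and adjusted scores:

```python
# Compare top-10 hospitals on observed versus adjusted emergency scores.
top_observed = set(hospitals["perf_emergency"].nlargest(10).index)
top_adjusted = set(expected.nlargest(10).index)
print(f"{len(top_observed & top_adjusted)} of 10 hospitals remain in the top 10")
```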

Future research on the relationship between patient urgency and waiting time would benefit from analysis of patient-level data, which would allow controlling for differences in patient demographics and health status. This was not possible with the hospital-level data used in this study. It is also noteworthy that the measures currently used to assess ED performance are not validated.16 In other words, it is unknown whether patient health outcomes are worse in EDs that less consistently meet the recommended time frames for initiating patients’ treatment. Developing this evidence base or identifying alternative evidence-based performance metrics is important for creating a public reporting scheme that is trusted and respected by the medical profession.

Our study also contributes to the literature on equity in health care in Australia. Overall, the results suggest that hospitals have similar ED waiting time performance in areas of high compared with low socioeconomic status, and in urban compared with rural areas. There was, however, one exception. Performance in the emergency triage category was worse in areas of lower socioeconomic status. Future research should monitor these trends and further examine whether there are differences in ED waiting times for patients of differing socioeconomic status within hospitals.

In conclusion, our study highlights the challenge of publicly reporting hospital quality data. We found that the current ED performance metrics may be biased in favour of EDs that report fewer urgent patients. Adjusting performance scores for variation in patient and hospital characteristics could ameliorate this bias. This type of adjustment is considered crucial for the viability of patient-outcome performance measures,17-19 and our findings suggest it may be important for waiting time measures too. Data audits, however, would still be necessary to ensure the comparability of data collection processes across hospitals, particularly since inconsistencies have been documented in the past.20 As more quality indicators are publicly reported in the future, it will be increasingly important to consider when and how adjustment of quality indicators is applied.

Received 28 September 2011, accepted 8 July 2012

  • Jessica Greene1
  • Jane Hall2

  • 1 Planning, Public Policy and Management, University of Oregon, Eugene, Ore, USA.
  • 2 Centre for Health Economics Research and Evaluation, University of Technology Sydney, Sydney, NSW.


Correspondence: jessicag@uoregon.edu

Acknowledgements: 

We thank Jan Blustein and the anonymous reviewers for their helpful comments on earlier versions of the manuscript. Jessica Greene acknowledges the Centre for Health Economics Research and Evaluation at the University of Technology Sydney, where she was the 2010–2011 Australian-American Health Policy Fellow. Her fellowship was supported by the Australian Department of Health and Ageing and The Commonwealth Fund.

Competing interests:

Jane Hall is a member of the board of the NSW Bureau of Health Information (BHI). The views presented here are those of the authors and not necessarily those of the Fellowship supporters or the BHI.
