Gaining value from decision support in electronic medication management systems requires a well evidenced approach
Compelling evidence has shown that medication errors and poor medication management result in preventable harm at an estimated annual cost of $1.2 billion.1 Hospitalised patients in Australia experience an average of 1–1.5 prescribing errors during an admission, and about 9% of medications administered are associated with some form of clinical error.1,2
Electronic medication management (eMM) systems target the sources of medication errors and inappropriate medication use in hospitals. To drive safe and appropriate use of medications, and do so on a scale that has not previously been possible, electronic medication data needs to be shared across health care silos (hospital, community and aged care settings). Implementation of eMM systems is an important element in realising this potential. Across Australia, eMM roll‐out is underway, with some hospitals now having over a decade's experience. Here, we bring together evidence of the effectiveness of eMM systems, with a view to advancing medication safety in Australian hospitals. We discuss how improving the design and sophistication of electronic decision support systems will be fundamental to achieving desired medication safety and patient outcomes.
Evidence of eMM system effectiveness in Australian hospitals
To date, few Australian studies on eMM system effectiveness have been carried out. The only controlled before‐and‐after study reported greater than 50% reductions in prescribing error rates at two teaching hospitals, each of which implemented a different commercial eMM system with limited decision support.2 A before‐and‐after study of an eMM system with an integrated dispensing system in outpatient clinics and an emergency department showed that the eMM system was associated with a significant decrease in medication errors.3 New types of errors associated with the use of an eMM system have been noted at several hospitals; these have generally been low severity, and related to factors such as incorrect selection of patients and drugs from drop‐down menus.4,5 To our knowledge, no published Australian studies, and few overseas studies, have sought to demonstrate a reduction in medication‐related harm following implementation of an eMM system.
Current approaches to selecting electronic decision‐support alerts and the need for a more evidence‐based approach
There are considerable opportunities to leverage electronic medication data to drive decision‐support algorithms and thereby enhance decision making and improve patient outcomes. Sound research evidence indicates that well designed and targeted decision‐support alerts can prevent medication errors.6 However, there is a lack of evidence demonstrating that an entire suite of decision‐support alerts (eg, drug–drug interaction [DDI] alerts and dose range alerts) has a cumulative benefit in minimising errors and subsequent patient harm. A recent systematic review showed some positive outcomes on prescribing behaviours or patient outcomes in five out of six studies of drug–condition related alerts, and two out of six studies of DDI alerts.6 No study investigated the effectiveness of combining multiple categories of alerts within an eMM system.6 This evidence gap is highly problematic as hospitals rarely activate a single alert type. A recent survey of 26 Australian hospitals with eMM systems showed that they had, on average, five alert types in their systems.7 All hospitals had implemented drug allergy and DDI alerts, with most also adopting dose range alerts.
Alert fatigue, an inevitable consequence of implementing too many alerts in a system, is now recognised as a widespread and persistent problem. It has been estimated that clinicians ignore up to 95% of alerts, rendering decision support ineffective in many instances.8 Australian doctors prescribing during ward rounds have been shown to receive alerts in 50% of medication orders.9 Attempts to alleviate alert fatigue by improving decision support have, on the whole, proven unsuccessful. Once alerts have been implemented, it is difficult to gain consensus on removing them.
A core question remains unanswered: how many alerts are too many? What we do know is that fewer alerts are likely to be more effective. But how do hospitals make decisions about which alerts to include? Currently these decisions appear to be largely driven by perceptions that alerts change prescriber behaviour and improve outcomes,7 and the belief that “more is better”. Hindering a more evidence‐based approach to targeted decision support is the lack of hospital data regarding the incidence of specific types of medication errors, and data on which types of errors pose the greatest risk of patient harm.
We draw on DDIs, the target of alerts in most Australian eMM systems, as an example of where limited data are available to support decision processes. We conducted a systematic review to assess the prevalence of DDIs in hospitals and associated patient harm. Twenty‐seven such studies were published between 2000 and 2016.10 About 33% of patients in general wards and 67% of patients in intensive care units experienced a potential DDI. Only four studies investigated DDIs which could potentially lead to harm given the patient's clinical profile, or reported actual harm resulting from DDIs.10 In that small sample, only about 2% of DDIs were associated with any harm to patients, and these were generally of low severity. Thus, there is currently a lack of compelling evidence that DDIs are a major source of patient harm in hospitals.
Yet, DDI alerts are the most frequently implemented by Australian hospitals in their eMM systems, with some incorporating >15 000 alerts to warn prescribers of potential DDIs. A legitimate question is whether the probability of harm from DDIs in hospitals warrants the inclusion of this volume of DDI alerts given the risk and consequences of alert fatigue. Drawing on existing epidemiological evidence of medication error prevalence should be an important component in discussions about which interruptive decision support alerts are included in hospital eMM systems, and a similar argument can be made for decision support in general practice systems.
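To illustrate what a more evidence‐based selection process might look like, the hypothetical sketch below filters a DDI alert catalogue so that only interactions with documented evidence of harm above a severity threshold fire interruptively, while the remainder could be logged or displayed passively. The drug pairs, severity scale and threshold are illustrative assumptions, not drawn from any eMM product or clinical knowledge base.

```python
from dataclasses import dataclass

@dataclass
class DdiAlert:
    drug_a: str
    drug_b: str
    severity: int        # 1 (minor) to 5 (life-threatening); hypothetical scale
    harm_evidence: bool  # is patient harm documented in the literature?

def select_interruptive(catalogue, min_severity=4):
    """Keep only alerts judged worth interrupting the prescriber for;
    lower-priority alerts could be logged or shown non-interruptively
    to limit alert fatigue."""
    return [a for a in catalogue
            if a.harm_evidence and a.severity >= min_severity]

# Illustrative catalogue entries (not clinical advice)
catalogue = [
    DdiAlert("warfarin", "amiodarone", severity=5, harm_evidence=True),
    DdiAlert("simvastatin", "amlodipine", severity=2, harm_evidence=False),
    DdiAlert("methotrexate", "trimethoprim", severity=5, harm_evidence=True),
]

interruptive = select_interruptive(catalogue)
print([f"{a.drug_a}+{a.drug_b}" for a in interruptive])
# two of the three hypothetical pairs pass the filter
```

In practice, the severity and harm‐evidence fields would need to be populated from epidemiological data of the kind discussed above, which is precisely the data currently lacking for most hospitals.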
The effectiveness of decision support is also highly context‐specific. This has been demonstrated in a study on the effects of eMM decision support on prescribing decisions during hospital ward rounds.9 Despite nearly 50% of medication orders generating an alert, fewer than 20% of these alerts were read and no orders were changed in response. This was because senior clinicians decided on the medication orders, but junior doctors on the ward rounds entered the orders and received the alerts. Not once during this observational study did a junior doctor signal to the ward round team that a medication alert had been received. Thus, in this context, the decision support was rendered largely ineffective.9 A subsequent study of junior doctors prescribing alone at night, at the same hospital, showed a contrasting result. Junior doctors read nearly 80% of alerts and changed about 5% of their medication orders in response.11 Thus both content and context of alert generation are central to the effectiveness of decision support.
Digital nudging and the future of decision support
Behavioural economists and psychologists have studied factors which influence individuals’ decision making. Such studies have shown that the way in which decision options are presented to users can influence their choices. For instance, an option that maintains the status quo will be selected over one that requires making a change (eg, asking people to opt out rather than opt in);12 items placed first on a list will be selected more frequently than subsequent items;13 and presenting antibiotic choices grouped according to narrow or broad spectrum, rather than listing individual drugs, results in a significant reduction in inappropriate antibiotic use.14 Placing tests or medications in an order‐set can increase their use, even in situations when not clinically appropriate.15 Applying elements of such choice architecture to “nudge” people to make a “desirable” decision can be effective in increasing adherence to clinical guidelines while avoiding problems associated with interruptive alerts.16,17
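The antibiotic example above can be sketched in code. The following is a minimal, hypothetical illustration of a partitioned order set: rather than a flat alphabetical list, options are grouped by spectrum, with the narrow‐spectrum group presented first. The drug names and groupings are assumptions for illustration only, not a formulary recommendation.

```python
# Illustrative mapping of antibiotics to spectrum (hypothetical, not clinical advice)
ANTIBIOTICS = {
    "amoxicillin": "narrow",
    "cefalexin": "narrow",
    "ciprofloxacin": "broad",
    "piperacillin-tazobactam": "broad",
}

def flat_menu(drugs):
    """Conventional presentation: one alphabetical list of all options."""
    return sorted(drugs)

def partitioned_menu(drugs):
    """Nudged presentation: narrow-spectrum options listed first under a
    'preferred' heading, broad-spectrum options behind a second heading."""
    narrow = sorted(d for d, s in drugs.items() if s == "narrow")
    broad = sorted(d for d, s in drugs.items() if s == "broad")
    return [("Narrow spectrum (preferred)", narrow),
            ("Broad spectrum", broad)]

for heading, options in partitioned_menu(ANTIBIOTICS):
    print(heading, "->", options)
```

The point of the design is that no option is removed or blocked; the same choices remain available, but the ordering and grouping steer prescribers towards guideline‐concordant defaults without an interruptive alert.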
As hospitals increasingly engage in the design of more complex decision support, greater attention to the behavioural effects of design elements is required. The availability of electronic health record data escalates the potential for more sophisticated analytic and artificial intelligence approaches to drive new forms of decision support. Complex neural network approaches to achieve deep learning (automated feature learning that does not require explicit programming of rules) are being applied to electronic health record systems to deliver a new generation of decision support. For example, one US project, Deep Patient, is leveraging information from the electronic records of 700 000 patients to more accurately predict the probability of patients developing diseases or being readmitted, and to make intervention recommendations such as the most appropriate medications.18 Digital nudging which seeks to influence decision making through interface design will be essential to supporting the effectiveness of such future decision support.19
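As a conceptual stand‐in for the kind of prediction described above, the toy sketch below scores readmission risk with a simple logistic function over a few illustrative features. It is emphatically not the Deep Patient architecture (which uses unsupervised deep learning over hundreds of thousands of records); the features and weights are made up purely to show the input‐to‐risk‐score shape such decision support takes.

```python
import math

# Hypothetical weights for illustrative patient features; in a real system
# these would be learned from large volumes of electronic health record data.
WEIGHTS = {"age_over_75": 1.2, "prior_admissions": 0.8, "polypharmacy": 0.9}
BIAS = -3.0

def readmission_risk(features):
    """Return a probability-like score in (0, 1) for a patient record,
    using a logistic function over weighted features."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1 / (1 + math.exp(-z))

low = readmission_risk({"age_over_75": 0, "prior_admissions": 0, "polypharmacy": 0})
high = readmission_risk({"age_over_75": 1, "prior_admissions": 2, "polypharmacy": 1})
print(round(low, 3), round(high, 3))
```

Whatever the underlying model, the score alone does not change care: how it is surfaced to clinicians, which is where digital nudging enters, determines whether the prediction influences decisions.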
Gaining value from decision support in eMM systems requires a targeted, well evidenced approach with rigorous governance, evaluation and monitoring of effectiveness. The ability to incorporate multiple alerts, with a view that more will be better, is currently driving the inclusion of large volumes of interruptive alerts in our eMM systems. As other authors have highlighted, a fear of missing something often looms large in decisions about electronic clinical system design, even when a negative event is unlikely.20 Nudging hospitals towards a more evidence‐based approach, and learning from nudge theory and choice architecture about effective ways of presenting information to guide decision making, may present the best way forward to achieving substantial and sustainable improvements in patient outcomes, while reducing the cognitive burden on clinicians.
Provenance: Commissioned; externally peer reviewed.
- 1. Roughead EE, Semple SJ, Rosenfeld E. The extent of medication errors and adverse drug reactions throughout the patient journey in acute care in Australia. Int J Evid Based Healthc 2016; 14: 113–122.
- 2. Westbrook J, Reckmann M, Li L, et al. Effects of two commercial electronic prescribing systems on prescribing error rates in hospital inpatients: a before and after study. PLoS Med 2012; 9: e1001164.
- 3. Hodgkinson MR, Larmour I, Lim S, et al. The impact of an integrated electronic medication prescribing and dispensing on prescribing and dispensing errors: a before and after study. J Pharm Pract Res 2017; 47: 110–120.
- 4. Van de Vreede M, de Clifford J, McGrath A. Staff experience and perceptions of the safety and risks of electronic medication management systems in Victorian public hospitals. J Pharm Pract Res 2018; 48: 18–25.
- 5. Westbrook JI, Baysari MT, Li L, et al. The safety of electronic prescribing: manifestations, mechanisms, and rates of system‐related errors associated with two commercial systems in hospitals. J Am Med Inform Assoc 2013; 20: 1159–1167.
- 6. Page N, Baysari MT, Westbrook JI. A systematic review of the effectiveness of interruptive medication prescribing alerts in hospital CPOE systems to change prescriber behavior and improve patient safety. Int J Med Inform 2017; 105: 22–30.
- 7. Page N, Baysari MT, Westbrook JI. A national survey of the use of decision support prescribing alerts in electronic medication management systems in Australian hospitals. J Pharm Pract Res 2019. In press.
- 8. Bryant AD, Fletcher GS, Payne TH. Drug interaction alert override rates in the meaningful use era: no evidence of progress. Appl Clin Inform 2014; 5: 802–813.
- 9. Baysari MT, Westbrook JI, Richardson KL, et al. The influence of computerized decision support on prescribing during ward‐rounds: are the decision‐makers targeted? J Am Med Inform Assoc 2011; 18: 754–759.
- 10. Zheng WY, Richardson LC, Li L, et al. Drug–drug interactions and their harmful effects in hospitalised patients: a systematic review and meta‐analysis. Eur J Clin Pharmacol 2018; 74: 15–27.
- 11. Jaensch SL, Baysari MT, Day RO, et al. Junior doctors’ prescribing work after‐hours and the impact of computerized decision support. Int J Med Inform 2013; 82: 980–986.
- 12. Samuelson W, Zeckhauser R. Status quo bias in decision making. J Risk Uncertain 1988; 1: 7–59.
- 13. Koppell J, Steen J. The effects of ballot position on election outcomes. J Politics 2004; 66: 267–281.
- 14. Tannenbaum D, Doctor JN, Persell SD, et al. Nudging physician prescription decisions by partitioning the order set: results of a vignette‐based study. J Gen Intern Med 2015; 30: 298–304.
- 15. Leis B, Frost A, Bryce R, et al. Standard admission order sets promote ordering of unnecessary investigations: a quasi‐randomised evaluation in a simulated setting. BMJ Qual Saf 2017; 26: 938–940.
- 16. Thaler R, Sunstein C. Nudge: improving decisions about health, wealth and happiness. London: Penguin, 2008.
- 17. Bourdeaux CP, Davies KJ, Thomas MJC, et al. Using “nudge” principles for order set design: a before and after evaluation of an electronic prescribing template in critical care. BMJ Qual Saf 2014; 23: 382–388.
- 18. Miotto R, Li L, Kidd BA, et al. Deep Patient: an unsupervised representation to predict the future of patients from the electronic health records. Sci Rep 2016; 6: 26094.
- 19. Weinmann M, Schneider C, vom Brocke J. Digital nudging. Bus Inf Syst Eng 2016; 58: 433–436.
- 20. Vaughn VM, Linder JA. Thoughtless design of the electronic health record drives overuse, but purposeful design can nudge improved patient care. BMJ Qual Saf 2018; 27: 583–586.