
Artificial intelligence for surgical services in Australia and New Zealand: opportunities, challenges and recommendations

Joshua G Kovoor, Stephen Bacchi, Prakriti Sharma, Srishti Sharma, Medhir Kumawat, Brandon Stretton, Aashray K Gupta, WengOnn Chan, Amal Abou‐Hamden and Guy J Maddern
Med J Aust 2024; doi: 10.5694/mja2.52225
Published online: 4 March 2024

Artificial intelligence (AI) is being rapidly taken up by society, including health care services, and will inevitably be used broadly within the surgical services of Australia and New Zealand. However, the process of AI implementation must be evidence‐based, safe and ethically cautious,1 and must adhere to the recommendations of the international surgical data science community.2 AI has numerous limitations and should always serve as an adjunct that benefits outcomes, rather than a replacement for the staff within surgical systems. This perspective discusses opportunities and challenges for the use of AI in the surgical services of Australia and New Zealand and provides recommendations for the future.


Current non‐surgical views on AI in Australia and New Zealand

Non‐surgical colleges across Australia and New Zealand have begun to discuss AI but, at the time of writing, only three (12%) of the 25 specialist medical colleges within the Australian Medical Council have published AI position statements.3,4,5 This will probably change with time, and surgical bodies should remain aware of current opinions held by their non‐surgical counterparts. There is broad consensus that greater implementation of AI may increase care efficiency, potentially facilitating service provision across demographic groups and rural populations, although there is also a risk of augmenting existing health inequity.5 Further, AI might improve diagnostic accuracy and efficiency, particularly for rare diseases.5 However, significant concern exists regarding ethical issues and other risks. AI presents potential liability implications arising from automation bias, and patient safety concerns arising from decreased sophistication in human decision making.3,4,5,6,7 As surgical decision making often carries substantial consequences, any degree of AI automation should first be scientifically interrogated and should be implemented only when there is demonstrable patient and system benefit, ensuring surgical staff roles are preserved.6

When integrating AI within surgical systems, it is important that the views of wider Australian and New Zealand society, a key stakeholder group, are also considered. A 2021 systematic review of the global literature on patient and general public attitudes to clinical AI found overall positive attitudes but many reservations and a preference for human supervision.8 However, an update of this review might produce different results, given how rapidly AI has been taken up by broader society through tools such as ChatGPT. Evidence specific to surgery is limited, but studies have found that patients regard AI as acceptable, except for fully autonomous surgery.9 Although serial assessments of Australian and New Zealand general public opinion are crucial for understanding the current state of play, large studies such as the AVA‐AI study provide reliable evaluations of national views.10 Within this large Australian survey, about 60% of respondents supported the development of AI, but support decreased to 27–43% when questions regarding specific health care scenarios were posed.10 A similar study of the Australian and New Zealand general public has not been conducted for the use of AI by surgical services specifically, and this presents an important gap for future research.

Development and implementation

The global AI boom presents opportunities to substantially enhance the quality and efficiency of Australian and New Zealand surgical services, both at the systems level and in the pre‐operative, intra‐operative and post‐operative phases of individual patient journeys. At the systems level, locally developed AI tools, such as the Adelaide Score developed in South Australia, can potentially increase efficiency by providing additional predictive data relating to events that occur in each inpatient admission, such as hospital discharge.11 For individual surgical patients, AI tools can enhance clinical decision making at all time points, such as pre‐operative informed consent and risk assessment, intra‐operative precision and vision, and post‐operative care for improved recovery and follow‐up.12 However, when exploring these opportunities for surgical services, it is imperative that all AI tools are developed, validated and implemented using evidence‐based and internationally accepted approaches, while also being trained with local data. This is challenging but crucial, regardless of the planned use of the AI tool. Currently, relevant evidence for AI use in Australian and New Zealand surgical services is mostly limited to early phase studies that might not reflect real‐world practice. Prospective AI implementation trials within the real‐world clinical context are required, and only after clear benefits are demonstrated for surgical patients and systems should these tools be used routinely in Australian and New Zealand surgical services.1 There is currently a paucity of randomised trials regarding AI interventions, although this will change in coming years.13 Further, multiple statements have now been developed for reporting AI research and are listed by the EQUATOR Network.14 It is crucial that similar frameworks for AI use are developed specifically for surgical services in Australia and New Zealand, so that the present opportunities can be explored safely, while also ensuring optimal benefit for local patients and systems. Until these are more developed, local surgical technology assessment organisations, such as ASERNIP‐S within the Royal Australasian College of Surgeons (RACS),15 can assist with ensuring adherence to evidence‐based principles during the uptake of AI by Australian and New Zealand surgical services.
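To make the requirement for local training and validation concrete, a minimal sketch follows of a discharge‐prediction model of the general kind described above. It is illustrative only and is not the Adelaide Score: the data are synthetic, the feature set is hypothetical, and any real tool would require prospective local validation before clinical use.

```python
# Illustrative sketch only: a simple discharge-prediction model in the spirit
# of tools such as the Adelaide Score. Features and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 2000

# Synthetic stand-ins for routinely collected inpatient data.
X = np.column_stack([
    rng.normal(60, 15, n),     # age (years)
    rng.normal(37.0, 0.5, n),  # temperature (deg C)
    rng.normal(8, 4, n),       # white cell count (x10^9/L)
    rng.integers(0, 2, n),     # emergency admission (0/1)
])
# Synthetic outcome: discharge within 24 hours, loosely tied to the features.
logits = -0.03 * (X[:, 0] - 60) - 1.2 * (X[:, 1] - 37.0) - 0.8 * X[:, 3]
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

# Hold out a validation set: performance must be checked before clinical use.
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
print(f"Validation AUC: {auc:.2f}")  # prospective, local validation still required
```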

Post‐implementation monitoring and audit

After opportunities for implementing AI are pursued by Australian and New Zealand surgical services, an additional challenge is ensuring appropriate and effective post‐implementation monitoring and audit processes. These processes must evaluate ongoing outcome benefits to surgical patients, staff and systems, while also ensuring that local factors relevant to individual surgical services continue to be optimally considered. Adequate infrastructure for audit, for retraining (if a tool provides suboptimal benefit or inadequately considers local factors) and for abandonment (if unresolvable surgical safety concerns are identified during audit) should be in place before surgical AI tools are used broadly within clinical practice. It is essential that surgical services also correspond and collaborate with local and national audit bodies, such as the Australian and New Zealand Audit of Surgical Mortality of the RACS, when using AI tools that could have significant outcome implications. Surgical patient safety and confidentiality must be maintained, particularly with increasing commercial interests in AI that may potentially conflict with the overall public good.16 As with any surgical innovation, AI must be rigorously audited and its use within the surgical services of Australia and New Zealand must always adhere to evidence‐based principles.16
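A minimal sketch of how such audit infrastructure might operate is shown below, assuming hypothetical performance thresholds for retraining and abandonment; in practice, these thresholds would be agreed with clinicians, local services and national audit bodies.

```python
# Sketch of post-implementation audit logic with hypothetical thresholds:
# flag a deployed surgical AI tool for retraining when performance drifts
# below an agreed level, and for abandonment review at a safety floor.
from sklearn.metrics import roc_auc_score

RETRAIN_AUC = 0.75   # below this, schedule retraining on local data (assumed)
ABANDON_AUC = 0.60   # below this, escalate for abandonment review (assumed)

def audit_monthly(y_true, y_score, month: str) -> str:
    """Compare one month of predictions against observed outcomes."""
    auc = roc_auc_score(y_true, y_score)
    if auc < ABANDON_AUC:
        action = "escalate: unresolved safety concern, consider abandonment"
    elif auc < RETRAIN_AUC:
        action = "retrain on local data and re-validate"
    else:
        action = "continue routine monitoring"
    print(f"{month}: AUC={auc:.2f} -> {action}")
    return action

# Example with synthetic outcome/prediction pairs for one month.
audit_monthly([0, 1, 1, 0, 1, 0, 1, 1],
              [0.2, 0.8, 0.7, 0.4, 0.9, 0.1, 0.3, 0.6], "2024-03")
```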

Engagement with regulatory bodies

The integration of AI within Australian and New Zealand surgical services must be carefully regulated, and this presents an important challenge when clinical opportunities are being pursued. If done too rapidly and with little governance or consideration of necessary protection from profit‐driven commercial entities, unpredicted risks could be conferred to future surgical patients. Surgical oversight bodies such as the RACS play a crucial role here, but engagement with external regulatory bodies is also required. Novel AI devices may be required to obtain pre‐market approval from the Therapeutic Goods Administration (TGA) for inclusion in the Australian Register of Therapeutic Goods before widespread implementation, which would allow surgical AI devices to be legally supplied in Australia.6 Technically, surgical devices that incorporate AI are classified as software as a medical device (SaMD), which is regulated by the TGA. The TGA is also a founding member of the International Medical Device Regulators Forum and therefore aims to remain consistent with the Forum's approach to the regulation of SaMD. The Australian Commission on Safety and Quality in Health Care also provides standards for AI and automated decision making in the health care sector.17 From a legislative point of view within Australia, both the Therapeutic Goods Act 1989 and the Therapeutic Goods (Medical Devices) Regulations 2002 specify the conformity assessment procedures, classification rules and principles for regulation, which also apply to AI in surgical systems. Further, bodies such as the Australian Ethical Health Alliance and the Australian Alliance for Artificial Intelligence in Healthcare provide guidance for regulating AI that is applicable to Australian and New Zealand surgical systems.18 Overall, there is a need for regulatory authorities to provide a clear framework specifically for the use of AI by surgical services within Australia and New Zealand.

Ethical considerations

Alongside the challenges raised, substantial ethical dangers accompany AI, and caution must be applied when exploring opportunities to integrate this technology to assist surgical care.19 Overriding principles such as “first, do no harm” must always be upheld. AI algorithm performance is influenced by the characteristics of training datasets, and outcome benefits can differ depending on whether public data or real‐world local data are used.20 Further, there may be bias in training data, which risks the AI algorithms perpetuating or exacerbating pre‐existing inequalities (such as those related to race, sex, age or socio‐economic status), leading to discriminatory outcomes.21 Surgeons may treat a specific patient population because of the location of their institution or their specialised professional interests, and algorithms trained on datasets derived at a population level may perform suboptimally at a local level. Surgical services should therefore be aware of potential biases in algorithms and limitations of training data, and should regularly audit AI‐driven systems after local deployment.
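As one illustration of such an audit, the sketch below stratifies a standard discrimination metric by demographic subgroup so that performance gaps are visible rather than hidden in a single average; the data, groups and threshold for concern are all hypothetical.

```python
# Sketch of a subgroup fairness audit: compute the same metric separately for
# each demographic group. Data and group labels below are hypothetical.
from collections import defaultdict
from sklearn.metrics import roc_auc_score

def subgroup_auc(y_true, y_score, groups):
    """Return AUC per demographic subgroup (e.g., age band, sex, rurality)."""
    by_group = defaultdict(lambda: ([], []))
    for yt, ys, g in zip(y_true, y_score, groups):
        by_group[g][0].append(yt)
        by_group[g][1].append(ys)
    return {g: roc_auc_score(yt, ys) for g, (yt, ys) in by_group.items()}

# Hypothetical audit: a gap like this should trigger review before further use.
results = subgroup_auc(
    y_true=[1, 0, 1, 0, 1, 0, 1, 0],
    y_score=[0.9, 0.2, 0.8, 0.3, 0.55, 0.55, 0.6, 0.4],
    groups=["metro", "metro", "metro", "metro",
            "rural", "rural", "rural", "rural"],
)
print(results)  # e.g. {'metro': 1.0, 'rural': 0.875}
```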

AI systems are frequently described as “black boxes” due to the absence of reproducible reasoning underpinning their decision‐making processes.22 This lack of transparency may make it difficult for surgeons to have confidence in AI‐assisted recommendations, particularly in circumstances where there are major differences between specialist surgeon opinion and AI output. Furthermore, surgeons may encounter difficulty in explaining these recommendations to patients, given that it may only be possible to explain the inputs and outputs of the algorithms rather than their internal processes.22 Australian and New Zealand surgical staff should become familiar with interpreting and transparently communicating the inputs and outputs of AI tools, encourage patients to ask questions and express concerns, and provide information to aid patient‐friendly explanations. In addition, this “black box” intermediary may make informed consent more challenging, although similar concessions are made when prescribing efficacious medications with mechanisms of action that are not entirely understood.23 Maintaining the shared decision‐making process while using AI algorithms may similarly prove challenging, as these algorithms often do not currently account for patient values.24 Surgical staff should be vigilant when using AI‐generated recommendations and ensure alignment with patient interests, while also informing patients about the factors considered and excluded by AI decision‐support tools.
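One way staff can communicate inputs and outputs without claiming insight into internal processes is illustrated in the sketch below, which applies permutation importance to a hypothetical opaque model; this is one possible approach, not a prescribed method, and the model and inputs are assumptions for illustration.

```python
# Sketch of input-output explanation for an opaque model: permutation
# importance probes which inputs drive the outputs without opening the
# "black box" itself. Model and data below are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                               # three hypothetical inputs
y = (X[:, 0] + 0.2 * rng.normal(size=500) > 0).astype(int)  # driven by input 0

model = RandomForestClassifier(random_state=0).fit(X, y)    # opaque internals
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Report only what can honestly be communicated: how strongly each input
# influences the output, not the model's internal reasoning.
for name, score in zip(["input_0", "input_1", "input_2"], result.importances_mean):
    print(f"{name}: importance {score:.3f}")
```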

Surgical decision making is complex, which adds challenges to liability and accountability when AI is used by surgical services.24 Current malpractice guidelines are unlikely to adequately consider the complexities introduced by AI, making it difficult to determine responsibility and assign blame where human and AI factors both contribute to adverse surgical outcomes.25 Surgical staff should refrain from substituting their clinical expertise with AI‐based recommendations, instead using AI as an adjunct tool to increase the quality of their care.26 Given the known effect of automation bias,27 it should be acknowledged that this suggestion may not be feasible in all surgical scenarios. When using AI tools, surgical staff should comprehensively document their rationale for all clinical decision making, particularly any deviations from AI recommendations. Similarly, it is imperative that surgical staff comprehend the ethical implications for patient privacy and confidentiality when AI is integrated within their service. As clinical AI systems are likely to require sensitive patient data, surgical staff must adequately inform patients about how their data will be used, including risks such as misuse or data breaches. Patients must be educated about the possibility that certain AI systems may use these inputs for continuous model training or retain their information for future reference.28 Surgical staff must also offer opt‐out processes where feasible. Ultimately, clear policies must be established for the handling of sensitive patient information by AI within the surgical systems of Australia and New Zealand.
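As a hedged illustration of the documentation suggested above, the sketch below logs each AI recommendation alongside the clinician's final decision and rationale; all field names and the tool name are hypothetical, and a real system would need to meet local privacy and medical records requirements.

```python
# Sketch of structured documentation for AI-assisted decisions: every AI
# recommendation is logged with the clinician's final decision and rationale,
# so deviations are traceable at audit. All fields are hypothetical.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class AIDecisionRecord:
    patient_id: str          # de-identified where feasible
    tool_name: str
    ai_recommendation: str
    clinician_decision: str
    rationale: str           # mandatory when deviating from the AI output
    timestamp: str

def log_decision(record: AIDecisionRecord, logfile: str = "ai_audit_log.jsonl") -> None:
    """Append one structured record to an append-only audit log."""
    with open(logfile, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_decision(AIDecisionRecord(
    patient_id="ANON-0042",
    tool_name="discharge_readiness_model_v2",  # hypothetical tool
    ai_recommendation="ready for discharge",
    clinician_decision="remain admitted",
    rationale="ongoing fever; AI input data predated overnight deterioration",
    timestamp=datetime.now(timezone.utc).isoformat(),
))
```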

Future directions and recommendations

The surgical services of Australia and New Zealand currently have an opportunity to become international leaders in the safe, reliable and effective use of AI. Recommendations to seize this opportunity and overcome relevant challenges are provided in the Box. An evidence‐based approach must be maintained, and the significant ethical concerns must always be addressed. As with any new surgical technology, there must be careful critical appraisal, regulation, and post‐implementation monitoring and audit. It is important to emphasise that AI is an assistive technology intended to help surgical staff and patients, not to replace them. Efforts should be made to educate surgical patients and staff in Australia and New Zealand on the use of AI, including its benefits and limitations. There is a need for further guideline statements providing frameworks specifically relevant to the use of AI by surgical services across Australia and New Zealand, and for broad collaboration between these services during the widespread uptake of AI to ensure safety at a national scale.

Box – Recommendations for the future use of artificial intelligence (AI) by the surgical services of Australia and New Zealand

  • Understand and consider current opinions on AI held by the non‐surgical health care communities and the wider society of Australia and New Zealand through serial evaluation.
  • Maintain a strict evidence‐based approach, adherent to internationally recognised frameworks but also considerate of local factors, when developing and implementing AI tools within Australian and New Zealand surgical services, regardless of the aspect of surgical care, and ensure that risks and benefits to patients and systems are rigorously evaluated.
  • Develop necessary infrastructure for strict post‐implementation monitoring and audit of AI tools to ensure ongoing patient and system benefit, in alignment with the principles of the Royal Australasian College of Surgeons.
  • Ensure close and ongoing engagement with the regulatory bodies and laws of Australia and New Zealand to guarantee adequate governance and maintenance of patient and staff benefit, particularly amidst interests from commercial entities.
  • Be aware of ethical risks associated with AI and take approaches that address these risks when implementing AI tools, such as ensuring data security.
  • Educate Australian and New Zealand surgical patients and staff on the use of AI, including its benefits and limitations.
  • Produce guidelines specifically relating to the use of AI by surgical services within Australia and New Zealand.
  • Promote broad collaboration between the surgical services of Australia and New Zealand to ensure safe AI use at a national scale.

Provenance: Not commissioned; externally peer reviewed.

  • Joshua G Kovoor1,2
  • Stephen Bacchi3
  • Prakriti Sharma4
  • Srishti Sharma4
  • Medhir Kumawat1
  • Brandon Stretton1
  • Aashray K Gupta1
  • WengOnn Chan1,5
  • Amal Abou‐Hamden1,6
  • Guy J Maddern1,5

  • 1 University of Adelaide, Adelaide, SA
  • 2 Ballarat Base Hospital, Ballarat, VIC
  • 3 Lyell McEwin Hospital, Adelaide, SA
  • 4 Flinders University, Adelaide, SA
  • 5 Queen Elizabeth Hospital, Adelaide, SA
  • 6 Royal Adelaide Hospital, Adelaide, SA


Correspondence: guy.maddern@adelaide.edu.au


Open access:

Open access publishing facilitated by The University of Adelaide, as part of the Wiley – The University of Adelaide agreement via the Council of Australian University Librarians.


Competing interests:

No relevant disclosures.

  • 1. Kovoor JG, Bacchi S, Gupta AK, et al. Artificial intelligence clinical trials and critical appraisal: a necessity. ANZ J Surg 2023; 93: 1141‐1142.
  • 2. Maier‐Hein L, Eisenmann M, Sarikaya D, et al. Surgical data science — from concepts toward clinical translation. Med Image Anal 2022; 76: 102306.
  • 3. Australasian College of Dermatologists. Position statement: use of artificial intelligence in dermatology in Australia. Sydney: ACD, 2022. https://www.dermcoll.edu.au/wp‐content/uploads/2022/11/ACD‐Position‐Statement‐Use‐of‐Artifical‐Intelligence‐in‐Dermatology‐in‐Australia‐Nov‐2022.pdf (viewed May 2023).
  • 4. Royal Australian and New Zealand College of Radiologists. Standards of practice for artificial intelligence. Sydney: RANZCR, 2020. https://www.ranzcr.com/our‐work/artificial‐intelligence (viewed Jan 2024).
  • 5. Royal Australian College of General Practitioners. Artificial intelligence in primary care. Melbourne: RACGP, 2021. https://www.racgp.org.au/advocacy/position‐statements/view‐all‐position‐statements/clinical‐and‐practice‐management/artificial‐intelligence‐in‐primary‐care (viewed May 2023).
  • 6. Australian Medical Association. Automated decision making and AI regulation — AMA submission to the Prime Minister and Cabinet consultation. Canberra: AMA, 2022. https://www.ama.com.au/articles/automated‐decision‐making‐and‐ai‐regulation‐ama‐submission‐prime‐minister‐and‐cabinet (viewed May 2023).
  • 7. World Health Organization. Ethics and governance of artificial intelligence for health: WHO guidance. Geneva: WHO, 2021. https://www.who.int/publications/i/item/9789240029200 (viewed May 2023).
  • 8. Young AT, Amara D, Bhattacharya A, Wei ML. Patient and general public attitudes towards clinical artificial intelligence: a mixed methods systematic review. Lancet Digit Health 2021; 3: e599‐e611.
  • 9. Palmisciano P, Jamjoom AAB, Taylor D, et al. Attitudes of patients and their relatives toward artificial intelligence in neurosurgery. World Neurosurg 2020; 138: e627‐e633.
  • 10. Isbanner S, O'Shaughnessy P, Steel D, et al. The adoption of artificial intelligence in health care and social services in Australia: findings from a methodologically innovative national survey of values and attitudes (the AVA‐AI study). J Med Internet Res 2022; 24: e37611.
  • 11. Kovoor JG, Bacchi S, Gupta AK, et al. The Adelaide Score: an artificial intelligence measure of readiness for discharge after general surgery. ANZ J Surg 2023; 93: 2119‐2124.
  • 12. Loftus TJ, Altieri MS, Balch JA, et al. Artificial intelligence‐enabled decision support in surgery: state‐of‐the‐art and future directions. Ann Surg 2023; 278: 51‐58.
  • 13. Plana D, Shung DL, Grimshaw AA, et al. Randomized clinical trials of machine learning interventions in health care: a systematic review. JAMA Netw Open 2022; 5: e2233946.
  • 14. EQUATOR Network. Search Results for: artificial intelligence. Oxford: UK EQUATOR Centre, University of Oxford; 2023. https://www.equator‐network.org/?s=artificial+intelligence&submit=Go (viewed Dec 2023).
  • 15. Maddern GJ, Babidge WJ, Faulkner KW. ASERNIP‐S: unusual acronym, outstanding results. ANZ J Surg 2020; 90: 670‐674.
  • 16. Tan L, Tivey D, Kopunic H, et al. Part 1: artificial intelligence technology in surgery. ANZ J Surg 2020; 90: 2409‐2414.
  • 17. Mullins G. Regulating AI and ADM in healthcare and HMR. Research Australia 2022; 19 May. https://researchaustralia.org/?s=Artificial+intelligence (viewed July 2023).
  • 18. Australian Alliance for Artificial Intelligence in Healthcare. A roadmap for AI in healthcare for Australia. Sydney: AAAIH, Macquarie University; 2021. https://aihealthalliance.org/2021/12/01/a‐roadmap‐for‐ai‐in‐healthcare‐for‐australia/ (viewed July 2023).
  • 19. Lam K, Abràmoff MD, Balibrea JM, et al. A Delphi consensus statement for digital surgery. NPJ Digit Med 2022; 5: 100.
  • 20. Kirtac K, Aydin N, Lavanchy JL, et al. Surgical phase recognition: from public datasets to real‐world data. Applied Sciences 2022; 12: 8746.
  • 21. Rajkomar A, Hardt M, Howell MD, et al. Ensuring fairness in machine learning to advance health equity. Ann Intern Med 2018; 169: 866‐872.
  • 22. London AJ. Artificial intelligence and black‐box medical decisions: accuracy versus explainability. Hastings Cent Rep 2019; 49: 15‐21.
  • 23. Hindocha S, Badea C. Moral exemplars for the virtuous machine: the clinician's role in ethical artificial intelligence for healthcare. AI and Ethics 2022; 2: 167‐175.
  • 24. Loftus TJ, Tighe PJ, Filiberto AC, et al. Artificial intelligence and surgical decision‐making. JAMA Surg 2020; 155: 148‐158.
  • 25. Holm S, Stanton C, Bartlett B. A new argument for no‐fault compensation in health care: the introduction of artificial intelligence systems. Health Care Anal 2021; 29: 171‐188.
  • 26. Topol EJ. High‐performance medicine: the convergence of human and artificial intelligence. Nat Med 2019; 25: 44‐56.
  • 27. Alon‐Barkat S, Busuioc M. Human–AI interactions in public sector decision making: “automation bias” and “selective adherence” to algorithmic advice. Journal of Public Administration Research and Theory 2023; 33: 153‐169.
  • 28. Vayena E, Blasimme A, Cohen IG. Machine learning in medicine: addressing ethical challenges. PLoS Med 2018; 15: e1002689.
