The utility of ChatGPT in subspecialty consultation for patients (pts) with metastatic genitourinary (GU) cancer.

Authors

Ayana Srivastava, Gliceida Galarza Fortuna, Beverly Chigarira, Emre Dal, Chadi Hage Chehade, Georges Gebrael, Arshit Narang, Neeraj Agarwal, Umang Swami, Haoran Li

Organizations

Huntsman Cancer Institute at the University of Utah, Salt Lake City, UT, University of Kansas Cancer Center, Westwood, KS

Research Funding

No funding sources reported

Background: Cancer management requires a multidisciplinary approach, often involving consultation with medical subspecialists. With the advent of artificial intelligence (AI) tools such as ChatGPT, it is hypothesized that these tools may help expedite the consultation process. This study aimed to assess the efficacy of ChatGPT in providing guideline-based subspecialty recommendations for managing pts with metastatic GU cancer.

Methods: In this single-institution, IRB-approved, retrospective, proof-of-concept study, pts with metastatic GU cancer seen over the past 3 years were screened, and those with at least one referral to a subspecialty clinic were randomly selected. ChatGPT 3.5 was given the most recent clinic note that triggered the subspecialty consultation and was asked to provide an assessment and plan. Two physicians independently assessed the accuracy of the diagnoses made by ChatGPT and by the subspecialty physicians. The primary outcome was the consistency of ChatGPT's recommendations with those of the subspecialty physicians. Secondary outcomes included the potential time saved by using ChatGPT and a comparison of medical decision-making (MDM) complexity levels between ChatGPT and the subspecialty physicians.

Results: A total of 39 pts were included. Their primary diagnoses included prostate cancer (51.3%), bladder cancer (23.1%), and kidney cancer (15.4%). The referred subspecialty clinics included cardiology (33.3%), hematology (17.9%), hepatology (2.6%), hospice (10.3%), neurology (12.8%), pulmonary (15.4%), and rheumatology (7.7%). The average waiting time to be seen in a subspecialty clinic was 44.9 days (SD = 42.4). Of the 39 patient charts reviewed by ChatGPT, 30 (76.9%) yielded the same diagnosis as the consulting subspecialists. ChatGPT offered an average of 8.2 diagnoses per case, compared with 3.4 for subspecialty physicians (p < 0.0001). The accuracy of ChatGPT's diagnoses was the same as, higher than, and lower than that of the human physicians in 10 (33.3%), 3 (10%), and 17 (56.7%) cases, respectively. Treatment plans were consistent between ChatGPT and physicians in 18 cases (46.2%). ChatGPT recommended additional workup in 32 cases (85.1%). The average number of words in ChatGPT's consultation notes was 362.7 (SD = 72.9), significantly greater than that of the subspecialty physicians (mean = 224.7; p < 0.0001).

Conclusions: These hypothesis-generating data suggest the potential utility of ChatGPT in assisting medical oncologists with the management of increasingly complex pts with metastatic cancer. Further studies are needed to validate our findings.
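As a rough illustration of the per-chart step described in Methods (submitting a clinic note and requesting an assessment and plan), the workflow could be scripted against the OpenAI API as sketched below. This is not the authors' tooling: the abstract only states that ChatGPT 3.5 was used, so the model name, prompt wording, and the mock_consult helper are all assumptions for illustration.

```python
# Minimal sketch of the consultation step, assuming API access via the
# openai Python package (>= 1.0). Model choice and prompt wording are
# hypothetical; the study may have used the ChatGPT web interface.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def mock_consult(clinic_note: str, specialty: str) -> str:
    """Ask the model for an assessment and plan, mimicking a subspecialty consult."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "system",
                "content": (
                    f"You are a consulting {specialty} physician. Given the "
                    "referring oncologist's clinic note, provide an assessment "
                    "(differential diagnoses) and a plan (recommended workup "
                    "and management)."
                ),
            },
            {"role": "user", "content": clinic_note},
        ],
        temperature=0,  # favor reproducible output for chart review
    )
    return response.choices[0].message.content


# Usage: note_text would be the de-identified clinic note that triggered the
# referral; the output would then be graded against the subspecialist's note.
# print(mock_consult(note_text, "cardiology"))
```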

Disclaimer

The material on this page is © 2024 American Society of Clinical Oncology; all rights reserved. Licensing is available upon request. For more information, please contact licensing@asco.org.

Abstract Details

Meeting

2024 ASCO Genitourinary Cancers Symposium

Session Type

Poster Session

Session Title

Poster Session A: Prostate Cancer

Track

Prostate Cancer - Advanced, Prostate Cancer - Localized

Sub Track

Other

Citation

J Clin Oncol 42, 2024 (suppl 4; abstr 227)

DOI

10.1200/JCO.2024.42.4_suppl.227

Abstract #

227

Poster Bd #

K15
