Effect of an artificial intelligence–powered programmed death-ligand 1 combined positive score analyzer in urothelial cancer on inter-observer and inter-site variability.

Authors


Soo Ick Cho

Oncology, Lunit, Seoul, South Korea

Soo Ick Cho , Jeong Hwan Park , Kyu Sang Lee , Euno Choi , Wonkyung Jung , Sanghoon Song , Sukjun Kim , Jisoo Shin , Jeongun Ryu , Aaron Valero Puche , Biagio Brattoli , Seonwook Park , Kyunghyun Paeng , Chan-Young Ock

Organizations

Oncology, Lunit, Seoul, South Korea; Department of Pathology, Boramae Medical Center, Seoul, South Korea; Department of Pathology, Seoul National University Bundang Hospital, Seongnam, South Korea; Department of Pathology, Ewha Womans University Mokdong Hospital, Seoul, South Korea

Research Funding

No funding received.

Background: Programmed death-ligand 1 (PD-L1) is a predictive marker of response to immune checkpoint inhibitor treatment in urothelial carcinoma (UC). The combined positive score (CPS) is a representative method for evaluating the PD-L1 expression level in UC. However, inter-observer and inter-institution variation can disrupt accurate CPS evaluation. The purpose of this study is to assess the role of an artificial intelligence (AI)-powered PD-L1 CPS analyzer for UC in reducing inter-observer and inter-institution variability.

Methods: Lunit SCOPE PD-L1 CPS was developed with 4.94 × 10⁵ tumor cells and 4.17 × 10⁵ immune cells from 360 PD-L1-stained whole-slide images (WSIs) of UC from multiple institutions. The algorithm consists of a tissue-area segmentation AI model and a cell-detection AI model, which together calculate the CPS by detecting tumor cells over the tumor area and immune cells over the tumor and adjacent area. Three uropathologists from different university hospitals evaluated the CPS classification (≥10 or <10) of 543 PD-L1-stained WSIs of UC collected from their hospitals. For each slide, a CPS classification shared by ≥2 pathologists was considered the consensus. After a washout period, each pathologist re-evaluated the WSIs on which their reading disagreed with the AI model, this time referencing the AI model's inference.

Results: Of 543 WSIs, 446 (82.1%) were classified into the same CPS subgroup by all three uropathologists. Pathologists also showed a high degree of concordance with the consensus on WSIs from their own hospitals. With AI assistance, they re-evaluated 64, 73, and 75 WSIs, respectively, and changed the CPS classification for 47, 48, and 48 of them. After re-evaluation with AI assistance, the three uropathologists agreed on the same CPS classification for 510 WSIs (93.9%). The overall percentage agreement (OPA) of each pathologist with the consensus increased from 95.0%, 94.8%, and 92.3% to 98.7%, 98.3%, and 96.9% with AI assistance, and the OPA for WSIs from other institutions increased more than the OPA for WSIs from the pathologist's own hospital.

Conclusions: This study shows that an AI-powered PD-L1 CPS analyzer for UC can reduce inter-observer and inter-site variability. The result suggests that the AI model can help evaluate CPS in UC more accurately and reduce variation when pathologists analyze WSIs from unfamiliar institutions.
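The CPS itself follows the standard definition used for PD-L1 assays: the number of PD-L1-positive tumor cells plus PD-L1-positive immune cells, divided by the total number of viable tumor cells, multiplied by 100 and capped at 100. A minimal sketch of this computation and the ≥10 cutoff classification used in the study (function names are illustrative, not from the abstract):

```python
def combined_positive_score(pos_tumor_cells: int,
                            pos_immune_cells: int,
                            total_tumor_cells: int) -> float:
    """CPS = (PD-L1+ tumor cells + PD-L1+ immune cells) / viable tumor cells x 100.

    By convention the score is capped at 100.
    """
    if total_tumor_cells <= 0:
        raise ValueError("total_tumor_cells must be positive")
    cps = (pos_tumor_cells + pos_immune_cells) / total_tumor_cells * 100
    return min(cps, 100.0)


def classify_cps(cps: float, cutoff: float = 10.0) -> str:
    """Binary CPS classification at the cutoff used in this study (>=10 vs <10)."""
    return "CPS >= 10" if cps >= cutoff else "CPS < 10"
```

For example, a slide with 30 PD-L1-positive tumor cells and 20 PD-L1-positive immune cells over 400 viable tumor cells yields a CPS of 12.5, which falls in the ≥10 subgroup.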

The OPA of each pathologist to the consensus from three pathologists, before and after AI assistance.

Before / After AI assistance | Hospital A (n = 93) | Hospital B (n = 205) | Hospital C (n = 245) | Total (n = 543)
Hospital A pathologist       | 96.8% / 98.9%       | 92.7% / 98.5%       | 96.3% / 98.8%       | 95.0% / 98.7%
Hospital B pathologist       | 95.7% / 100.0%      | 95.6% / 97.1%       | 93.9% / 98.8%       | 94.8% / 98.3%
Hospital C pathologist       | 81.7% / 96.8%       | 90.2% / 98.0%       | 98.0% / 95.9%       | 92.3% / 96.9%
AI standalone                | 91.4%               | 89.8%               | 88.6%               | 89.5%
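The OPA values above are simple percentage agreement between a reader's binary CPS calls and the consensus calls. A minimal sketch of that metric, assuming classifications are encoded as comparable labels (the function name is illustrative):

```python
def overall_percent_agreement(reader_calls, consensus_calls) -> float:
    """Percentage of slides on which a reader's CPS classification
    matches the consensus classification."""
    if len(reader_calls) != len(consensus_calls):
        raise ValueError("call lists must be the same length")
    matches = sum(r == c for r, c in zip(reader_calls, consensus_calls))
    return 100.0 * matches / len(reader_calls)
```

For instance, a reader agreeing with the consensus on 3 of 4 slides has an OPA of 75.0%.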

Disclaimer

The material on this page is ©2024 American Society of Clinical Oncology; all rights reserved. Licensing is available upon request. For more information, please contact licensing@asco.org.

Abstract Details

Meeting

2023 ASCO Annual Meeting

Session Type

Publication Only

Session Title

Publication Only: Care Delivery and Regulatory Policy

Track

Care Delivery and Quality Care

Sub Track

Clinical Informatics/Advanced Algorithms/Machine Learning

Citation

J Clin Oncol 41, 2023 (suppl 16; abstr e13546)

DOI

10.1200/JCO.2023.41.16_suppl.e13546

Abstract #

e13546
