Performance validation of an artificial intelligence-powered PD-L1 combined positive score analyzer in six cancer types.

Authors

Taebum Lee, Soo Ick Cho, Sangjoon Choi, Sukjun Kim, Wonkyung Jung, Dasom Lee, Seungje Lee, Mohammad Mostafavi, Seonwook Park, Jinhee Lee, Jaewoong Shin, Seokhwi Kim, Kyunghyun Paeng, Chan-Young Ock

Organizations

Department of Pathology, Chonnam National University Hospital, Gwangju, South Korea; Oncology, Lunit, Seoul, South Korea; Department of Pathology and Translational Genomics, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, South Korea; Department of Pathology, Ajou University School of Medicine, Suwon, South Korea; Lunit Inc., Seoul, South Korea

Research Funding

No funding received.

Background: Programmed death-ligand 1 (PD-L1) expression is a predictive marker for immune checkpoint inhibitor (ICI) treatment in various cancer types. Evaluation of PD-L1 expression by the combined positive score (CPS) correlates with immunotherapeutic response in biliary tract, colorectal, liver, pancreatic, prostate, and gastric cancers. This study aimed to assess the performance of an artificial intelligence (AI)-powered PD-L1 CPS analyzer in these six cancer types and to investigate whether AI assistance could improve concordance among pathologists.

Methods: Lunit SCOPE PD-L1 CPS was developed with 1.51 × 10⁶ tumor cells and 8.73 × 10⁵ immune cells from 2,372 PD-L1-stained whole-slide images (WSI) or tissue microarray cores from various cancer and normal tissues. The algorithm consists of tissue area segmentation and cell detection AI models. The AI models calculate the CPS by detecting tumor cells over the tumor area and immune cells over the tumor and adjacent area. Model performance was validated on 135 PD-L1-stained WSIs covering the six cancer types, which were interpreted by three pathologists. A CPS classification (≥1 or <1) shared by two or more pathologists was considered the consensus. Each pathologist then re-evaluated, with AI assistance (including visualization and scoring), the WSIs for which their classification disagreed with the AI model.

Results: Of 135 WSIs, 122 (90.4%) were classified into the same CPS subgroup by all three pathologists. The CPS ≥1 and <1 subgroups comprised 67 (49.6%) and 68 (50.4%) cases, respectively. The overall percent agreement (OPA) of the AI model with the pathologists' consensus was 84.4%, ranging from 78.3% (liver) to 91.3% (biliary tract). AI-assisted re-evaluation was performed by the three pathologists on 17, 19, and 22 WSIs, respectively. After the AI-assisted revision, unanimous agreement increased to 92.6% (125 cases), and the OPA of the AI model with the pathologists' consensus increased to 91.9%, ranging from 82.6% (liver) to 100.0% (pancreas).

Conclusions: This study shows that an AI-powered PD-L1 CPS analyzer can evaluate CPS in the six cancer types analyzed here at a level comparable to pathologists. AI assistance can improve the concordance of pathologists' CPS interpretation.
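For illustration, the sketch below shows how a CPS could be computed from AI cell detections, assuming the standard CPS definition (PD-L1-positive tumor cells plus PD-L1-positive immune cells, divided by viable tumor cells, multiplied by 100 and capped at 100). The function and variable names are hypothetical and do not represent the published Lunit SCOPE PD-L1 implementation.

```python
# Minimal sketch of a CPS calculation from detected cell counts.
# Names and counting rules are illustrative assumptions, not the
# Lunit SCOPE PD-L1 algorithm itself.

def combined_positive_score(pdl1_pos_tumor_cells: int,
                            pdl1_pos_immune_cells: int,
                            viable_tumor_cells: int) -> float:
    """Standard CPS: (PD-L1+ tumor cells + PD-L1+ immune cells)
    / viable tumor cells x 100, conventionally capped at 100."""
    if viable_tumor_cells == 0:
        raise ValueError("CPS is undefined without viable tumor cells")
    raw = 100.0 * (pdl1_pos_tumor_cells + pdl1_pos_immune_cells) / viable_tumor_cells
    return min(100.0, raw)

# Example: 40 PD-L1+ tumor cells and 25 PD-L1+ immune cells over 3,000 tumor cells
cps = combined_positive_score(40, 25, 3000)   # ~2.17
cps_subgroup = ">=1" if cps >= 1 else "<1"    # cut-off used in this study
```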

Concordance rate of pathologists' CPS evaluations and OPA of the AI model to the pathologists' consensus, before and after AI assistance.

Cancer type               Concordance rate of pathologists     OPA of AI model to pathologists' consensus
                          (before / after AI assistance)       (before / after AI assistance)
Biliary tract (n = 23)    95.7% / 95.7%                        91.3% / 91.3%
Colorectum (n = 23)       91.3% / 87.0%                        82.6% / 95.7%
Liver (n = 23)            91.3% / 91.3%                        78.3% / 82.6%
Pancreas (n = 21)         81.0% / 95.2%                        85.7% / 100.0%
Prostate (n = 22)         95.5% / 90.9%                        86.4% / 86.4%
Stomach (n = 23)          87.0% / 95.7%                        82.6% / 95.7%
Total (n = 135)           90.4% / 92.6%                        84.4% / 91.9%
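For reference, the two agreement measures in the table can be reproduced from per-slide binary CPS calls (≥1 vs. <1) as sketched below. The helper names are hypothetical and the per-slide data are not part of the abstract; this only illustrates how a three-rater consensus, the unanimous concordance rate, and the OPA are typically computed.

```python
# Illustrative computation of consensus, concordance rate, and OPA
# from per-slide binary CPS calls; data and names are hypothetical.
from collections import Counter

def consensus(calls):
    """Majority call among the three pathologists (two or more agreeing)."""
    return Counter(calls).most_common(1)[0][0]

def unanimous_rate(per_slide_pathologist_calls):
    """Concordance rate: share of slides where all three pathologists agree."""
    unanimous = sum(len(set(calls)) == 1 for calls in per_slide_pathologist_calls)
    return 100.0 * unanimous / len(per_slide_pathologist_calls)

def overall_percent_agreement(model_calls, consensus_calls):
    """OPA: share of slides where the AI call matches the pathologist consensus."""
    agree = sum(m == c for m, c in zip(model_calls, consensus_calls))
    return 100.0 * agree / len(consensus_calls)
```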


Abstract Details

Meeting

2023 ASCO Annual Meeting

Session Type

Publication Only

Session Title

Publication Only: Care Delivery and Regulatory Policy

Track

Care Delivery and Quality Care

Sub Track

Clinical Informatics/Advanced Algorithms/Machine Learning

Citation

J Clin Oncol 41, 2023 (suppl 16; abstr e13553)

DOI

10.1200/JCO.2023.41.16_suppl.e13553

Abstract #

e13553
