Abstract
Domain shifts pose significant challenges for deploying medical image analysis models in clinical settings, where performance degradation can have serious consequences. Traditional approaches for estimating model performance under domain shifts rely on global confidence scores, which may not accurately reflect how performance varies across classes. In this work, we propose a novel approach that estimates model performance under domain shifts using class-specific confidence scores.
Our method introduces a class-specific confidence estimation framework that provides more granular performance predictions for each class. The approach consists of two main components: (1) a class-specific confidence estimator that learns to predict performance for individual classes, and (2) a domain shift detector that identifies when the model is operating in a shifted domain.
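As a rough illustration of the first component, per-class confidence can be derived from a segmentation model's softmax output by averaging the predicted probability over the pixels assigned to each class. This is a minimal sketch of that idea, not the paper's actual estimator; the function name and the NaN convention for unpredicted classes are our own choices:

```python
import numpy as np

def class_specific_confidence(probs):
    """Mean softmax confidence over the pixels predicted as each class.

    probs: softmax output of shape (C, H, W), one channel per class.
    Returns a length-C array; NaN for classes the model never predicts,
    which itself can signal a shifted domain for minority classes.
    """
    num_classes = probs.shape[0]
    pred = probs.argmax(axis=0)               # hard segmentation map (H, W)
    scores = np.full(num_classes, np.nan)
    for c in range(num_classes):
        mask = pred == c                      # pixels assigned to class c
        if mask.any():
            scores[c] = probs[c][mask].mean() # average confidence on those pixels
    return scores
```

A low score for one class alongside high scores for the others is exactly the kind of class-level signal that a single global confidence average would wash out.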
We evaluate our approach on multiple medical image segmentation datasets with known domain shifts, including cross-scanner and cross-institution scenarios. Experimental results demonstrate that our class-specific confidence scores provide more accurate performance estimates compared to global confidence measures. The method shows particular effectiveness for minority classes, which are often most affected by domain shifts.
By providing more reliable, class-level confidence measures, the proposed framework advances domain shift detection and performance estimation and could support the safer deployment of medical AI systems.
BibTeX
@inproceedings{li2022ds,
title={Estimating Model Performance under Domain Shifts with Class-Specific Confidence Scores},
author={Li, Zeju and Kamnitsas, Konstantinos and Islam, Mobarakol and Chen, Chen and Glocker, Ben},
booktitle={International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI 2022)},
year={2022},
doi={10.1007/978-3-031-16449-1_66}
}