Abstract
Semantic segmentation in medical imaging often suffers from poor background class representation, which can significantly degrade segmentation accuracy. Traditional approaches treat the background as a single homogeneous class, failing to capture the rich contextual information present in medical images. In this work, we propose Context Label Learning, a novel framework that improves background class representations by learning context-aware labels.
Our approach introduces a context-aware labeling mechanism that dynamically adapts background representations based on the surrounding anatomical structures. The framework consists of two key components: (1) a context encoder that captures spatial relationships between foreground and background regions, and (2) a label refinement module that generates context-specific background labels.
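To make the two-component design concrete, the following is a minimal PyTorch sketch of one plausible realization. The class names (ContextEncoder, LabelRefiner), the network shapes, and the number of context subclasses are illustrative assumptions for this sketch, not the authors' reference implementation.

# Minimal sketch, assuming a 2D single-channel input (e.g., an MRI slice).
# All names and hyperparameters here are hypothetical.
import torch
import torch.nn as nn

class ContextEncoder(nn.Module):
    """Encodes features that capture spatial relationships between
    foreground and background regions (component 1)."""
    def __init__(self, in_channels=1, feat_channels=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, feat_channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat_channels, feat_channels, 3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.conv(x)

class LabelRefiner(nn.Module):
    """Maps encoder features to logits over the foreground classes plus
    several context-specific background subclasses (component 2)."""
    def __init__(self, feat_channels=32, num_fg=1, num_context_labels=4):
        super().__init__()
        self.head = nn.Conv2d(feat_channels, num_fg + num_context_labels, 1)

    def forward(self, feats):
        return self.head(feats)

# Usage: each background pixel is assigned its highest-scoring context
# subclass during training; all subclasses collapse back to a single
# "background" label at inference time.
x = torch.randn(2, 1, 64, 64)            # batch of two 64x64 slices
feats = ContextEncoder()(x)
logits = LabelRefiner()(feats)           # shape (2, 1 + 4, 64, 64)
bg_subclass = logits[:, 1:].argmax(1)    # per-pixel context label (channel 0 = foreground)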
We evaluate our method on multiple medical image segmentation datasets, including brain tumor segmentation and multi-organ segmentation. Experimental results demonstrate that Context Label Learning significantly improves segmentation accuracy, particularly for challenging cases where background context is crucial for accurate delineation.
The proposed framework provides a more principled approach to handling background classes in medical image segmentation, leading to improved performance and better clinical interpretability.
BibTeX
@article{li2022colab,
  title={Context Label Learning: Improving Background Class Representations in Semantic Segmentation},
  author={Li, Zeju and Kamnitsas, Konstantinos and Ouyang, Cheng and Chen, Chen and Glocker, Ben},
  journal={IEEE Transactions on Medical Imaging},
  year={2023},
  publisher={IEEE},
  doi={10.1109/TMI.2023.3242838}
}