Abstract
Medical image segmentation models degrade under domain shift: changes in scanner, acquisition protocol, or institution alter image appearance (style) and can sharply reduce performance at test time. Conventionally trained networks often fail to remain robust across such style variations. In this work, we propose MaxStyle, an adversarial style composition approach for robust medical image segmentation.
Our method augments training with adversarially generated style variations so the model learns to cope with unseen appearances. The framework consists of three key components: (1) a style composition module that synthesizes diverse imaging styles, (2) an adversarial mechanism that searches for styles the current model handles worst, and (3) a segmentation network trained against these hard examples to become robust to style variation.
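To make the style-composition idea concrete, here is a minimal NumPy sketch of one common way to synthesize a new style: mixing the channel-wise mean and standard deviation of two feature maps (as in AdaIN/MixStyle-type augmentation) and perturbing the mixed statistics with noise. Function names, the mixing formula, and the noise model are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def mix_styles(feat, other, lam=0.5, noise_scale=0.1, rng=None):
    """Compose a new style for `feat` by mixing its channel-wise
    statistics with those of `other`, then perturbing with noise.

    feat, other: arrays of shape (C, H, W).
    lam: mixing weight between the two styles.
    noise_scale: magnitude of the random style perturbation (assumed form).
    """
    rng = np.random.default_rng(0) if rng is None else rng
    eps = 1e-6
    # Per-channel style statistics (mean and std over spatial dims)
    mu_a = feat.mean(axis=(1, 2), keepdims=True)
    sig_a = feat.std(axis=(1, 2), keepdims=True) + eps
    mu_b = other.mean(axis=(1, 2), keepdims=True)
    sig_b = other.std(axis=(1, 2), keepdims=True) + eps
    # Mix the statistics, then inject noise to widen the style space
    mu = lam * mu_a + (1 - lam) * mu_b \
        + noise_scale * rng.standard_normal(mu_a.shape)
    sig = (lam * sig_a + (1 - lam) * sig_b) \
        * (1.0 + noise_scale * rng.standard_normal(sig_a.shape))
    # Remove the original style, apply the composed one (content unchanged)
    normalized = (feat - mu_a) / sig_a
    return normalized * sig + mu
```

In an adversarial setting, `lam` and the noise terms would be optimized (e.g. by gradient ascent on the segmentation loss) rather than sampled, so that training focuses on the worst-case styles.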
We evaluate our approach on multiple medical imaging datasets with style variations, including cross-scanner and cross-institution scenarios. Experimental results show that MaxStyle significantly improves out-of-domain segmentation robustness over competing methods while preserving in-domain accuracy.
The proposed framework thus offers a practical step toward robust medical image segmentation, yielding more reliable predictions across imaging protocols and, in turn, greater suitability for clinical deployment.
BibTeX
@inproceedings{chen2022ro,
title={MaxStyle: Adversarial Style Composition for Robust Medical Image Segmentation},
author={Chen, Chen and Li, Zeju and Ouyang, Cheng and Sinclair, Matt and Bai, Wenjia and Rueckert, Daniel},
booktitle={International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI 2022)},
year={2022},
doi={10.1007/978-3-031-16443-9_15}
}