Abstract
Breast cancer diagnosis often draws on multiple imaging modalities, with MRI and ultrasound (US) providing complementary views of tumor characteristics. However, effectively combining the two modalities remains challenging because they differ substantially in image appearance and feature representation. In this work, we propose a novel approach for breast tumor classification that disentangles modality-specific features while still leveraging the information shared across modalities.
Our framework introduces a modality disentanglement module that separates modality-specific and modality-invariant features from MRI and ultrasound images. The approach consists of three key components: (1) a modality-specific encoder that extracts unique features from each modality, (2) a modality-invariant encoder that captures shared characteristics, and (3) a fusion module that combines both feature types for final classification.
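As a concrete illustration, the following is a minimal PyTorch sketch of this three-component layout. It is an assumption for exposition, not the authors' implementation: the encoder architecture, the feature dimension feat_dim, and the concatenation-based fusion head are hypothetical stand-ins for the components described above.

import torch
import torch.nn as nn

def make_encoder(feat_dim: int) -> nn.Module:
    # Hypothetical lightweight CNN encoder; the paper's backbones may differ.
    return nn.Sequential(
        nn.Conv2d(1, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(64, feat_dim),
    )

class DisentangledFusionNet(nn.Module):
    """Sketch: one modality-specific encoder per modality, one shared
    modality-invariant encoder, and a fusion head for classification."""

    def __init__(self, feat_dim: int = 128, num_classes: int = 2):
        super().__init__()
        self.mri_specific = make_encoder(feat_dim)  # (1) MRI-specific features
        self.us_specific = make_encoder(feat_dim)   # (1) US-specific features
        self.invariant = make_encoder(feat_dim)     # (2) shared across modalities
        # (3) fusion module: concatenate specific + invariant features, classify
        self.classifier = nn.Sequential(
            nn.Linear(4 * feat_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, num_classes),
        )

    def forward(self, mri: torch.Tensor, us: torch.Tensor) -> torch.Tensor:
        f_mri = self.mri_specific(mri)   # unique to MRI
        f_us = self.us_specific(us)      # unique to US
        s_mri = self.invariant(mri)      # invariant encoder sees both modalities
        s_us = self.invariant(us)
        fused = torch.cat([f_mri, f_us, s_mri, s_us], dim=1)
        return self.classifier(fused)

# Usage sketch: single-channel MRI and US inputs of arbitrary spatial size.
model = DisentangledFusionNet()
logits = model(torch.randn(2, 1, 224, 224), torch.randn(2, 1, 224, 224))
print(logits.shape)  # torch.Size([2, 2])

In practice, disentanglement of this kind is typically enforced with auxiliary training objectives (e.g., encouraging the invariant features of paired MRI/US images to agree while keeping specific and invariant features distinct); the paper's exact losses are not reproduced in this sketch.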
We evaluate our method on a comprehensive dataset of breast tumor cases with paired MRI and ultrasound imaging. Experimental results show that modality disentanglement yields significantly higher classification accuracy than conventional multi-modal fusion methods, with robust performance across tumor types and imaging conditions.
The proposed framework represents a significant advancement in multi-modal breast cancer diagnosis, providing more accurate classification while maintaining interpretability of modality-specific contributions.
BibTeX
@article{qiao2022breast,
title={Breast Tumor Classification Based on MRI-US Images by Disentangling Modality Features},
author={Qiao, Mengyun and Liu, Chengcheng and Li, Zeju and Zhou, Jin and Xiao, Qin and Zhou, Shichong and Chang, Cai and Gu, Yajia and Guo, Yi and Wang, Yuanyuan},
journal={IEEE Journal of Biomedical and Health Informatics},
year={2022},
publisher={IEEE},
doi={10.1109/JBHI.2022.3140236}
}