
Knowledge distillation from multi-modal to mono-modal segmentation networks

Abstract: The joint use of multiple imaging modalities for medical image segmentation has been widely studied in recent years. Fusing information from different modalities has been shown to improve segmentation accuracy, compared to mono-modal segmentation, in several applications. However, acquiring multiple modalities is usually not possible in a clinical setting, owing to the limited availability of physicians and scanners and to constraints on cost and scan time; most of the time, only one modality is acquired. In this paper, we propose KD-Net, a framework to transfer knowledge from a trained multi-modal network (teacher) to a mono-modal one (student). The proposed method is an adaptation of the generalized distillation framework, where the student network is trained on a subset (1 modality) of the teacher's inputs (n modalities). We illustrate the effectiveness of the proposed framework in brain tumor segmentation with the BraTS 2018 dataset. Using different architectures, we show that the student network effectively learns from the teacher and always outperforms the baseline mono-modal network in terms of segmentation accuracy.
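
To make the setup described in the abstract concrete, below is a minimal PyTorch-style sketch of generalized distillation applied to segmentation: a mono-modal student is supervised both by the ground-truth labels and by the softened outputs of a frozen multi-modal teacher. The stand-in networks, tensor shapes, modality layout, loss weighting (alpha) and temperature are illustrative assumptions, not the exact KD-Net formulation from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

def kd_segmentation_loss(student_logits, teacher_logits, target,
                         alpha=0.5, temperature=2.0):
    """Generalized-distillation loss sketch: supervised term plus soft-label term."""
    # Supervised term: cross-entropy of the student against ground-truth labels.
    supervised = F.cross_entropy(student_logits, target)
    # Distillation term: KL divergence between the softened voxel-wise class
    # distributions of the multi-modal teacher and the mono-modal student.
    distill = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2
    return alpha * supervised + (1.0 - alpha) * distill

# Stand-in networks: a real setup would use full segmentation architectures
# (e.g. 3D U-Nets); single conv layers just keep the sketch runnable.
n_modalities, n_classes = 4, 4
teacher = nn.Conv3d(n_modalities, n_classes, kernel_size=3, padding=1)
student = nn.Conv3d(1, n_classes, kernel_size=3, padding=1)

x = torch.randn(2, n_modalities, 32, 32, 32)      # all n modalities stacked on channels
y = torch.randint(0, n_classes, (2, 32, 32, 32))  # ground-truth label map

with torch.no_grad():                  # teacher is already trained and frozen
    teacher_logits = teacher(x)        # teacher sees every modality
student_logits = student(x[:, :1])     # student sees a single modality
loss = kd_segmentation_loss(student_logits, teacher_logits, y)
loss.backward()

The single convolutions above only stand in for the teacher and student so the example stays self-contained; the paper evaluates the framework with different full segmentation architectures.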


https://hal.telecom-paris.fr/hal-02899529
Contributor: Matthis Maillard
Submitted on: Wednesday, July 15, 2020 - 11:45:44 AM
Last modification on: Monday, August 17, 2020 - 2:22:03 PM

File

paper2614.pdf
Files produced by the author(s)

Identifiers

  • HAL Id: hal-02899529, version 1

Citation

Minhao Hu, Matthis Maillard, Ya Zhang, Tommaso Ciceri, Giammarco Barbera, et al. Knowledge distillation from multi-modal to mono-modal segmentation networks. MICCAI, Oct 2020, Lima, Peru. ⟨hal-02899529⟩
