
Multimodal Self-supervised Learning for Medical Image Analysis

  • Conference paper
  • Information Processing in Medical Imaging (IPMI 2021)

Abstract

Self-supervised learning approaches leverage unlabeled samples to acquire generic knowledge about different concepts, hence allowing for annotation-efficient downstream task learning. In this paper, we propose a novel self-supervised method that leverages multiple imaging modalities. We introduce the multimodal puzzle task, which facilitates representation learning from multiple image modalities. The learned modality-agnostic representations are obtained by confusing image modalities at the data level. Combined with the Sinkhorn operator, which lets us formulate puzzle solving as permutation matrix inference rather than classification, this allows multimodal puzzles of varying complexity to be solved efficiently. In addition, we propose to use generation techniques for multimodal data augmentation during self-supervised pretraining, rather than for downstream tasks directly. This aims to circumvent quality issues associated with synthetic images, while improving data efficiency and the representations learned by self-supervised methods. Our experimental results show that solving our multimodal puzzles yields better semantic representations than treating each modality independently. They also highlight the benefits of exploiting synthetic images for self-supervised pretraining. We showcase our approach on three segmentation tasks, where we outperform many existing solutions and achieve results competitive with the state of the art.
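To make the permutation-inference formulation concrete, the snippet below is a minimal, hypothetical PyTorch sketch of Sinkhorn normalization turning raw patch-to-position scores into a soft (approximately doubly stochastic) permutation matrix; the function and variable names are illustrative and do not reproduce the authors' implementation.

```python
import torch

def sinkhorn(log_scores: torch.Tensor, n_iters: int = 5) -> torch.Tensor:
    """Alternately normalize rows and columns in log space so that
    exp(log_scores) approaches a doubly stochastic matrix."""
    for _ in range(n_iters):
        log_scores = log_scores - torch.logsumexp(log_scores, dim=-1, keepdim=True)  # rows sum to 1
        log_scores = log_scores - torch.logsumexp(log_scores, dim=-2, keepdim=True)  # columns sum to 1
    return log_scores.exp()

# Illustrative usage: a backbone (not shown) scores each of N shuffled patches,
# possibly drawn from different modalities, against each of the N target positions.
batch, n_patches = 4, 9
raw_scores = torch.randn(batch, n_patches, n_patches)   # stand-in for network output
soft_perm = sinkhorn(raw_scores)                         # approximately doubly stochastic
# The soft permutation can be used to reorder patch embeddings, and a
# reconstruction-style loss against the ground-truth ordering trains the model.
```

Because the output is a relaxed permutation matrix rather than a class index over a fixed set of shufflings, this formulation presumably scales to puzzles with many more admissible permutations than a classification head could handle.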


Notes

  1. We evaluate on realistic data in this section, using a 5-fold cross-validation approach.

  2. In fine-tuning, we use the same multimodal data across all models.

  3. Our aim is to benchmark our method against a proven image registration method.

  4. Alternatively, all modalities can be generated from each other, requiring many GANs (see the sketch below).
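To make note 4 concrete, the hypothetical count below assumes four MR modalities (e.g. the T1, T1ce, T2 and FLAIR sequences of a brain exam); the numbers are illustrative and not taken from the paper.

```python
# Hypothetical count of translation networks needed if every modality were
# generated from every other one, as note 4 cautions against.
n_modalities = 4                                   # e.g. T1, T1ce, T2, FLAIR
directional = n_modalities * (n_modalities - 1)    # one generator per ordered pair
cyclegans = directional // 2                       # a CycleGAN covers both directions of a pair
print(directional, cyclegans)                      # -> 12 6
```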


Author information

Corresponding author

Correspondence to Aiham Taleb.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Taleb, A., Lippert, C., Klein, T., Nabi, M. (2021). Multimodal Self-supervised Learning for Medical Image Analysis. In: Feragen, A., Sommer, S., Schnabel, J., Nielsen, M. (eds) Information Processing in Medical Imaging. IPMI 2021. Lecture Notes in Computer Science, vol. 12729. Springer, Cham. https://doi.org/10.1007/978-3-030-78191-0_51


  • DOI: https://doi.org/10.1007/978-3-030-78191-0_51

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-78190-3

  • Online ISBN: 978-3-030-78191-0

  • eBook Packages: Computer Science, Computer Science (R0)
