[1] P. Bachman, O. Alsharif, and D. Precup. Learning with pseudo-ensembles. In Advances in Neural Information Processing Systems, pages 3365–3373, 2014.
[2] W. Bai, O. Oktay, M. Sinclair, H. Suzuki, M. Rajchl, G. Tarroni, B. Glocker, A. King, P. M. Matthews, and D. Rueckert. Semi-supervised learning for network-based cardiac MR image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 253–260. Springer, 2017.
[3] M. Belkin, P. Niyogi, and V. Sindhwani. Manifold regularization: A geometric framework for learning from labeled and unlabeled examples. Journal of Machine Learning Research, 7(Nov):2399–2434, 2006.
[4] A. Blake, R. Curwen, and A. Zisserman. A framework for spatiotemporal control in the tracking of visual contours. International Journal of Computer Vision, 11(2):127–145, 1993.
[5] A. Blum and T. Mitchell. Combining labeled and unlabeled data with co-training. In Proceedings of the Eleventh Annual Conference on Computational Learning Theory, pages 92–100. ACM, 1998.
[6] J. Carreira and A. Zisserman. Quo vadis, action recognition? A new model and the Kinetics dataset. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 4724–4733. IEEE, 2017.
[7] D.-D. Chen, W. Wang, W. Gao, and Z.-H. Zhou. Tri-net for semi-supervised deep learning. In Proceedings of the 27th International Joint Conference on Artificial Intelligence, pages 2014–2020. AAAI Press, 2018.
[8] L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille. DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(4):834–848, 2018.
[9] L.-C. Chen, G. Papandreou, F. Schroff, and H. Adam. Rethinking atrous convolution for semantic image segmentation. arXiv preprint arXiv:1706.05587, 2017.
[10] V. Cheplygina, M. de Bruijne, and J. P. Pluim. Not-so-supervised: a survey of semi-supervised, multi-instance, and transfer learning in medical image analysis. arXiv preprint arXiv:1804.06353, 2018.
[11] N. Dong, M. Kampffmeyer, X. Liang, Z. Wang, W. Dai, and E. Xing. Unsupervised domain adaptation for automatic estimation of cardiothoracic ratio. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 544–552. Springer, 2018.
[12] Y. Gal. Uncertainty in deep learning. PhD thesis, University of Cambridge, 2016.
[13] Y. Gal and Z. Ghahramani. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In International Conference on Machine Learning, pages 1050–1059, 2016.
[14] I. Goodfellow, J. Shlens, and C. Szegedy. Explaining and harnessing adversarial examples. In International Conference on Learning Representations, 2015.
[15] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
[16] X. He, R. S. Zemel, and M. Á. Carreira-Perpiñán. Multiscale conditional random fields for image labeling. In Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2004), volume 2, pages II–II. IEEE, 2004.
[17] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4700–4708, 2017.
[18] J. Jiang, Y.-C. Hu, N. Tyagi, P. Zhang, A. Rimner, G. S. Mageras, J. O. Deasy, and H. Veeraraghavan. Tumor-aware, adversarial domain adaptation from CT to MRI for lung cancer segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 777–785. Springer, 2018.
[19] A. Kendall and Y. Gal. What uncertainties do we need in Bayesian deep learning for computer vision? In Advances in Neural Information Processing Systems, pages 5574–5584, 2017.
[20] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.
[21] S. Laine and T. Aila. Temporal ensembling for semi-supervised learning. In International Conference on Learning Representations, 2017.
[22] X. Li, H. Chen, X. Qi, Q. Dou, C.-W. Fu, and P. A. Heng. H-DenseUNet: Hybrid densely connected UNet for liver and liver tumor segmentation from CT volumes. IEEE Transactions on Medical Imaging, 2017.
[23] X. Li, L. Yu, H. Chen, C.-W. Fu, and P.-A. Heng. Semi-supervised skin lesion segmentation via transformation consistent self-ensembling model. In British Machine Vision Conference, 2018.
[24] X. Li, L. Yu, H. Chen, C.-W. Fu, and P.-A. Heng. Transformation consistent self-ensembling model for semi-supervised medical image segmentation. arXiv preprint arXiv:1903.00348, 2019.
[25] S. Liu, D. Xu, S. K. Zhou, O. Pauly, S. Grbic, T. Mertelmeier, J. Wicklein, A. Jerebko, W. Cai, and D. Comaniciu. 3D anisotropic hybrid network: Transferring convolutional features from 2D images to 3D anisotropic volumes. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 851–858. Springer, 2018.
[26] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3431–3440, 2015.
[27] F. Milletari, N. Navab, and S.-A. Ahmadi. V-Net: Fully convolutional neural networks for volumetric medical image segmentation. In 2016 Fourth International Conference on 3D Vision (3DV), pages 565–571. IEEE, 2016.
[28] T. Miyato, S.-i. Maeda, S. Ishii, and M. Koyama. Virtual
adversarial training: a regularization method for supervised