Self-paced contrastive learning for semi-supervised medical image segmentation with meta-labels

J Peng, P Wang, C Desrosiers… - Advances in Neural …, 2021 - proceedings.neurips.cc
The contrastive pre-training of a recognition model on a large dataset of unlabeled data
often boosts the model's performance on downstream tasks like image classification …

Dense contrastive learning for self-supervised visual pre-training

X Wang, R Zhang, C Shen… - Proceedings of the …, 2021 - openaccess.thecvf.com
To date, most existing self-supervised learning methods are designed and optimized for
image classification. These pre-trained models can be sub-optimal for dense prediction …

Leverage your local and global representations: A new self-supervised learning strategy

T Zhang, C Qiu, W Ke, S Süsstrunk… - Proceedings of the …, 2022 - openaccess.thecvf.com
Self-supervised learning (SSL) methods aim to learn view-invariant representations by
maximizing the similarity between the features extracted from different crops of the same …

Survey on self-supervised learning: auxiliary pretext tasks and contrastive learning methods in imaging

S Albelwi - Entropy, 2022 - mdpi.com
Although deep learning algorithms have achieved significant progress in a variety of
domains, they require costly annotations on huge datasets. Self-supervised learning (SSL) …

DABS: A domain-agnostic benchmark for self-supervised learning

A Tamkin, V Liu, R Lu, D Fein, C Schultz… - arXiv preprint arXiv …, 2021 - arxiv.org
Self-supervised learning algorithms, including BERT and SimCLR, have enabled significant
strides in fields like natural language processing, computer vision, and speech processing …

SubTab: Subsetting features of tabular data for self-supervised representation learning

T Ucar, E Hajiramezanali… - Advances in Neural …, 2021 - proceedings.neurips.cc
Self-supervised learning has been shown to be very effective in learning useful
representations, and yet much of the success is achieved in data types such as images …

Dual-domain self-supervised learning for accelerated non-Cartesian MRI reconstruction

B Zhou, J Schlemper, N Dey, SSM Salehi, K Sheth… - Medical Image …, 2022 - Elsevier
While enabling accelerated acquisition and improved reconstruction accuracy, current deep
MRI reconstruction networks are typically supervised, require fully sampled data, and are …

VICReg: Variance-invariance-covariance regularization for self-supervised learning

A Bardes, J Ponce, Y LeCun - arXiv preprint arXiv:2105.04906, 2021 - arxiv.org
Recent self-supervised methods for image representation learning are based on maximizing
the agreement between embedding vectors from different views of the same image. A trivial …
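The VICReg entry above names the three terms of its objective: an invariance term pulling paired embeddings together, a variance term keeping each embedding dimension spread out (avoiding the trivial collapsed solution the snippet alludes to), and a covariance term decorrelating dimensions. A minimal NumPy sketch of such a loss, following that description; the function name, weights, and epsilon are illustrative assumptions, not the paper's reference implementation:

```python
import numpy as np

def vicreg_loss(z_a, z_b, sim_w=25.0, var_w=25.0, cov_w=1.0, eps=1e-4):
    """Sketch of a variance-invariance-covariance loss on two batches
    of embeddings z_a, z_b with shape (batch, dim), one per view."""
    n, d = z_a.shape
    # Invariance: mean-squared distance between paired embeddings.
    sim = np.mean((z_a - z_b) ** 2)
    # Variance: hinge keeping each dimension's std above 1 (anti-collapse).
    std_a = np.sqrt(z_a.var(axis=0) + eps)
    std_b = np.sqrt(z_b.var(axis=0) + eps)
    var = np.mean(np.maximum(0.0, 1.0 - std_a)) + np.mean(np.maximum(0.0, 1.0 - std_b))
    # Covariance: penalize squared off-diagonal covariance entries,
    # decorrelating the embedding dimensions.
    za = z_a - z_a.mean(axis=0)
    zb = z_b - z_b.mean(axis=0)
    cov_a = (za.T @ za) / (n - 1)
    cov_b = (zb.T @ zb) / (n - 1)
    off_diag = lambda c: (c ** 2).sum() - (np.diag(c) ** 2).sum()
    cov = off_diag(cov_a) / d + off_diag(cov_b) / d
    return sim_w * sim + var_w * var + cov_w * cov
```

Identical, well-spread embeddings incur almost no penalty, while collapsed embeddings (every row equal) are penalized heavily by the variance term even though their invariance term is zero.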

Self pre-training with masked autoencoders for medical image classification and segmentation

L Zhou, H Liu, J Bae, J He, D Samaras… - 2023 IEEE 20th …, 2023 - ieeexplore.ieee.org
Masked Autoencoder (MAE) has recently been shown to be effective in pre-training Vision
Transformers (ViT) for natural image analysis. By reconstructing full images from partially …

Self-supervised learning of remote sensing scene representations using contrastive multiview coding

V Stojnic, V Risojevic - … of the IEEE/CVF Conference on …, 2021 - openaccess.thecvf.com
In recent years self-supervised learning has emerged as a promising candidate for
unsupervised representation learning. In the visual domain its applications are mostly …