Self-paced contrastive learning for semi-supervised medical image segmentation with meta-labels
The contrastive pre-training of a recognition model on a large dataset of unlabeled data
often boosts the model's performance on downstream tasks like image classification …
Dense contrastive learning for self-supervised visual pre-training
To date, most existing self-supervised learning methods are designed and optimized for
image classification. These pre-trained models can be sub-optimal for dense prediction …
Leverage your local and global representations: A new self-supervised learning strategy
Self-supervised learning (SSL) methods aim to learn view-invariant representations by
maximizing the similarity between the features extracted from different crops of the same …
Survey on self-supervised learning: auxiliary pretext tasks and contrastive learning methods in imaging
S Albelwi - Entropy, 2022 - mdpi.com
Although deep learning algorithms have achieved significant progress in a variety of
domains, they require costly annotations on huge datasets. Self-supervised learning (SSL) …
DABS: A domain-agnostic benchmark for self-supervised learning
Self-supervised learning algorithms, including BERT and SimCLR, have enabled significant
strides in fields like natural language processing, computer vision, and speech processing …
SubTab: Subsetting features of tabular data for self-supervised representation learning
T Ucar, E Hajiramezanali, … - Advances in Neural …, 2021 - proceedings.neurips.cc
Self-supervised learning has been shown to be very effective in learning useful
representations, and yet much of the success is achieved in data types such as images …
Dual-domain self-supervised learning for accelerated non-Cartesian MRI reconstruction
While enabling accelerated acquisition and improved reconstruction accuracy, current deep
MRI reconstruction networks are typically supervised, require fully sampled data, and are …
VICReg: Variance-invariance-covariance regularization for self-supervised learning
Recent self-supervised methods for image representation learning are based on maximizing
the agreement between embedding vectors from different views of the same image. A trivial …
Self pre-training with masked autoencoders for medical image classification and segmentation
Masked Autoencoder (MAE) has recently been shown to be effective in pre-training Vision
Transformers (ViT) for natural image analysis. By reconstructing full images from partially …
Self-supervised learning of remote sensing scene representations using contrastive multiview coding
V Stojnic, V Risojevic - … of the IEEE/CVF Conference on …, 2021 - openaccess.thecvf.com
In recent years self-supervised learning has emerged as a promising candidate for
unsupervised representation learning. In the visual domain its applications are mostly …