Rethinking CNN models for audio classification

K Palanisamy, D Singhania, A Yao - arXiv preprint arXiv:2007.11154, 2020 - arxiv.org
In this paper, we show that ImageNet-pretrained standard deep CNN models can be used
as strong baseline networks for audio classification. Even though there is a significant …

Prior: Prototype representation joint learning from medical images and reports

P Cheng, L Lin, J Lyu, Y Huang… - Proceedings of the …, 2023 - openaccess.thecvf.com
Contrastive learning based vision-language joint pre-training has emerged as a successful
representation learning strategy. In this paper, we present a prototype representation …

Exploring the limits of large scale pre-training

S Abnar, M Dehghani, B Neyshabur… - arXiv preprint arXiv …, 2021 - arxiv.org
Recent developments in large-scale machine learning suggest that by scaling up data,
model size and training time properly, one might observe that improvements in pre-training …

Deep facial diagnosis: deep transfer learning from face recognition to facial diagnosis

B Jin, L Cruz, N Gonçalves - IEEE Access, 2020 - ieeexplore.ieee.org
The relationship between face and disease has been discussed for thousands of years,
which has given rise to the practice of facial diagnosis. The objective here is to explore the …

Review of the state of the art of deep learning for plant diseases: A broad analysis and discussion

RI Hasan, SM Yusuf, L Alzubaidi - Plants, 2020 - mdpi.com
Deep learning (DL) represents the golden era in the machine learning (ML) domain, and it
has gradually become the leading approach in many fields. It is currently playing a vital role …

3D self-supervised methods for medical imaging

A Taleb, W Loetzsch, N Danz… - Advances in neural …, 2020 - proceedings.neurips.cc
Self-supervised learning methods have witnessed a recent surge of interest after proving
successful in multiple application fields. In this work, we leverage these techniques, and we …

How does learning rate decay help modern neural networks?

K You, M Long, J Wang, MI Jordan - arXiv preprint arXiv:1908.01878, 2019 - arxiv.org
Learning rate decay (lrDecay) is a de facto technique for training modern neural
networks. It starts with a large learning rate and then decays it multiple times. It is empirically …
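The schedule this snippet describes (start with a large learning rate, decay it at multiple points during training) can be sketched as a step-decay function; the milestones and decay factor below are illustrative assumptions, not values taken from the paper:

```python
# Step learning-rate decay: begin with a large base rate and multiply
# it by a decay factor each time training passes an epoch milestone.
# Milestones (30, 60, 90) and factor 0.1 are illustrative choices only.
def step_decay_lr(base_lr, epoch, milestones=(30, 60, 90), factor=0.1):
    """Return the learning rate in effect at a given epoch."""
    drops = sum(1 for m in milestones if epoch >= m)  # milestones already passed
    return base_lr * factor ** drops

# Example: with base_lr=0.1 the rate drops to 0.01 at epoch 30
# and to 0.001 at epoch 60.
schedule = [step_decay_lr(0.1, e) for e in (0, 29, 30, 60)]
```

Deep-learning frameworks ship equivalents of this schedule (e.g. a stepwise LR scheduler), so in practice one would use the framework's scheduler rather than hand-rolling it.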

Automatic severity classification of diabetic retinopathy based on densenet and convolutional block attention module

MM Farag, M Fouad, AT Abdel-Hamid - IEEE Access, 2022 - ieeexplore.ieee.org
Diabetic Retinopathy (DR), a complication that develops due to heightened blood glucose
levels, is deemed one of the most sight-threatening diseases. Unfortunately, DR screening is …

Natural synthetic anomalies for self-supervised anomaly detection and localization

HM Schlüter, J Tan, B Hou, B Kainz - European Conference on Computer …, 2022 - Springer
We introduce a simple and intuitive self-supervision task, Natural Synthetic Anomalies
(NSA), for training an end-to-end model for anomaly detection and localization using only�…

Dive into the details of self-supervised learning for medical image analysis

C Zhang, H Zheng, Y Gu - Medical Image Analysis, 2023 - Elsevier
Self-supervised learning (SSL) has achieved remarkable performance in various medical
imaging tasks by dint of priors from massive unlabeled data. However, regarding a specific …