Self-supervised Siamese Autoencoders
Publication: Contributions to collected editions › Articles in conference proceedings › Research › peer-reviewed
Standard
Advances in Intelligent Data Analysis XXII: 22nd International Symposium on Intelligent Data Analysis, IDA 2024, Proceedings. Ed. / Ioanna Miliou; Panagiotis Papapetrou; Nico Piatkowski. Springer Science and Business Media Deutschland GmbH, 2024. pp. 117-128 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 14641 LNCS).
Bibtex
@inproceedings{Baier2024SidAE,
  title     = {Self-supervised Siamese Autoencoders},
  author    = {Baier, Friederike and Mair, Sebastian and Fadel, Samuel G.},
  editor    = {Miliou, Ioanna and Papapetrou, Panagiotis and Piatkowski, Nico},
  booktitle = {Advances in Intelligent Data Analysis XXII: 22nd International Symposium on Intelligent Data Analysis, IDA 2024, Proceedings},
  series    = {Lecture Notes in Computer Science},
  volume    = {14641},
  pages     = {117--128},
  publisher = {Springer Science and Business Media Deutschland GmbH},
  year      = {2024},
  doi       = {10.1007/978-3-031-58547-0_10},
  isbn      = {978-3-031-58546-3}
}
RIS
TY - CHAP
T1 - Self-supervised Siamese Autoencoders
AU - Baier, Friederike
AU - Mair, Sebastian
AU - Fadel, Samuel G.
N1 - Conference code: 22
PY - 2024
Y1 - 2024
N2 - In contrast to fully-supervised models, self-supervised representation learning needs only a fraction of the data to be labeled and often achieves the same or even higher downstream performance. The goal is to pre-train deep neural networks on a self-supervised task so that they can extract meaningful features from raw input data afterwards. Previously, autoencoders and Siamese networks have been successfully employed as feature extractors for tasks such as image classification. However, each has its own shortcomings and benefits. In this paper, we combine their complementary strengths by proposing a new method called SidAE (Siamese denoising autoencoder). Using an image classification downstream task, we show that our model outperforms two self-supervised baselines across multiple data sets and scenarios. Crucially, this includes conditions in which only a small amount of labeled data is available. Empirically, the Siamese component has more impact, but the denoising autoencoder is nevertheless necessary to improve performance.
AB - In contrast to fully-supervised models, self-supervised representation learning needs only a fraction of the data to be labeled and often achieves the same or even higher downstream performance. The goal is to pre-train deep neural networks on a self-supervised task so that they can extract meaningful features from raw input data afterwards. Previously, autoencoders and Siamese networks have been successfully employed as feature extractors for tasks such as image classification. However, each has its own shortcomings and benefits. In this paper, we combine their complementary strengths by proposing a new method called SidAE (Siamese denoising autoencoder). Using an image classification downstream task, we show that our model outperforms two self-supervised baselines across multiple data sets and scenarios. Crucially, this includes conditions in which only a small amount of labeled data is available. Empirically, the Siamese component has more impact, but the denoising autoencoder is nevertheless necessary to improve performance.
KW - denoising autoencoder
KW - image classification
KW - pre-training
KW - representation learning
KW - Self-supervised learning
KW - Siamese networks
KW - Informatics
KW - Business informatics
UR - http://www.scopus.com/inward/record.url?scp=85192241043&partnerID=8YFLogxK
UR - https://www.mendeley.com/catalogue/99889785-310e-31ca-8786-f93a9453f8b6/
U2 - 10.1007/978-3-031-58547-0_10
DO - 10.1007/978-3-031-58547-0_10
M3 - Article in conference proceedings
AN - SCOPUS:85192241043
SN - 978-3-031-58546-3
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 117
EP - 128
BT - Advances in Intelligent Data Analysis XXII
A2 - Miliou, Ioanna
A2 - Papapetrou, Panagiotis
A2 - Piatkowski, Nico
PB - Springer Science and Business Media Deutschland GmbH
T2 - 22nd International Symposium on Intelligent Data Analysis - IDA 2024
Y2 - 24 April 2024 through 26 April 2024
ER -
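Illustrative sketch (PyTorch)
The abstract describes combining a Siamese network with a denoising autoencoder into a single self-supervised pre-training objective. Below is a minimal, illustrative PyTorch-style sketch of how such a combined loss could look, assuming Gaussian input noise, a SimSiam-style predictor with stop-gradient for the Siamese branch, and a simple weighted sum of the two losses. All module shapes, names, and hyperparameters here are assumptions for illustration only; the actual SidAE architecture and training setup are given in the published paper (DOI 10.1007/978-3-031-58547-0_10).

# Minimal, illustrative sketch of a combined Siamese + denoising-autoencoder
# objective. All shapes, the noise model, and the loss weighting are
# assumptions for this example, not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseDenoisingAE(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        # Toy encoder/decoder for 32x32 RGB inputs (e.g., CIFAR-10-sized images).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),   # 32x32 -> 16x16
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 16x16 -> 8x8
            nn.Flatten(), nn.Linear(64 * 8 * 8, dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(dim, 64 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (64, 8, 8)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),  # 8x8 -> 16x16
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),              # 16x16 -> 32x32
        )
        # SimSiam-style prediction head for the Siamese branch (an assumption).
        self.predictor = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim),
        )

    def forward(self, view1, view2, noise_std=0.1, alpha=0.5):
        # Denoising branch: corrupt each view, encode, reconstruct the clean view.
        z1 = self.encoder(view1 + noise_std * torch.randn_like(view1))
        z2 = self.encoder(view2 + noise_std * torch.randn_like(view2))
        recon_loss = (F.mse_loss(self.decoder(z1), view1)
                      + F.mse_loss(self.decoder(z2), view2))
        # Siamese branch: negative cosine similarity with stop-gradient targets.
        p1, p2 = self.predictor(z1), self.predictor(z2)
        siam_loss = -0.5 * (
            F.cosine_similarity(p1, z2.detach(), dim=-1).mean()
            + F.cosine_similarity(p2, z1.detach(), dim=-1).mean()
        )
        # Weighted combination of the two self-supervised objectives.
        return alpha * siam_loss + (1 - alpha) * recon_loss

A training step would draw two random augmentations of the same batch, compute loss = model(view1, view2), and backpropagate as usual; after pre-training, the encoder is reused as a feature extractor for the downstream image classifier.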