Self-supervised Siamese Autoencoders

Publication: Contributions to collected editions › Articles in conference proceedings › Research › peer-reviewed

Authors

  • Friederike Baier
  • Sebastian Mair
  • Samuel G. Fadel

In contrast to fully supervised models, self-supervised representation learning requires only a fraction of the data to be labeled and often achieves the same or even higher downstream performance. The goal is to pre-train deep neural networks on a self-supervised task so that they can afterwards extract meaningful features from raw input data. Previously, autoencoders and Siamese networks have been successfully employed as feature extractors for tasks such as image classification. However, both have their individual shortcomings and benefits. In this paper, we combine their complementary strengths by proposing a new method called SidAE (Siamese denoising autoencoder). Using an image classification downstream task, we show that our model outperforms two self-supervised baselines across multiple data sets and scenarios. Crucially, this includes conditions in which only a small amount of labeled data is available. Empirically, the Siamese component has the greater impact, but the denoising autoencoder is nevertheless necessary to improve performance.
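As a rough illustration of combining the two components, the objective of such a hybrid could pair a denoising-autoencoder reconstruction term with a Siamese agreement term between two views. Below is a minimal NumPy sketch under that assumption; all names, the toy linear encoder/decoder, and the weighting factor `lam` are illustrative choices, not the paper's actual architecture or loss.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear encoder/decoder standing in for deep networks (illustrative only).
W_enc = 0.1 * rng.normal(size=(8, 4))  # input dim 8 -> latent dim 4
W_dec = 0.1 * rng.normal(size=(4, 8))  # latent dim 4 -> input dim 8

def encode(x):
    return np.tanh(x @ W_enc)

def decode(z):
    return z @ W_dec

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

x = rng.normal(size=8)               # clean input
x1 = x + 0.1 * rng.normal(size=8)    # noisy/augmented view 1
x2 = x + 0.1 * rng.normal(size=8)    # noisy/augmented view 2

z1, z2 = encode(x1), encode(x2)

# Denoising-autoencoder term: reconstruct the clean input from a noisy view.
recon_loss = np.mean((decode(z1) - x) ** 2)

# Siamese term: pull the two views' representations together (negative cosine).
siamese_loss = -cosine(z1, z2)

# Combined objective with a hypothetical weighting factor lam.
lam = 0.5
total_loss = lam * recon_loss + (1 - lam) * siamese_loss
```

Minimizing `total_loss` would jointly encourage noise-robust reconstruction and view-invariant representations, which is the intuition behind merging the two self-supervised tasks.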

Original language: English
Title: Advances in Intelligent Data Analysis XXII: 22nd International Symposium on Intelligent Data Analysis, IDA 2024, Proceedings
Editors: Ioanna Miliou, Panagiotis Papapetrou, Nico Piatkowski
Number of pages: 12
Publisher: Springer Science and Business Media Deutschland GmbH
Publication date: 16.04.2024
Pages: 117-128
ISBN (print): 978-3-031-58546-3
ISBN (electronic): 978-3-031-58547-0
Publication status: Published - 16.04.2024
Event: 22nd International Symposium on Intelligent Data Analysis, IDA 2024 - Stockholm, Sweden
Duration: 24.04.2024 - 26.04.2024

Bibliographical note

Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024.
