Self-supervised Siamese Autoencoders

Publication: Contributions to edited volumes › Articles in conference proceedings › Research › peer-reviewed

Standard

Self-supervised Siamese Autoencoders. / Baier, Friederike; Mair, Sebastian; Fadel, Samuel G.
Advances in Intelligent Data Analysis XXII: 22nd International Symposium on Intelligent Data Analysis, IDA 2024, Proceedings. Ed. / Ioanna Miliou; Panagiotis Papapetrou; Nico Piatkowski. Springer Science and Business Media Deutschland GmbH, 2024. pp. 117-128 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 14641 LNCS).


Harvard

Baier, F, Mair, S & Fadel, SG 2024, Self-supervised Siamese Autoencoders. in I Miliou, P Papapetrou & N Piatkowski (eds), Advances in Intelligent Data Analysis XXII: 22nd International Symposium on Intelligent Data Analysis, IDA 2024, Proceedings. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 14641 LNCS, Springer Science and Business Media Deutschland GmbH, pp. 117-128, 22nd International Symposium on Intelligent Data Analysis - IDA 2024, Stockholm, Sweden, 24.04.24. https://doi.org/10.1007/978-3-031-58547-0_10

APA

Baier, F., Mair, S., & Fadel, S. G. (2024). Self-supervised Siamese Autoencoders. In I. Miliou, P. Papapetrou, & N. Piatkowski (Eds.), Advances in Intelligent Data Analysis XXII: 22nd International Symposium on Intelligent Data Analysis, IDA 2024, Proceedings (pp. 117-128). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 14641 LNCS). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-58547-0_10

Vancouver

Baier F, Mair S, Fadel SG. Self-supervised Siamese Autoencoders. In: Miliou I, Papapetrou P, Piatkowski N, editors, Advances in Intelligent Data Analysis XXII: 22nd International Symposium on Intelligent Data Analysis, IDA 2024, Proceedings. Springer Science and Business Media Deutschland GmbH. 2024. p. 117-128. (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)). Epub 2024 Apr 16. doi: 10.1007/978-3-031-58547-0_10

Bibtex

@inbook{3f08d33ada4a45ec97f00119de3c2294,
title = "Self-supervised Siamese Autoencoders",
abstract = "In contrast to fully-supervised models, self-supervised representation learning only needs a fraction of data to be labeled and often achieves the same or even higher downstream performance. The goal is to pre-train deep neural networks on a self-supervised task, making them able to extract meaningful features from raw input data afterwards. Previously, autoencoders and Siamese networks have been successfully employed as feature extractors for tasks such as image classification. However, both have their individual shortcomings and benefits. In this paper, we combine their complementary strengths by proposing a new method called SidAE (Siamese denoising autoencoder). Using an image classification downstream task, we show that our model outperforms two self-supervised baselines across multiple data sets and scenarios. Crucially, this includes conditions in which only a small amount of labeled data is available. Empirically, the Siamese component has more impact, but the denoising autoencoder is nevertheless necessary to improve performance.",
keywords = "denoising autoencoder, image classification, pre-training, representation learning, Self-supervised learning, Siamese networks, Informatics, Business informatics",
author = "Friederike Baier and Sebastian Mair and Fadel, {Samuel G.}",
note = "Publisher Copyright: {\textcopyright} The Author(s), under exclusive license to Springer Nature Switzerland AG 2024.; 22nd International Symposium on Intelligent Data Analysis - IDA 2024, IDA 2024 ; Conference date: 24-04-2024 Through 26-04-2024",
year = "2024",
doi = "10.1007/978-3-031-58547-0_10",
language = "English",
isbn = "978-3-031-58546-3",
series = "Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)",
publisher = "Springer Science and Business Media Deutschland GmbH",
pages = "117--128",
editor = "Ioanna Miliou and Panagiotis Papapetrou and Nico Piatkowski",
booktitle = "Advances in Intelligent Data Analysis XXII",
address = "Germany",
url = "http://www.wikicfp.com/cfp/servlet/event.showcfp?copyownerid=90704&eventid=176233, http://ida2024.org/",

}
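The abstract above describes SidAE only at a high level: two noisy views of an input pass through a shared encoder, and a denoising reconstruction term is combined with a Siamese similarity term. A minimal numpy sketch of that kind of combined objective follows; the linear encoder/decoder, the cosine-similarity term, the noise model, and the weighting `lam` are all illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, W):
    # Toy linear-plus-tanh encoder standing in for the shared deep network.
    return np.tanh(x @ W)

def decoder(z, V):
    # Toy linear decoder used only by the denoising-autoencoder branch.
    return z @ V

def sidae_loss(x, W, V, noise_std=0.1, lam=0.5):
    """Sketch of a SidAE-style objective: reconstruct the clean input
    from a noisy view (denoising AE term) while pulling the embeddings
    of two noisy views of the same input together (Siamese term)."""
    x1 = x + rng.normal(0.0, noise_std, x.shape)  # noisy view 1
    x2 = x + rng.normal(0.0, noise_std, x.shape)  # noisy view 2
    z1, z2 = encoder(x1, W), encoder(x2, W)       # shared encoder weights
    recon = np.mean((decoder(z1, V) - x) ** 2)    # denoise back to clean x
    # Negative cosine similarity between the two embeddings.
    cos = np.sum(z1 * z2) / (np.linalg.norm(z1) * np.linalg.norm(z2) + 1e-8)
    return lam * recon + (1.0 - lam) * (-cos)
```

In a real setup both terms would be minimized jointly by gradient descent over the shared encoder; the sketch only shows how the two loss components combine for a single sample.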

RIS

TY - CHAP

T1 - Self-supervised Siamese Autoencoders

AU - Baier, Friederike

AU - Mair, Sebastian

AU - Fadel, Samuel G.

N1 - Conference code: 22

PY - 2024

Y1 - 2024

N2 - In contrast to fully-supervised models, self-supervised representation learning only needs a fraction of data to be labeled and often achieves the same or even higher downstream performance. The goal is to pre-train deep neural networks on a self-supervised task, making them able to extract meaningful features from raw input data afterwards. Previously, autoencoders and Siamese networks have been successfully employed as feature extractors for tasks such as image classification. However, both have their individual shortcomings and benefits. In this paper, we combine their complementary strengths by proposing a new method called SidAE (Siamese denoising autoencoder). Using an image classification downstream task, we show that our model outperforms two self-supervised baselines across multiple data sets and scenarios. Crucially, this includes conditions in which only a small amount of labeled data is available. Empirically, the Siamese component has more impact, but the denoising autoencoder is nevertheless necessary to improve performance.

AB - In contrast to fully-supervised models, self-supervised representation learning only needs a fraction of data to be labeled and often achieves the same or even higher downstream performance. The goal is to pre-train deep neural networks on a self-supervised task, making them able to extract meaningful features from raw input data afterwards. Previously, autoencoders and Siamese networks have been successfully employed as feature extractors for tasks such as image classification. However, both have their individual shortcomings and benefits. In this paper, we combine their complementary strengths by proposing a new method called SidAE (Siamese denoising autoencoder). Using an image classification downstream task, we show that our model outperforms two self-supervised baselines across multiple data sets and scenarios. Crucially, this includes conditions in which only a small amount of labeled data is available. Empirically, the Siamese component has more impact, but the denoising autoencoder is nevertheless necessary to improve performance.

KW - denoising autoencoder

KW - image classification

KW - pre-training

KW - representation learning

KW - Self-supervised learning

KW - Siamese networks

KW - Informatics

KW - Business informatics

UR - http://www.scopus.com/inward/record.url?scp=85192241043&partnerID=8YFLogxK

UR - https://www.mendeley.com/catalogue/99889785-310e-31ca-8786-f93a9453f8b6/

U2 - 10.1007/978-3-031-58547-0_10

DO - 10.1007/978-3-031-58547-0_10

M3 - Article in conference proceedings

AN - SCOPUS:85192241043

SN - 978-3-031-58546-3

T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

SP - 117

EP - 128

BT - Advances in Intelligent Data Analysis XXII

A2 - Miliou, Ioanna

A2 - Papapetrou, Panagiotis

A2 - Piatkowski, Nico

PB - Springer Science and Business Media Deutschland GmbH

T2 - 22nd International Symposium on Intelligent Data Analysis - IDA 2024

Y2 - 24 April 2024 through 26 April 2024

ER -