Co-EM Support Vector learning

Research output: Contributions to collected editions/works › Article in conference proceedings › Research › peer-review

Standard

Co-EM Support Vector learning. / Brefeld, Ulf; Scheffer, Tobias.
ICML '04: Proceedings of the Twenty-First International Conference on Machine Learning. New York: Association for Computing Machinery, Inc, 2004. p. 121-128 (Proceedings, Twenty-First International Conference on Machine Learning, ICML 2004).

Harvard

Brefeld, U & Scheffer, T 2004, Co-EM Support Vector learning. in ICML '04: Proceedings of the Twenty-First International Conference on Machine Learning. Proceedings, Twenty-First International Conference on Machine Learning, ICML 2004, Association for Computing Machinery, Inc, New York, pp. 121-128, 21st International Conference on Machine Learning - 2004, Banff, Canada, 04.07.04. https://doi.org/10.1145/1015330.1015350

APA

Brefeld, U., & Scheffer, T. (2004). Co-EM Support Vector learning. In ICML '04: Proceedings of the Twenty-First International Conference on Machine Learning (pp. 121-128). (Proceedings, Twenty-First International Conference on Machine Learning, ICML 2004). Association for Computing Machinery, Inc. https://doi.org/10.1145/1015330.1015350

Vancouver

Brefeld U, Scheffer T. Co-EM Support Vector learning. In ICML '04: Proceedings of the Twenty-First International Conference on Machine Learning. New York: Association for Computing Machinery, Inc. 2004. p. 121-128. (Proceedings, Twenty-First International Conference on Machine Learning, ICML 2004). doi: 10.1145/1015330.1015350

Bibtex

@inproceedings{a4397ba89d664a08a6817898eede574b,
title = "Co-EM Support Vector learning",
abstract = "Multi-view algorithms, such as co-training and co-EM, utilize unlabeled data when the available attributes can be split into independent and compatible subsets. Co-EM outperforms co-training for many problems, but it requires the underlying learner to estimate class probabilities, and to learn from probabilistically labeled data. Therefore, co-EM has so far only been studied with naive Bayesian learners. We cast linear classifiers into a probabilistic framework and develop a co-EM version of the Support Vector Machine. We conduct experiments on text classification problems and compare the family of semi-supervised support vector algorithms under different conditions, including violations of the assumptions underlying multi-view learning. For some problems, such as course web page classification, we observe the most accurate results reported so far.",
keywords = "Informatics, Business informatics",
author = "Ulf Brefeld and Tobias Scheffer",
year = "2004",
doi = "10.1145/1015330.1015350",
language = "English",
isbn = "1-58113-838-5",
series = "Proceedings, Twenty-First International Conference on Machine Learning, ICML 2004",
publisher = "Association for Computing Machinery, Inc",
pages = "121--128",
booktitle = "ICML '04: Proceedings of the Twenty-First International Conference on Machine Learning",
address = "United States",
note = "21st International Conference on Machine Learning - 2004, ICML 2004 ; Conference date: 04-07-2004",
url = "https://icml.cc/imls/icml.html"
}

RIS

TY - CONF

T1 - Co-EM Support Vector learning

AU - Brefeld, Ulf

AU - Scheffer, Tobias

N1 - Conference code: 21

PY - 2004

Y1 - 2004

N2 - Multi-view algorithms, such as co-training and co-EM, utilize unlabeled data when the available attributes can be split into independent and compatible subsets. Co-EM outperforms co-training for many problems, but it requires the underlying learner to estimate class probabilities, and to learn from probabilistically labeled data. Therefore, co-EM has so far only been studied with naive Bayesian learners. We cast linear classifiers into a probabilistic framework and develop a co-EM version of the Support Vector Machine. We conduct experiments on text classification problems and compare the family of semi-supervised support vector algorithms under different conditions, including violations of the assumptions underlying multi-view learning. For some problems, such as course web page classification, we observe the most accurate results reported so far.

AB - Multi-view algorithms, such as co-training and co-EM, utilize unlabeled data when the available attributes can be split into independent and compatible subsets. Co-EM outperforms co-training for many problems, but it requires the underlying learner to estimate class probabilities, and to learn from probabilistically labeled data. Therefore, co-EM has so far only been studied with naive Bayesian learners. We cast linear classifiers into a probabilistic framework and develop a co-EM version of the Support Vector Machine. We conduct experiments on text classification problems and compare the family of semi-supervised support vector algorithms under different conditions, including violations of the assumptions underlying multi-view learning. For some problems, such as course web page classification, we observe the most accurate results reported so far.

KW - Informatics

KW - Business informatics

UR - http://www.scopus.com/inward/record.url?scp=14344251008&partnerID=8YFLogxK

UR - https://www.mendeley.com/catalogue/464ca35f-43df-33f6-acdc-c9ae6911d1a7/

U2 - 10.1145/1015330.1015350

DO - 10.1145/1015330.1015350

M3 - Article in conference proceedings

AN - SCOPUS:14344251008

SN - 1-58113-838-5

SN - 978-1-58113-838-2

T3 - Proceedings, Twenty-First International Conference on Machine Learning, ICML 2004

SP - 121

EP - 128

BT - ICML '04: Proceedings of the Twenty-First International Conference on Machine Learning

PB - Association for Computing Machinery, Inc

CY - New York

T2 - 21st International Conference on Machine Learning - 2004

Y2 - 4 July 2004

ER -

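For readers who want to experiment with the idea described in the abstract, the following is a minimal sketch of a co-EM training loop over two attribute views, assuming scikit-learn. It is not the authors' formulation (which casts the SVM into a probabilistic framework so it can learn from probabilistically labeled examples directly): here, soft labels are approximated by hard pseudo-labels weighted with their predicted class probabilities, and all function and variable names (co_em_svm, X1_lab, ...) are hypothetical.

# A hedged sketch of co-EM with SVMs, not the paper's exact method.
# Soft labels are approximated via sample_weight, since sklearn's SVC
# does not accept probabilistic targets.
import numpy as np
from sklearn.svm import SVC

def co_em_svm(X1_lab, X2_lab, y_lab, X1_unlab, X2_unlab,
              iterations=10, C=1.0):
    """Alternate between two views: each view's SVM probabilistically
    labels the unlabeled data used to retrain the other view's SVM."""
    X_lab = (X1_lab, X2_lab)
    X_unlab = (X1_unlab, X2_unlab)
    clf = [SVC(kernel="linear", C=C, probability=True) for _ in range(2)]
    clf[0].fit(X1_lab, y_lab)          # initialize view 1 on labeled data only
    for t in range(iterations):
        src, dst = t % 2, (t + 1) % 2  # view `src` teaches view `dst`
        # E-step: class probabilities for the unlabeled data under view `src`.
        proba = clf[src].predict_proba(X_unlab[src])
        pseudo_y = clf[src].classes_[proba.argmax(axis=1)]
        conf = proba.max(axis=1)       # confidence of each pseudo-label
        # M-step: retrain view `dst` on labeled plus weighted pseudo-labeled data.
        X = np.vstack([X_lab[dst], X_unlab[dst]])
        y = np.concatenate([y_lab, pseudo_y])
        w = np.concatenate([np.ones(len(y_lab)), conf])
        clf[dst].fit(X, y, sample_weight=w)
    return clf

At prediction time, one reasonable choice is to average predict_proba over the two view-specific classifiers and take the most probable class.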