Co-EM Support Vector learning

Research output: Contributions to collected editions/works › Article in conference proceedings › Research › peer-review

Standard

Co-EM Support Vector learning. / Brefeld, Ulf; Scheffer, Tobias.
ICML '04: Proceedings of the Twenty-First International Conference on Machine Learning. New York: Association for Computing Machinery, Inc, 2004. p. 121-128 (Proceedings, Twenty-First International Conference on Machine Learning, ICML 2004).


Harvard

Brefeld, U & Scheffer, T 2004, Co-EM Support Vector learning. in ICML '04: Proceedings of the Twenty-First International Conference on Machine Learning. Proceedings, Twenty-First International Conference on Machine Learning, ICML 2004, Association for Computing Machinery, Inc, New York, pp. 121-128, 21st International Conference on Machine Learning - 2004, Banff, Canada, 31.12.04. https://doi.org/10.1145/1015330.1015350

APA

Brefeld, U., & Scheffer, T. (2004). Co-EM Support Vector learning. In ICML '04: Proceedings of the Twenty-First International Conference on Machine Learning (pp. 121-128). (Proceedings, Twenty-First International Conference on Machine Learning, ICML 2004). Association for Computing Machinery, Inc. https://doi.org/10.1145/1015330.1015350

Vancouver

Brefeld U, Scheffer T. Co-EM Support Vector learning. In ICML '04: Proceedings of the Twenty-First International Conference on Machine Learning. New York: Association for Computing Machinery, Inc. 2004. p. 121-128. (Proceedings, Twenty-First International Conference on Machine Learning, ICML 2004). doi: 10.1145/1015330.1015350

BibTeX

@inproceedings{a4397ba89d664a08a6817898eede574b,
title = "Co-EM Support Vector learning",
abstract = "Multi-view algorithms, such as co-training and co-EM, utilize unlabeled data when the available attributes can be split into independent and compatible subsets. Co-EM outperforms co-training for many problems, but it requires the underlying learner to estimate class probabilities, and to learn from probabilistically labeled data. Therefore, co-EM has so far only been studied with naive Bayesian learners. We cast linear classifiers into a probabilistic framework and develop a co-EM version of the Support Vector Machine. We conduct experiments on text classification problems and compare the family of semi-supervised support vector algorithms under different conditions, including violations of the assumptions underlying multiview learning. For some problems, such as course web page classification, we observe the most accurate results reported so far.",
keywords = "Informatics, Business informatics",
author = "Ulf Brefeld and Tobias Scheffer",
year = "2004",
doi = "10.1145/1015330.1015350",
language = "English",
isbn = "1-58113-838-5 ",
series = "Proceedings, Twenty-First International Conference on Machine Learning, ICML 2004",
publisher = "Association for Computing Machinery, Inc",
pages = "121--128",
booktitle = "Proceeding ICML '04 Proceedings of the twenty-first international conference on Machine learning",
address = "United States",
note = "21st International Conference on Machine Learning - 2004, ICML 2004 ; Conference date: 31-12-2004",
url = "https://icml.cc/imls/icml.html",

}

RIS

TY - CONF

T1 - Co-EM Support Vector learning

AU - Brefeld, Ulf

AU - Scheffer, Tobias

N1 - Conference code: 21

PY - 2004

Y1 - 2004

N2 - Multi-view algorithms, such as co-training and co-EM, utilize unlabeled data when the available attributes can be split into independent and compatible subsets. Co-EM outperforms co-training for many problems, but it requires the underlying learner to estimate class probabilities, and to learn from probabilistically labeled data. Therefore, co-EM has so far only been studied with naive Bayesian learners. We cast linear classifiers into a probabilistic framework and develop a co-EM version of the Support Vector Machine. We conduct experiments on text classification problems and compare the family of semi-supervised support vector algorithms under different conditions, including violations of the assumptions underlying multiview learning. For some problems, such as course web page classification, we observe the most accurate results reported so far.

AB - Multi-view algorithms, such as co-training and co-EM, utilize unlabeled data when the available attributes can be split into independent and compatible subsets. Co-EM outperforms co-training for many problems, but it requires the underlying learner to estimate class probabilities, and to learn from probabilistically labeled data. Therefore, co-EM has so far only been studied with naive Bayesian learners. We cast linear classifiers into a probabilistic framework and develop a co-EM version of the Support Vector Machine. We conduct experiments on text classification problems and compare the family of semi-supervised support vector algorithms under different conditions, including violations of the assumptions underlying multiview learning. For some problems, such as course web page classification, we observe the most accurate results reported so far.

KW - Informatics

KW - Business informatics

UR - http://www.scopus.com/inward/record.url?scp=14344251008&partnerID=8YFLogxK

UR - https://www.mendeley.com/catalogue/464ca35f-43df-33f6-acdc-c9ae6911d1a7/

U2 - 10.1145/1015330.1015350

DO - 10.1145/1015330.1015350

M3 - Article in conference proceedings

AN - SCOPUS:14344251008

SN - 1-58113-838-5

SN - 978-1-58113-838-2

T3 - Proceedings, Twenty-First International Conference on Machine Learning, ICML 2004

SP - 121

EP - 128

BT - ICML '04: Proceedings of the Twenty-First International Conference on Machine Learning

PB - Association for Computing Machinery, Inc

CY - New York

T2 - 21st International Conference on Machine Learning - 2004

Y2 - 31 December 2004

ER -
