Multi-view discriminative sequential learning
Publication: Contributions to collected editions › Articles in conference proceedings › Research › peer-reviewed
Standard
Brefeld, U., Büscher, C., & Scheffer, T. (2005). Multi-view discriminative sequential learning. In Machine Learning: ECML 2005: 16th European Conference on Machine Learning (pp. 60-71). Springer. (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 3720 LNAI).
RIS
TY - CHAP
T1 - Multi-view discriminative sequential learning
AU - Brefeld, Ulf
AU - Büscher, Christoph
AU - Scheffer, Tobias
N1 - Conference code: 16
PY - 2005/1/1
Y1 - 2005/1/1
N2 - Discriminative learning techniques for sequential data have proven to be more effective than generative models for named entity recognition, information extraction, and other tasks of discrimination. However, semi-supervised learning mechanisms that utilize inexpensive unlabeled sequences in addition to few labeled sequences - such as the Baum-Welch algorithm - are available only for generative models. The multi-view approach is based on the principle of maximizing the consensus among multiple independent hypotheses; we develop this principle into a semi-supervised hidden Markov perceptron, and a semi-supervised hidden Markov support vector learning algorithm. Experiments reveal that the resulting procedures utilize unlabeled data effectively and discriminate more accurately than their purely supervised counterparts.
AB - Discriminative learning techniques for sequential data have proven to be more effective than generative models for named entity recognition, information extraction, and other tasks of discrimination. However, semi-supervised learning mechanisms that utilize inexpensive unlabeled sequences in addition to few labeled sequences - such as the Baum-Welch algorithm - are available only for generative models. The multi-view approach is based on the principle of maximizing the consensus among multiple independent hypotheses; we develop this principle into a semi-supervised hidden Markov perceptron, and a semi-supervised hidden Markov support vector learning algorithm. Experiments reveal that the resulting procedures utilize unlabeled data effectively and discriminate more accurately than their purely supervised counterparts.
KW - Informatics
KW - Unlabeled Data
KW - Neural Information Processing System
KW - Named Entity Recognition
KW - Entity Recognition
KW - Word Sense Disambiguation
KW - Business informatics
UR - http://www.scopus.com/inward/record.url?scp=33646415916&partnerID=8YFLogxK
UR - https://www.mendeley.com/catalogue/f5846d33-6389-3b3a-b23a-bdde947af373/
U2 - 10.1007/11564096_11
DO - 10.1007/11564096_11
M3 - Article in conference proceedings
AN - SCOPUS:33646415916
SN - 978-3-540-29243-2
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 60
EP - 71
BT - Machine Learning: ECML 2005
PB - Springer
T2 - 16th European Conference on Machine Learning
Y2 - 3 October 2005 through 7 October 2005
ER -
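
The abstract above outlines the multi-view consensus idea behind the semi-supervised hidden Markov perceptron. Below is a minimal illustrative sketch of that idea, not the authors' published algorithm: two feature views each hold a structured (hidden Markov) perceptron, labeled sequences drive ordinary perceptron updates, and on unlabeled sequences each view is nudged toward the other view's Viterbi prediction whenever the two disagree, a simplified stand-in for maximizing consensus between views. All names, the data layout, and the consensus update rule are assumptions made for readability.

# Illustrative sketch of a two-view, semi-supervised hidden Markov perceptron.
# Data layout and the consensus update are simplifying assumptions.
import numpy as np

def viterbi(x, n_labels, emit_score, trans):
    """Best label sequence under per-position emission scores and a
    label-transition score matrix (standard Viterbi recursion)."""
    T = len(x)
    delta = np.zeros((T, n_labels))
    back = np.zeros((T, n_labels), dtype=int)
    delta[0] = emit_score(x[0])
    for t in range(1, T):
        scores = delta[t - 1][:, None] + trans          # rows: previous label, cols: current label
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + emit_score(x[t])
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

class HMPerceptronView:
    """One view: emission weights over this view's feature vectors plus
    transition weights between adjacent labels."""
    def __init__(self, n_features, n_labels):
        self.W = np.zeros((n_labels, n_features))   # emission weights
        self.T = np.zeros((n_labels, n_labels))     # transition weights
        self.n_labels = n_labels

    def predict(self, x):
        # x: array of shape (T, n_features) holding this view's features
        return viterbi(x, self.n_labels, lambda f: self.W @ f, self.T)

    def update(self, x, y_target, y_pred, lr=1.0):
        # Structured-perceptron update: reward the target labeling, penalize the prediction.
        for t, f in enumerate(x):
            self.W[y_target[t]] += lr * f
            self.W[y_pred[t]] -= lr * f
        for t in range(1, len(y_target)):
            self.T[y_target[t - 1], y_target[t]] += lr
            self.T[y_pred[t - 1], y_pred[t]] -= lr

def train_multiview(labeled, unlabeled, n_feat1, n_feat2, n_labels, epochs=10):
    """labeled: list of ((x1, x2), y); unlabeled: list of (x1, x2),
    where x1 and x2 are the two feature views of the same sequence."""
    v1 = HMPerceptronView(n_feat1, n_labels)
    v2 = HMPerceptronView(n_feat2, n_labels)
    for _ in range(epochs):
        # Supervised pass: ordinary hidden Markov perceptron updates in each view.
        for (x1, x2), y in labeled:
            for view, x in ((v1, x1), (v2, x2)):
                y_hat = view.predict(x)
                if y_hat != list(y):
                    view.update(x, list(y), y_hat)
        # Semi-supervised pass: when the views disagree on an unlabeled sequence,
        # each view is updated toward the other's prediction (consensus surrogate).
        for x1, x2 in unlabeled:
            y1, y2 = v1.predict(x1), v2.predict(x2)
            if y1 != y2:
                v1.update(x1, y2, y1, lr=0.5)
                v2.update(x2, y1, y2, lr=0.5)
    return v1, v2

In this sketch, splitting the feature representation into two views (for example, token surface features versus context features in named entity recognition) is left to the caller; the paper's setting assumes the views are sufficiently independent for consensus on unlabeled data to be informative.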