Efficient and accurate ℓp-norm multiple kernel learning
Publication: Contributions to collected editions › Articles in conference proceedings › Research › peer-reviewed
Standard
Advances in Neural Information Processing Systems 22: Proceedings of the 23rd Annual Conference on Neural Information Processing Systems 2009. Eds. Yoshua Bengio; Dale Schuurmans; John Lafferty; Chris Williams; Aron Culotta. Neural Information Processing Systems, 2009. pp. 997-1005 (Advances in Neural Information Processing Systems; Vol. 22).
Bibtex
@inproceedings{kloft2009efficient,
  title     = {Efficient and accurate $\ell_p$-norm multiple kernel learning},
  author    = {Kloft, Marius and Brefeld, Ulf and Sonnenburg, S{\"o}ren and Laskov, Pavel and M{\"u}ller, Klaus-Robert and Zien, Alexander},
  editor    = {Bengio, Yoshua and Schuurmans, Dale and Lafferty, John and Williams, Chris and Culotta, Aron},
  booktitle = {Advances in Neural Information Processing Systems 22},
  series    = {Advances in Neural Information Processing Systems},
  volume    = {22},
  pages     = {997--1005},
  publisher = {Neural Information Processing Systems},
  year      = {2009},
}
RIS
TY - CHAP
T1 - Efficient and accurate ℓp-norm multiple kernel learning
AU - Kloft, Marius
AU - Brefeld, Ulf
AU - Sonnenburg, Sören
AU - Laskov, Pavel
AU - Müller, Klaus-Robert
AU - Zien, Alexander
N1 - Conference code: 23
PY - 2009
Y1 - 2009
N2 - Learning linear combinations of multiple kernels is an appealing strategy when the right choice of features is unknown. Previous approaches to multiple kernel learning (MKL) promote sparse kernel combinations to support interpretability. Unfortunately, ℓ1-norm MKL is hardly observed to outperform trivial baselines in practical applications. To allow for robust kernel mixtures, we generalize MKL to arbitrary ℓp-norms. We devise new insights on the connection between several existing MKL formulations and develop two efficient interleaved optimization strategies for arbitrary p > 1. Empirically, we demonstrate that the interleaved optimization strategies are much faster compared to the traditionally used wrapper approaches. Finally, we apply ℓp-norm MKL to real-world problems from computational biology, showing that non-sparse MKL achieves accuracies that go beyond the state-of-the-art.
AB - Learning linear combinations of multiple kernels is an appealing strategy when the right choice of features is unknown. Previous approaches to multiple kernel learning (MKL) promote sparse kernel combinations to support interpretability. Unfortunately, ℓ1-norm MKL is hardly observed to outperform trivial baselines in practical applications. To allow for robust kernel mixtures, we generalize MKL to arbitrary ℓp-norms. We devise new insights on the connection between several existing MKL formulations and develop two efficient interleaved optimization strategies for arbitrary p > 1. Empirically, we demonstrate that the interleaved optimization strategies are much faster compared to the traditionally used wrapper approaches. Finally, we apply ℓp-norm MKL to real-world problems from computational biology, showing that non-sparse MKL achieves accuracies that go beyond the state-of-the-art.
KW - Business informatics
UR - http://www.scopus.com/inward/record.url?scp=84858738634&partnerID=8YFLogxK
M3 - Article in conference proceedings
AN - SCOPUS:84858738634
SN - 978-161567911-9
T3 - Advances in Neural Information Processing Systems
SP - 997
EP - 1005
BT - Advances in Neural Information Processing Systems 22
A2 - Bengio, Yoshua
A2 - Schuurmans, Dale
A2 - Lafferty, John
A2 - Williams, Chris
A2 - Culotta, Aron
PB - Neural Information Processing Systems
T2 - 23rd Annual Conference on Neural Information Processing Systems, NIPS 2009
Y2 - 7 December 2009 through 10 December 2009
ER -
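The abstract contrasts the paper's interleaved optimization with the traditional wrapper approach to ℓp-norm MKL. A minimal sketch of that wrapper-style alternation is shown below: an SVM is retrained on the weighted kernel sum, then the kernel weights are updated via the closed-form ℓp-norm rule β_m ∝ ‖w_m‖^{2/(p+1)} (normalized so ‖β‖_p = 1) known from the ℓp-norm MKL literature. This is an illustrative sketch, not the paper's implementation; the function name `lp_mkl` and all parameter choices are assumptions, and scikit-learn's `SVC` with a precomputed kernel stands in for the solver.

```python
import numpy as np
from sklearn.svm import SVC

def lp_mkl(kernels, y, p=2.0, C=1.0, n_iter=20):
    """Wrapper-style lp-norm MKL sketch (illustrative, not the paper's interleaved solver).

    kernels : list of (n, n) precomputed Gram matrices
    y       : labels in {-1, +1}
    p       : norm parameter, p > 1 gives non-sparse kernel weights
    """
    M = len(kernels)
    beta = np.full(M, M ** (-1.0 / p))  # uniform start with ||beta||_p = 1
    for _ in range(n_iter):
        # Step 1: solve the SVM for the current weighted kernel sum
        K = sum(b * Km for b, Km in zip(beta, kernels))
        svm = SVC(C=C, kernel="precomputed").fit(K, y)
        a = np.zeros(len(y))
        a[svm.support_] = svm.dual_coef_.ravel()  # entries are y_i * alpha_i
        # Step 2: squared block norms ||w_m||^2 = beta_m^2 * a^T K_m a
        norms2 = np.array([b ** 2 * (a @ Km @ a) for b, Km in zip(beta, kernels)])
        # Step 3: analytic weight update beta_m ∝ ||w_m||^{2/(p+1)}, renormalized
        beta = norms2 ** (1.0 / (p + 1))
        beta /= np.linalg.norm(beta, ord=p)
    return beta, svm
```

For p → 1 the update drives uninformative kernels toward zero weight (sparse mixtures); for larger p the weights stay spread out, matching the paper's point that non-sparse mixtures can be more robust. The wrapper retrains the SVM to optimality in every outer iteration, which is exactly the cost the interleaved strategies avoid.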