Insights from classifying visual concepts with multiple kernel learning
Research output: Journal contributions › Journal articles › Research › peer-review
In: PLoS ONE, Vol. 7, No. 8, e38897, 24.08.2012.
RIS
TY - JOUR
T1 - Insights from classifying visual concepts with multiple kernel learning
AU - Binder, Alexander
AU - Nakajima, Shinichi
AU - Kloft, Marius
AU - Müller, Christina
AU - Samek, Wojciech
AU - Brefeld, Ulf
AU - Müller, Klaus-Robert
AU - Kawanabe, Motoaki
N1 - FP7 funding number 216886
PY - 2012/8/24
Y1 - 2012/8/24
N2 - Combining information from various image features has become a standard technique in concept recognition tasks. However, the optimal way of fusing the resulting kernel functions is usually unknown in practical applications. Multiple kernel learning (MKL) techniques allow one to determine an optimal linear combination of such similarity matrices. Classical approaches to MKL promote sparse mixtures. Unfortunately, 1-norm regularized MKL variants are often observed to be outperformed by an unweighted sum kernel. The main contributions of this paper are the following: we apply a recently developed non-sparse MKL variant to state-of-the-art concept recognition tasks from the application domain of computer vision. We provide insights on benefits and limits of non-sparse MKL and compare it against its direct competitors, the sum-kernel SVM and sparse MKL. We report empirical results for the PASCAL VOC 2009 Classification and ImageCLEF2010 Photo Annotation challenge data sets. Data sets (kernel matrices) as well as further information are available at http://doc.ml.tu-berlin.de/image_mkl/ (Accessed 2012 Jun 25).
AB - Combining information from various image features has become a standard technique in concept recognition tasks. However, the optimal way of fusing the resulting kernel functions is usually unknown in practical applications. Multiple kernel learning (MKL) techniques allow one to determine an optimal linear combination of such similarity matrices. Classical approaches to MKL promote sparse mixtures. Unfortunately, 1-norm regularized MKL variants are often observed to be outperformed by an unweighted sum kernel. The main contributions of this paper are the following: we apply a recently developed non-sparse MKL variant to state-of-the-art concept recognition tasks from the application domain of computer vision. We provide insights on benefits and limits of non-sparse MKL and compare it against its direct competitors, the sum-kernel SVM and sparse MKL. We report empirical results for the PASCAL VOC 2009 Classification and ImageCLEF2010 Photo Annotation challenge data sets. Data sets (kernel matrices) as well as further information are available at http://doc.ml.tu-berlin.de/image_mkl/ (Accessed 2012 Jun 25).
KW - Informatics
KW - concept formation
KW - controlled study
KW - histogram
KW - image display
KW - intermethod comparison
KW - kernel method
KW - machine learning
KW - scoring system
KW - support vector machine
KW - task performance
KW - validity
KW - Business informatics
UR - http://www.scopus.com/inward/record.url?scp=84865281106&partnerID=8YFLogxK
UR - https://www.mendeley.com/catalogue/bcdfd7f6-4986-310b-b4f3-bbc1bfc4ce2e/
U2 - 10.1371/journal.pone.0038897
DO - 10.1371/journal.pone.0038897
M3 - Journal articles
C2 - 22936970
AN - SCOPUS:84865281106
VL - 7
JO - PLoS ONE
JF - PLoS ONE
SN - 1932-6203
IS - 8
M1 - e38897
ER -
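
The abstract contrasts sparse (1-norm) MKL, non-sparse MKL, and the unweighted sum-kernel SVM as ways of fusing several precomputed similarity matrices. The snippet below is a minimal sketch of that idea, not the authors' implementation: it builds a linear combination of kernel matrices and trains a standard SVM on it, assuming scikit-learn and NumPy, with random toy kernels standing in for the paper's image kernels. A full MKL method would learn the mixture weights rather than fixing them.

```python
# Minimal sketch of kernel fusion for an SVM (assumed setup, not the paper's code):
# several precomputed kernel matrices are combined linearly; with uniform weights
# this is the unweighted sum-kernel baseline discussed in the abstract.
import numpy as np
from sklearn.svm import SVC


def combine_kernels(kernels, weights=None):
    """Linearly combine a list of (n x n) kernel matrices.

    weights=None gives the unweighted (uniform) sum kernel; non-uniform
    weights mimic a fixed mixture such as one an MKL solver would learn.
    """
    kernels = [np.asarray(K, dtype=float) for K in kernels]
    if weights is None:
        weights = np.ones(len(kernels)) / len(kernels)
    return sum(w * K for w, K in zip(weights, kernels))


# Toy usage: three random feature sets stand in for different image descriptors.
rng = np.random.default_rng(0)
n = 60
feature_sets = [rng.normal(size=(n, d)) for d in (5, 10, 20)]
kernels = [X @ X.T for X in feature_sets]      # linear kernels, PSD by construction
y = rng.integers(0, 2, size=n)                 # binary concept labels

K_sum = combine_kernels(kernels)               # unweighted sum kernel
clf = SVC(kernel="precomputed", C=1.0).fit(K_sum, y)
print("training accuracy:", clf.score(K_sum, y))
```

For prediction on new images, the same weighted combination is applied to the test-versus-training kernel blocks (shape n_test x n_train) before calling `predict`; only the way the weights are chosen (uniform, 1-norm sparse, or non-sparse MKL) distinguishes the methods compared in the paper.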