Efficient co-regularised least squares regression
Research output: Contributions to collected editions/works › Article in conference proceedings › Research › peer-review
Standard
Efficient co-regularised least squares regression. / Brefeld, Ulf; Gärtner, Thomas; Scheffer, Tobias; Wrobel, Stefan. Proceedings of the 23rd international conference on Machine learning. ed. / William Cohen. Association for Computing Machinery, Inc, 2006. p. 137-144 (ACM International Conference Proceeding Series; Vol. 148).
RIS
TY - CHAP
T1 - Efficient co-regularised least squares regression
AU - Brefeld, Ulf
AU - Gärtner, Thomas
AU - Scheffer, Tobias
AU - Wrobel, Stefan
N1 - Conference code: 23
PY - 2006/1/1
Y1 - 2006/1/1
N2 - In many applications, unlabelled examples are inexpensive and easy to obtain. Semi-supervised approaches try to utilise such examples to reduce the predictive error. In this paper, we investigate a semi-supervised least squares regression algorithm based on the co-learning approach. Similar to other semi-supervised algorithms, our base algorithm has cubic runtime complexity in the number of unlabelled examples. To be able to handle larger sets of unlabelled examples, we devise a semi-parametric variant that scales linearly in the number of unlabelled examples. Experiments show a significant error reduction by co-regularisation and a large runtime improvement for the semi-parametric approximation. Last but not least, we propose a distributed procedure that can be applied without collecting all data at a single site.
AB - In many applications, unlabelled examples are inexpensive and easy to obtain. Semi-supervised approaches try to utilise such examples to reduce the predictive error. In this paper, we investigate a semi-supervised least squares regression algorithm based on the co-learning approach. Similar to other semi-supervised algorithms, our base algorithm has cubic runtime complexity in the number of unlabelled examples. To be able to handle larger sets of unlabelled examples, we devise a semi-parametric variant that scales linearly in the number of unlabelled examples. Experiments show a significant error reduction by co-regularisation and a large runtime improvement for the semi-parametric approximation. Last but not least, we propose a distributed procedure that can be applied without collecting all data at a single site.
KW - Informatics
KW - Business informatics
UR - http://www.scopus.com/inward/record.url?scp=34250767770&partnerID=8YFLogxK
UR - https://www.mendeley.com/catalogue/48577d33-c292-3153-b395-46acf9d90df8/
U2 - 10.1145/1143844.1143862
DO - 10.1145/1143844.1143862
M3 - Article in conference proceedings
AN - SCOPUS:34250767770
SN - 978-159593383-6
SN - 1595933832
T3 - ACM International Conference Proceeding Series
SP - 137
EP - 144
BT - Proceedings of the 23rd international conference on Machine learning
A2 - Cohen, William
PB - Association for Computing Machinery, Inc
T2 - ICML '06
Y2 - 25 June 2006 through 29 June 2006
ER -
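
The record above only carries the abstract, so the following Python snippet is a minimal, illustrative sketch of a two-view co-regularised least squares regressor of the general kind the abstract describes: squared loss on labelled examples, a norm regulariser, and a disagreement penalty between the views on unlabelled examples, minimised by alternating closed-form updates. The function names (corlsr_two_views, rbf_kernel), the shared RBF kernel for both views, the regularisation constants, and the alternating-minimisation strategy are assumptions made for the sketch, not details taken from the paper.

import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of A and the rows of B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def corlsr_two_views(XL, y, XU, nu=1e-2, lam=1.0, gamma=1.0, iters=20):
    """Two-view co-regularised least squares via alternating minimisation.

    Sketch only: both views share one RBF kernel to keep the code short;
    in a genuine multi-view setting each view would have its own
    kernel or feature representation.
    """
    X = np.vstack([XL, XU])              # labelled + unlabelled inputs
    n_l = len(XL)
    K = rbf_kernel(X, X, gamma)          # full kernel matrix over L ∪ U
    K_L, K_U = K[:n_l], K[n_l:]          # rows for labelled / unlabelled points

    alphas = [np.zeros(len(X)), np.zeros(len(X))]
    for _ in range(iters):
        for v in (0, 1):
            # Fix the other view, solve the resulting regularised least
            # squares problem for this view's expansion coefficients.
            t = K_U @ alphas[1 - v]      # other view's predictions on U
            A = K_L.T @ K_L + nu * K + lam * K_U.T @ K_U
            b = K_L.T @ y + lam * K_U.T @ t
            alphas[v] = np.linalg.solve(A + 1e-8 * np.eye(len(X)), b)

    def predict(Xnew):
        Kx = rbf_kernel(Xnew, X, gamma)
        return 0.5 * (Kx @ alphas[0] + Kx @ alphas[1])

    return predict

# Tiny usage example on synthetic data (hypothetical, just to exercise the sketch).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    XL = rng.uniform(-3, 3, size=(20, 1))
    y = np.sin(XL[:, 0]) + 0.1 * rng.normal(size=20)
    XU = rng.uniform(-3, 3, size=(200, 1))
    f = corlsr_two_views(XL, y, XU)
    print(f(np.array([[0.0], [1.5]])))

Each update solves an (|L|+|U|)-dimensional linear system, which matches the cubic scaling in the number of unlabelled examples mentioned in the abstract; the semi-parametric variant and the distributed procedure that the abstract reports achieve better scaling, and this sketch does not attempt either.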