Efficient co-regularised least squares regression

Publication: Contributions to edited volumes › Articles in conference proceedings › Research › peer-reviewed

Standard

Efficient co-regularised least squares regression. / Brefeld, Ulf; Gärtner, Thomas; Scheffer, Tobias et al.
Proceedings of the 23rd international conference on Machine learning. ed. / William Cohen. Association for Computing Machinery, Inc, 2006. pp. 137-144 (ACM International Conference Proceeding Series; Vol. 148).


Harvard

Brefeld, U, Gärtner, T, Scheffer, T & Wrobel, S 2006, Efficient co-regularised least squares regression. in W Cohen (ed.), Proceedings of the 23rd international conference on Machine learning. ACM International Conference Proceeding Series, vol. 148, Association for Computing Machinery, Inc, pp. 137-144, International Conference on Machine Learning - ICML 2006, Pittsburgh, United States, 25.06.06. https://doi.org/10.1145/1143844.1143862

APA

Brefeld, U., Gärtner, T., Scheffer, T., & Wrobel, S. (2006). Efficient co-regularised least squares regression. In W. Cohen (Ed.), Proceedings of the 23rd international conference on Machine learning (pp. 137-144). (ACM International Conference Proceeding Series; Vol. 148). Association for Computing Machinery, Inc. https://doi.org/10.1145/1143844.1143862

Vancouver

Brefeld U, Gärtner T, Scheffer T, Wrobel S. Efficient co-regularised least squares regression. In: Cohen W, editor, Proceedings of the 23rd international conference on Machine learning. Association for Computing Machinery, Inc. 2006. p. 137-144. (ACM International Conference Proceeding Series). doi: 10.1145/1143844.1143862

Bibtex

@inproceedings{d224cc66882a4795951b471f850457f5,
title = "Efficient co-regularised least squares regression",
abstract = "In many applications, unlabelled examples are inexpensive and easy to obtain. Semi-supervised approaches try to utilise such examples to reduce the predictive error. In this paper, we investigate a semi-supervised least squares regression algorithm based on the co-learning approach. Similar to other semi-supervised algorithms, our base algorithm has cubic runtime complexity in the number of unlabelled examples. To be able to handle larger sets of unlabelled examples, we devise a semi-parametric variant that scales linearly in the number of unlabelled examples. Experiments show a significant error reduction by co-regularisation and a large runtime improvement for the semi-parametric approximation. Last but not least, we propose a distributed procedure that can be applied without collecting all data at a single site.",
keywords = "Informatics, Business informatics",
author = "Ulf Brefeld and Thomas G{\"a}rtner and Tobias Scheffer and Stefan Wrobel",
year = "2006",
month = jan,
day = "1",
doi = "10.1145/1143844.1143862",
language = "English",
isbn = "978-159593383-6",
series = "ACM International Conference Proceeding Series",
publisher = "Association for Computing Machinery, Inc",
pages = "137--144",
editor = "William Cohen",
booktitle = "Proceedings of the 23rd international conference on Machine learning",
address = "United States",
note = "ICML '06 ; Conference date: 25-06-2006 Through 29-06-2006",
}

RIS

TY - CHAP

T1 - Efficient co-regularised least squares regression

AU - Brefeld, Ulf

AU - Gärtner, Thomas

AU - Scheffer, Tobias

AU - Wrobel, Stefan

N1 - Conference code: 23

PY - 2006/1/1

Y1 - 2006/1/1

N2 - In many applications, unlabelled examples are inexpensive and easy to obtain. Semi-supervised approaches try to utilise such examples to reduce the predictive error. In this paper, we investigate a semi-supervised least squares regression algorithm based on the co-learning approach. Similar to other semi-supervised algorithms, our base algorithm has cubic runtime complexity in the number of unlabelled examples. To be able to handle larger sets of unlabelled examples, we devise a semi-parametric variant that scales linearly in the number of unlabelled examples. Experiments show a significant error reduction by co-regularisation and a large runtime improvement for the semi-parametric approximation. Last but not least, we propose a distributed procedure that can be applied without collecting all data at a single site.

AB - In many applications, unlabelled examples are inexpensive and easy to obtain. Semi-supervised approaches try to utilise such examples to reduce the predictive error. In this paper, we investigate a semi-supervised least squares regression algorithm based on the co-learning approach. Similar to other semi-supervised algorithms, our base algorithm has cubic runtime complexity in the number of unlabelled examples. To be able to handle larger sets of unlabelled examples, we devise a semi-parametric variant that scales linearly in the number of unlabelled examples. Experiments show a significant error reduction by co-regularisation and a large runtime improvement for the semi-parametric approximation. Last but not least, we propose a distributed procedure that can be applied without collecting all data at a single site.

KW - Informatics

KW - Business informatics

UR - http://www.scopus.com/inward/record.url?scp=34250767770&partnerID=8YFLogxK

UR - https://www.mendeley.com/catalogue/48577d33-c292-3153-b395-46acf9d90df8/

U2 - 10.1145/1143844.1143862

DO - 10.1145/1143844.1143862

M3 - Article in conference proceedings

AN - SCOPUS:34250767770

SN - 978-159593383-6

SN - 1595933832

T3 - ACM International Conference Proceeding Series

SP - 137

EP - 144

BT - Proceedings of the 23rd international conference on Machine learning

A2 - Cohen, William

PB - Association for Computing Machinery, Inc

T2 - ICML '06

Y2 - 25 June 2006 through 29 June 2006

ER -

