Learning shortest paths in word graphs
Publication: Contributions to edited volumes › Articles in conference proceedings › Research › peer-reviewed
Standard
Knowledge Discovery, Data Mining and Machine Learning (KDML-2013). Ed. / Andreas Henrich; Hans-Christian Sperker. Bamberg: Lehrstuhl für Medieninformatik - Universität Bamberg, 2014. pp. 113-116.
RIS
TY - CHAP
T1 - Learning shortest paths in word graphs
AU - Tzouridis, Emmanouil
AU - Brefeld, Ulf
PY - 2014
Y1 - 2014
N2 - In this paper we briefly sketch our work on text summarisation using compression graphs. The task is described as follows: Given a set of related sentences describing the same event, we aim at generating a single sentence that is simply structured, easily understandable, and minimal in terms of the number of words/tokens. Traditionally, sentence compression deals with finding the shortest path in word graphs in an unsupervised setting. The major drawback of this approach is the use of manually crafted heuristics for edge weights. By contrast, we cast sentence compression as a structured prediction problem. Edges of the compression graph are represented by features drawn from adjacent nodes so that corresponding weights are learned by a generalised linear model. Decoding is performed in polynomial time by a generalised shortest path algorithm using loss augmented inference. We report on preliminary results on artificial and real world data. © LWA 2013 - Lernen, Wissen und Adaptivität, Workshop Proceedings. All rights reserved.
KW - Informatics
KW - Business informatics
M3 - Article in conference proceedings
SP - 113
EP - 116
BT - Knowledge Discovery, Data Mining and Machine Learning (KDML-2013)
A2 - Henrich, Andreas
A2 - Sperker, Hans-Christian
PB - Lehrstuhl für Medieninformatik - Universität Bamberg
CY - Bamberg
T2 - Lernen, Wissen und Adaptivität - LWA 2013
Y2 - 7 October 2013 through 9 October 2013
ER -
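The pipeline described in the abstract (merging related sentences into a single word graph and decoding the compression as a shortest path) can be illustrated with a minimal, self-contained sketch. Note that the edge weights below use a simple inverse-frequency heuristic, which is precisely the kind of hand-crafted weighting the paper replaces with weights learned by a generalised linear model; the function names and toy sentences are illustrative, not taken from the paper.

```python
import heapq
from collections import defaultdict

START, END = "<s>", "</s>"

def build_word_graph(sentences):
    """Merge related sentences into one word graph.

    Nodes are lowercased tokens; identical tokens across sentences are
    merged into a single node. Edge weights use an inverse-frequency
    heuristic (1 / count) as a stand-in for learned weights.
    """
    counts = defaultdict(int)
    edges = defaultdict(set)
    for sent in sentences:
        tokens = [START] + sent.lower().split() + [END]
        for a, b in zip(tokens, tokens[1:]):
            counts[(a, b)] += 1
            edges[a].add(b)
    weights = {edge: 1.0 / c for edge, c in counts.items()}
    return edges, weights

def shortest_compression(edges, weights):
    """Dijkstra from START to END; returns the compressed token sequence."""
    dist = {START: 0.0}
    prev = {}
    heap = [(0.0, START)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == END:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v in edges[u]:
            nd = d + weights[(u, v)]
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    # Reconstruct the path backwards from END, then drop the END marker.
    path, node = [], END
    while node != START:
        path.append(node)
        node = prev[node]
    return list(reversed(path))[:-1]

sentences = [
    "the president gave a speech in berlin",
    "the president gave a long speech",
]
edges, weights = build_word_graph(sentences)
print(" ".join(shortest_compression(edges, weights)))
# → the president gave a speech
```

Because frequent transitions get cheap edges, the decoder prefers wording shared across the input sentences, yielding a short sentence supported by several inputs; the paper's contribution is to learn these edge weights from features of adjacent nodes instead.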