Learning shortest paths in word graphs

Research output: Contributions to collected editions/works › Article in conference proceedings › Research › peer-review

Standard

Learning shortest paths in word graphs. / Tzouridis, Emmanouil; Brefeld, Ulf.

Knowledge Discovery, Data Mining and Machine Learning (KDML-2013). ed. / Andreas Henrich; Hans-Christian Sperker. Bamberg : Lehrstuhl für Medieninformatik - Universität Bamberg, 2014. p. 113-116.


Harvard

Tzouridis, E & Brefeld, U 2014, Learning shortest paths in word graphs. in A Henrich & H-C Sperker (eds), Knowledge Discovery, Data Mining and Machine Learning (KDML-2013). Lehrstuhl für Medieninformatik - Universität Bamberg, Bamberg, pp. 113-116, Lernen, Wissen und Adaptivität - LWA 2013, Bamberg, Germany, 07.10.13. <http://www.minf.uni-bamberg.de/lwa2013/proceedings/proceedings_lwa1013.pdf>

APA

Tzouridis, E., & Brefeld, U. (2014). Learning shortest paths in word graphs. In A. Henrich, & H-C. Sperker (Eds.), Knowledge Discovery, Data Mining and Machine Learning (KDML-2013) (pp. 113-116). Lehrstuhl für Medieninformatik - Universität Bamberg. http://www.minf.uni-bamberg.de/lwa2013/proceedings/proceedings_lwa1013.pdf

Vancouver

Tzouridis E, Brefeld U. Learning shortest paths in word graphs. In Henrich A, Sperker H-C, editors, Knowledge Discovery, Data Mining and Machine Learning (KDML-2013). Bamberg: Lehrstuhl für Medieninformatik - Universität Bamberg. 2014. p. 113-116

Bibtex

@inproceedings{2330d69cb181434f90b9992748d32310,
title = "Learning shortest paths in word graphs",
abstract = "In this paper we briefly sketch our work on text summarisation using compression graphs. The task is described as follows: Given a set of related sentences describing the same event, we aim at generating a single sentence that is simply structured, easily understandable, and minimal in terms of the number of words/tokens. Traditionally, sentence compression deals with finding the shortest path in word graphs in an unsupervised setting. The major drawback of this approach is the use of manually crafted heuristics for edge weights. By contrast, we cast sentence compression as a structured prediction problem. Edges of the compression graph are represented by features drawn from adjacent nodes so that corresponding weights are learned by a generalised linear model. Decoding is performed in polynomial time by a generalised shortest path algorithm using loss augmented inference. We report on preliminary results on artificial and real world data. {\textcopyright} LWA 2013 - Lernen, Wissen und Adaptivit{\"a}t, Workshop Proceedings. All rights reserved",
keywords = "Informatics, Business informatics",
author = "Emmanouil Tzouridis and Ulf Brefeld",
year = "2014",
language = "English",
pages = "113--116",
editor = "Andreas Henrich and Hans-Christian Sperker",
booktitle = "Knowledge Discovery, Data Mining and Machine Learning (KDML-2013)",
publisher = "Lehrstuhl f{\"u}r Medieninformatik - Universit{\"a}t Bamberg",
address = "Germany",
note = "Conference date: 07-10-2013 through 09-10-2013",
url = "http://www.minf.uni-bamberg.de/lwa2013/",

}

RIS

TY - CHAP

T1 - Learning shortest paths in word graphs

AU - Tzouridis, Emmanouil

AU - Brefeld, Ulf

PY - 2014

Y1 - 2014

N2 - In this paper we briefly sketch our work on text summarisation using compression graphs. The task is described as follows: Given a set of related sentences describing the same event, we aim at generating a single sentence that is simply structured, easily understandable, and minimal in terms of the number of words/tokens. Traditionally, sentence compression deals with finding the shortest path in word graphs in an unsupervised setting. The major drawback of this approach is the use of manually crafted heuristics for edge weights. By contrast, we cast sentence compression as a structured prediction problem. Edges of the compression graph are represented by features drawn from adjacent nodes so that corresponding weights are learned by a generalised linear model. Decoding is performed in polynomial time by a generalised shortest path algorithm using loss augmented inference. We report on preliminary results on artificial and real world data. © LWA 2013 - Lernen, Wissen und Adaptivität, Workshop Proceedings. All rights reserved

AB - In this paper we briefly sketch our work on text summarisation using compression graphs. The task is described as follows: Given a set of related sentences describing the same event, we aim at generating a single sentence that is simply structured, easily understandable, and minimal in terms of the number of words/tokens. Traditionally, sentence compression deals with finding the shortest path in word graphs in an unsupervised setting. The major drawback of this approach is the use of manually crafted heuristics for edge weights. By contrast, we cast sentence compression as a structured prediction problem. Edges of the compression graph are represented by features drawn from adjacent nodes so that corresponding weights are learned by a generalised linear model. Decoding is performed in polynomial time by a generalised shortest path algorithm using loss augmented inference. We report on preliminary results on artificial and real world data. © LWA 2013 - Lernen, Wissen und Adaptivität, Workshop Proceedings. All rights reserved

KW - Informatics

KW - Business informatics

M3 - Article in conference proceedings

SP - 113

EP - 116

BT - Knowledge Discovery, Data Mining and Machine Learning (KDML-2013)

A2 - Henrich, Andreas

A2 - Sperker, Hans-Christian

PB - Lehrstuhl für Medieninformatik - Universität Bamberg

CY - Bamberg

Y2 - 7 October 2013 through 9 October 2013

ER -
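The abstract describes sentence compression as finding a shortest path in a word graph, with the paper's contribution being learned edge weights instead of hand-crafted heuristics. A minimal sketch of the unsupervised baseline the abstract contrasts against, using uniform edge weights as a stand-in for the learned ones (the example sentences are invented for illustration):

```python
import heapq
from collections import defaultdict

def build_word_graph(sentences):
    # Merge related sentences into one directed word graph: identical
    # tokens become shared nodes; <START>/<END> anchor every sentence.
    edges = defaultdict(set)
    for sent in sentences:
        tokens = ["<START>"] + sent.lower().split() + ["<END>"]
        for a, b in zip(tokens, tokens[1:]):
            edges[a].add(b)
    return edges

def shortest_compression(edges, weight=lambda a, b: 1.0):
    # Dijkstra from <START> to <END>. Uniform weights stand in for the
    # learned edge weights of the paper's structured-prediction model.
    dist, prev = {"<START>": 0.0}, {}
    heap = [(0.0, "<START>")]
    while heap:
        d, u = heapq.heappop(heap)
        if u == "<END>":
            break
        if d > dist.get(u, float("inf")):
            continue
        for v in edges[u]:
            nd = d + weight(u, v)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    # Walk predecessors back to recover the compressed sentence.
    path, node = [], "<END>"
    while node != "<START>":
        path.append(node)
        node = prev[node]
    return " ".join(reversed(path[1:]))  # drop the <END> marker

sentences = [
    "the storm hit the coast on monday",
    "a powerful storm hit the coast",
]
print(shortest_compression(build_word_graph(sentences)))  # → the coast
```

Note that with uniform weights the search collapses to the degenerate output "the coast": the shared node "the" creates a shortcut straight to "coast". This over-compression is exactly the failure mode of heuristic weights that motivates learning them from features of adjacent nodes, as the abstract proposes.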