Graph Conditional Variational Models: Too Complex for Multiagent Trajectories?

Research output: Journal contributions › Conference article in journal › Research › peer-review

Standard

Graph Conditional Variational Models: Too Complex for Multiagent Trajectories? / Rudolph, Yannick; Brefeld, Ulf; Dick, Uwe.

In: Proceedings of Machine Learning Research, Vol. 137, 2020, p. 136-147.



Bibtex

@article{210f9c82d3b5449c990ce45577c5185a,
title = "Graph Conditional Variational Models: Too Complex for Multiagent Trajectories?",
abstract = "Recent advances in modeling multiagent trajectories combine graph architectures such as graph neural networks (GNNs) with conditional variational models (CVMs) such as variational RNNs (VRNNs). Originally, CVMs have been proposed to facilitate learning with multi-modal and structured data and thus seem to perfectly match the requirements of multi-modal multiagent trajectories with their structured output spaces. Empirical results of VRNNs on trajectory data support this assumption. In this paper, we revisit experiments and proposed architectures with additional rigour, ablation runs and baselines. In contrast to common belief, we show that prior results with CVMs on trajectory data might be misleading. Given a neural network with a graph architecture and/or structured output function, variational autoencoding does not seem to contribute statistically significantly to empirical performance. Instead, we show that well-known emission functions do contribute, while coming with less complexity, engineering and computation time. ",
keywords = "Informatics, Business informatics",
author = "Yannick Rudolph and Ulf Brefeld and Uwe Dick",
note = "Proceedings of {"}I Can't Believe It's Not Better!{"} at NeurIPS Workshops, 2020; 34th Conference on Neural Information Processing Systems - NeurIPS 2020: Neural Information Processing Systems Online Conference 2020, NeurIPS 2020; Conference date: 06-12-2020 Through 12-12-2020",
year = "2020",
language = "English",
volume = "137",
pages = "136--147",
journal = "Proceedings of Machine Learning Research",
issn = "2640-3498",
publisher = "MLResearch Press",
url = "https://neurips.cc/virtual/2020/public/index.html, https://proceedings.mlr.press/v137/",

}

RIS

TY - JOUR

T1 - Graph Conditional Variational Models: Too Complex for Multiagent Trajectories?

AU - Rudolph, Yannick

AU - Brefeld, Ulf

AU - Dick, Uwe

N1 - Conference code: 34

PY - 2020

Y1 - 2020

N2 - Recent advances in modeling multiagent trajectories combine graph architectures such as graph neural networks (GNNs) with conditional variational models (CVMs) such as variational RNNs (VRNNs). Originally, CVMs have been proposed to facilitate learning with multi-modal and structured data and thus seem to perfectly match the requirements of multi-modal multiagent trajectories with their structured output spaces. Empirical results of VRNNs on trajectory data support this assumption. In this paper, we revisit experiments and proposed architectures with additional rigour, ablation runs and baselines. In contrast to common belief, we show that prior results with CVMs on trajectory data might be misleading. Given a neural network with a graph architecture and/or structured output function, variational autoencoding does not seem to contribute statistically significantly to empirical performance. Instead, we show that well-known emission functions do contribute, while coming with less complexity, engineering and computation time.

AB - Recent advances in modeling multiagent trajectories combine graph architectures such as graph neural networks (GNNs) with conditional variational models (CVMs) such as variational RNNs (VRNNs). Originally, CVMs have been proposed to facilitate learning with multi-modal and structured data and thus seem to perfectly match the requirements of multi-modal multiagent trajectories with their structured output spaces. Empirical results of VRNNs on trajectory data support this assumption. In this paper, we revisit experiments and proposed architectures with additional rigour, ablation runs and baselines. In contrast to common belief, we show that prior results with CVMs on trajectory data might be misleading. Given a neural network with a graph architecture and/or structured output function, variational autoencoding does not seem to contribute statistically significantly to empirical performance. Instead, we show that well-known emission functions do contribute, while coming with less complexity, engineering and computation time.

KW - Informatics

KW - Business informatics

M3 - Conference article in journal

VL - 137

SP - 136

EP - 147

JO - Proceedings of Machine Learning Research

JF - Proceedings of Machine Learning Research

SN - 2640-3498

T2 - 34th Conference on Neural Information Processing Systems - NeurIPS 2020

Y2 - 6 December 2020 through 12 December 2020

ER -