Graph Conditional Variational Models: Too Complex for Multiagent Trajectories?
Publication: Journal contributions › Conference articles in journals › Research › peer-reviewed
Standard
in: Proceedings of Machine Learning Research, Vol. 137, 2020, pp. 136-147.
Bibtex
@article{Rudolph2020GraphCVM,
  title   = {Graph Conditional Variational Models: Too Complex for Multiagent Trajectories?},
  author  = {Rudolph, Yannick and Brefeld, Ulf and Dick, Uwe},
  year    = {2020},
  journal = {Proceedings of Machine Learning Research},
  volume  = {137},
  pages   = {136--147},
  issn    = {2640-3498}
}
RIS
TY - JOUR
T1 - Graph Conditional Variational Models: Too Complex for Multiagent Trajectories?
AU - Rudolph, Yannick
AU - Brefeld, Ulf
AU - Dick, Uwe
N1 - Conference code: 34
PY - 2020
Y1 - 2020
N2 - Recent advances in modeling multiagent trajectories combine graph architectures such as graph neural networks (GNNs) with conditional variational models (CVMs) such as variational RNNs (VRNNs). Originally, CVMs have been proposed to facilitate learning with multi-modal and structured data and thus seem to perfectly match the requirements of multi-modal multiagent trajectories with their structured output spaces. Empirical results of VRNNs on trajectory data support this assumption. In this paper, we revisit experiments and proposed architectures with additional rigour, ablation runs and baselines. In contrast to common belief, we show that prior results with CVMs on trajectory data might be misleading. Given a neural network with a graph architecture and/or structured output function, variational autoencoding does not seem to contribute statistically significantly to empirical performance. Instead, we show that well-known emission functions do contribute, while coming with less complexity, engineering and computation time.
AB - Recent advances in modeling multiagent trajectories combine graph architectures such as graph neural networks (GNNs) with conditional variational models (CVMs) such as variational RNNs (VRNNs). Originally, CVMs have been proposed to facilitate learning with multi-modal and structured data and thus seem to perfectly match the requirements of multi-modal multiagent trajectories with their structured output spaces. Empirical results of VRNNs on trajectory data support this assumption. In this paper, we revisit experiments and proposed architectures with additional rigour, ablation runs and baselines. In contrast to common belief, we show that prior results with CVMs on trajectory data might be misleading. Given a neural network with a graph architecture and/or structured output function, variational autoencoding does not seem to contribute statistically significantly to empirical performance. Instead, we show that well-known emission functions do contribute, while coming with less complexity, engineering and computation time.
KW - Informatics
KW - Business informatics
UR - http://www.scopus.com/inward/record.url?scp=85163213279&partnerID=8YFLogxK
M3 - Conference article in journal
VL - 137
SP - 136
EP - 147
JO - Proceedings of Machine Learning Research
JF - Proceedings of Machine Learning Research
SN - 2640-3498
T2 - 34th Conference on Neural Information Processing Systems - NeurIPS 2020
Y2 - 6 December 2020 through 12 December 2020
ER -