Graph Conditional Variational Models: Too Complex for Multiagent Trajectories?

Publication: Contributions to journals › Conference articles in academic journals › Research › peer-reviewed

Standard

Graph Conditional Variational Models: Too Complex for Multiagent Trajectories? / Rudolph, Yannick; Brefeld, Ulf; Dick, Uwe.
In: Proceedings of Machine Learning Research, Vol. 137, 2020, pp. 136-147.

Bibtex

@article{210f9c82d3b5449c990ce45577c5185a,
title = "Graph Conditional Variational Models: Too Complex for Multiagent Trajectories?",
abstract = "Recent advances in modeling multiagent trajectories combine graph architectures such as graph neural networks (GNNs) with conditional variational models (CVMs) such as variational RNNs (VRNNs). Originally, CVMs have been proposed to facilitate learning with multi-modal and structured data and thus seem to perfectly match the requirements of multi-modal multiagent trajectories with their structured output spaces. Empirical results of VRNNs on trajectory data support this assumption. In this paper, we revisit experiments and proposed architectures with additional rigour, ablation runs and baselines. In contrast to common belief, we show that prior results with CVMs on trajectory data might be misleading. Given a neural network with a graph architecture and/or structured output function, variational autoencoding does not seem to contribute statistically significantly to empirical performance. Instead, we show that well-known emission functions do contribute, while coming with less complexity, engineering and computation time. ",
keywords = "Informatics, Business informatics",
author = "Yannick Rudolph and Ulf Brefeld and Uwe Dick",
note = "Publisher Copyright: {\textcopyright} Proceedings of Machine Learning Research 2020.; 34rd Conference on Neural Information Processing Systems - NeurIPS 2020 : Neural Information Processing Systems Online Conference 2020 , NeurIPS 2020 ; Conference date: 06-12-2020 Through 12-12-2020",
year = "2020",
language = "English",
volume = "137",
pages = "136--147",
journal = "Proceedings of Machine Learning Research",
issn = "2640-3498",
publisher = "MLResearch Press",
url = "https://neurips.cc/virtual/2020/public/index.html, https://proceedings.mlr.press/v137/",

}
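
As a concrete reading of the abstract above: the following is a minimal PyTorch sketch, not the authors' code, contrasting one recurrent step of a VRNN-style conditional variational model with a plain RNN whose emission head parameterizes a Gaussian mixture, the kind of well-known emission function the paper finds competitive. All module names, dimensions, and the toy input below are illustrative assumptions.

import torch
import torch.nn as nn

class VRNNStep(nn.Module):
    # One step of a VRNN-style conditional variational model: infer a per-step
    # latent z_t from (x_t, h_{t-1}), then emit x_t conditioned on (z_t, h_{t-1}).
    # The prior network and KL term of a full VRNN are omitted for brevity.
    def __init__(self, x_dim=2, z_dim=8, h_dim=32):
        super().__init__()
        self.posterior = nn.Linear(h_dim + x_dim, 2 * z_dim)  # q(z_t | x_t, h_{t-1})
        self.decoder = nn.Linear(h_dim + z_dim, 2 * x_dim)    # p(x_t | z_t, h_{t-1})
        self.rnn = nn.GRUCell(x_dim + z_dim, h_dim)

    def forward(self, x_t, h):
        mu, logvar = self.posterior(torch.cat([h, x_t], dim=-1)).chunk(2, dim=-1)
        z_t = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        emission = self.decoder(torch.cat([h, z_t], dim=-1)).chunk(2, dim=-1)
        h_next = self.rnn(torch.cat([x_t, z_t], dim=-1), h)
        return emission, h_next

class MixtureRNNStep(nn.Module):
    # The simpler alternative: a deterministic GRU whose emission head outputs
    # mixture weights, means, and log-variances of a k-component Gaussian mixture.
    def __init__(self, x_dim=2, h_dim=32, k=5):
        super().__init__()
        self.k = k
        self.rnn = nn.GRUCell(x_dim, h_dim)
        self.emission = nn.Linear(h_dim, k * (1 + 2 * x_dim))

    def forward(self, x_t, h):
        h_next = self.rnn(x_t, h)
        params = self.emission(h_next)
        logits = params[..., :self.k]                       # mixture weights (logits)
        means, logvars = params[..., self.k:].chunk(2, dim=-1)
        return (logits, means, logvars), h_next

x = torch.randn(4, 2)   # toy batch: 4 agents, 2-D positions at one time step
h = torch.zeros(4, 32)
_, h_vrnn = VRNNStep()(x, h)
_, h_plain = MixtureRNNStep()(x, h)
print(h_vrnn.shape, h_plain.shape)  # both torch.Size([4, 32])

In a graph variant, message passing between agents would precede the GRU update in either model; the sketch only illustrates the complexity gap the abstract refers to: the mixture emission adds a single linear head, while the variational model adds an inference network, sampling, and an extra KL loss term.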

RIS

TY - JOUR

T1 - Graph Conditional Variational Models: Too Complex for Multiagent Trajectories?

AU - Rudolph, Yannick

AU - Brefeld, Ulf

AU - Dick, Uwe

N1 - Conference code: 34

PY - 2020

Y1 - 2020

N2 - Recent advances in modeling multiagent trajectories combine graph architectures such as graph neural networks (GNNs) with conditional variational models (CVMs) such as variational RNNs (VRNNs). Originally, CVMs have been proposed to facilitate learning with multi-modal and structured data and thus seem to perfectly match the requirements of multi-modal multiagent trajectories with their structured output spaces. Empirical results of VRNNs on trajectory data support this assumption. In this paper, we revisit experiments and proposed architectures with additional rigour, ablation runs and baselines. In contrast to common belief, we show that prior results with CVMs on trajectory data might be misleading. Given a neural network with a graph architecture and/or structured output function, variational autoencoding does not seem to contribute statistically significantly to empirical performance. Instead, we show that well-known emission functions do contribute, while coming with less complexity, engineering and computation time.

AB - Recent advances in modeling multiagent trajectories combine graph architectures such as graph neural networks (GNNs) with conditional variational models (CVMs) such as variational RNNs (VRNNs). Originally, CVMs have been proposed to facilitate learning with multi-modal and structured data and thus seem to perfectly match the requirements of multi-modal multiagent trajectories with their structured output spaces. Empirical results of VRNNs on trajectory data support this assumption. In this paper, we revisit experiments and proposed architectures with additional rigour, ablation runs and baselines. In contrast to common belief, we show that prior results with CVMs on trajectory data might be misleading. Given a neural network with a graph architecture and/or structured output function, variational autoencoding does not seem to contribute statistically significantly to empirical performance. Instead, we show that well-known emission functions do contribute, while coming with less complexity, engineering and computation time.

KW - Informatics

KW - Business informatics

UR - http://www.scopus.com/inward/record.url?scp=85163213279&partnerID=8YFLogxK

M3 - Conference article in journal

VL - 137

SP - 136

EP - 147

JO - Proceedings of Machine Learning Research

JF - Proceedings of Machine Learning Research

SN - 2640-3498

T2 - 34th Conference on Neural Information Processing Systems - NeurIPS 2020

Y2 - 6 December 2020 through 12 December 2020

ER -
