Graph Conditional Variational Models: Too Complex for Multiagent Trajectories?

Research output: Journal contributions › Conference article in journal › Research › peer-review

Standard

Graph Conditional Variational Models: Too Complex for Multiagent Trajectories? / Rudolph, Yannick; Brefeld, Ulf; Dick, Uwe.
In: Proceedings of Machine Learning Research, Vol. 137, 2020, p. 136-147.

Research output: Journal contributions › Conference article in journal › Research › peer-review

BibTeX

@article{210f9c82d3b5449c990ce45577c5185a,
title = "Graph Conditional Variational Models: Too Complex for Multiagent Trajectories?",
abstract = "Recent advances in modeling multiagent trajectories combine graph architectures such as graph neural networks (GNNs) with conditional variational models (CVMs) such as variational RNNs (VRNNs). Originally, CVMs have been proposed to facilitate learning with multi-modal and structured data and thus seem to perfectly match the requirements of multi-modal multiagent trajectories with their structured output spaces. Empirical results of VRNNs on trajectory data support this assumption. In this paper, we revisit experiments and proposed architectures with additional rigour, ablation runs and baselines. In contrast to common belief, we show that prior results with CVMs on trajectory data might be misleading. Given a neural network with a graph architecture and/or structured output function, variational autoencoding does not seem to contribute statistically significantly to empirical performance. Instead, we show that well-known emission functions do contribute, while coming with less complexity, engineering and computation time. ",
keywords = "Informatics, Business informatics",
author = "Yannick Rudolph and Ulf Brefeld and Uwe Dick",
note = "Publisher Copyright: {\textcopyright} Proceedings of Machine Learning Research 2020.; 34rd Conference on Neural Information Processing Systems - NeurIPS 2020 : Neural Information Processing Systems Online Conference 2020 , NeurIPS 2020 ; Conference date: 06-12-2020 Through 12-12-2020",
year = "2020",
language = "English",
volume = "137",
pages = "136--147",
journal = "Proceedings of Machine Learning Research",
issn = "2640-3498",
publisher = "MLResearch Press",
url = "https://neurips.cc/virtual/2020/public/index.html, https://proceedings.mlr.press/v137/",

}

RIS

TY - JOUR

T1 - Graph Conditional Variational Models: Too Complex for Multiagent Trajectories?

AU - Rudolph, Yannick

AU - Brefeld, Ulf

AU - Dick, Uwe

N1 - Conference code: 34

PY - 2020

Y1 - 2020

N2 - Recent advances in modeling multiagent trajectories combine graph architectures such as graph neural networks (GNNs) with conditional variational models (CVMs) such as variational RNNs (VRNNs). Originally, CVMs have been proposed to facilitate learning with multi-modal and structured data and thus seem to perfectly match the requirements of multi-modal multiagent trajectories with their structured output spaces. Empirical results of VRNNs on trajectory data support this assumption. In this paper, we revisit experiments and proposed architectures with additional rigour, ablation runs and baselines. In contrast to common belief, we show that prior results with CVMs on trajectory data might be misleading. Given a neural network with a graph architecture and/or structured output function, variational autoencoding does not seem to contribute statistically significantly to empirical performance. Instead, we show that well-known emission functions do contribute, while coming with less complexity, engineering and computation time.

AB - Recent advances in modeling multiagent trajectories combine graph architectures such as graph neural networks (GNNs) with conditional variational models (CVMs) such as variational RNNs (VRNNs). Originally, CVMs have been proposed to facilitate learning with multi-modal and structured data and thus seem to perfectly match the requirements of multi-modal multiagent trajectories with their structured output spaces. Empirical results of VRNNs on trajectory data support this assumption. In this paper, we revisit experiments and proposed architectures with additional rigour, ablation runs and baselines. In contrast to common belief, we show that prior results with CVMs on trajectory data might be misleading. Given a neural network with a graph architecture and/or structured output function, variational autoencoding does not seem to contribute statistically significantly to empirical performance. Instead, we show that well-known emission functions do contribute, while coming with less complexity, engineering and computation time.

KW - Informatics

KW - Business informatics

UR - http://www.scopus.com/inward/record.url?scp=85163213279&partnerID=8YFLogxK

M3 - Conference article in journal

VL - 137

SP - 136

EP - 147

JO - Proceedings of Machine Learning Research

JF - Proceedings of Machine Learning Research

SN - 2640-3498

T2 - 34th Conference on Neural Information Processing Systems - NeurIPS 2020

Y2 - 6 December 2020 through 12 December 2020

ER -
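To make the abstract's contrast concrete, the following is a minimal, hypothetical sketch (in PyTorch; not the authors' code, and omitting the graph architecture discussed in the paper) of the kind of non-variational baseline the abstract argues for: a plain recurrent network whose output is a well-known emission function, here a diagonal Gaussian over the next trajectory step, trained by maximum likelihood rather than by variational autoencoding. All names, shapes, and hyperparameters below are illustrative assumptions.

import torch
import torch.nn as nn

class RNNGaussianEmission(nn.Module):
    # Illustrative baseline: GRU encoder with a diagonal-Gaussian emission head.
    def __init__(self, input_dim=2, hidden_dim=64):
        super().__init__()
        self.rnn = nn.GRU(input_dim, hidden_dim, batch_first=True)
        self.mean_head = nn.Linear(hidden_dim, input_dim)
        self.logvar_head = nn.Linear(hidden_dim, input_dim)

    def forward(self, x):
        # x: (batch, time, 2) agent positions; parameterize a Gaussian over the next step.
        h, _ = self.rnn(x)
        return self.mean_head(h), self.logvar_head(h)

def gaussian_nll(mean, logvar, target):
    # Negative log-likelihood under a diagonal Gaussian (up to an additive constant).
    return 0.5 * (logvar + (target - mean) ** 2 / logvar.exp()).sum(-1).mean()

# Usage sketch: one gradient step on random stand-in trajectories.
model = RNNGaussianEmission()
traj = torch.randn(8, 20, 2)                    # 8 trajectories, 20 time steps, (x, y)
mean, logvar = model(traj[:, :-1])              # condition on past positions
loss = gaussian_nll(mean, logvar, traj[:, 1:])  # predict the next position
loss.backward()

One natural way to capture multi-modality in this non-variational setting, in the spirit of the "well-known emission functions" the abstract refers to, would be to swap the Gaussian head for a mixture-density output; the recurrent backbone and the maximum-likelihood training loop stay unchanged.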
