Masked autoencoder for multiagent trajectories
Research output: Journal contributions › Journal articles › Research › peer-review
In: Machine Learning, Vol. 114, No. 2, 44, 2025.
RIS
TY - JOUR
T1 - Masked autoencoder for multiagent trajectories
AU - Rudolph, Yannick
AU - Brefeld, Ulf
N1 - Part of 1 collection: Special Issue on Machine Learning in Soccer
PY - 2025
Y1 - 2025
N2 - Automatically labeling trajectories of multiple agents is key to behavioral analyses but usually requires a large amount of manual annotations. This also applies to the domain of team sport analyses. In this paper, we specifically show how pretraining transformer models improves the classification performance on tracking data from professional soccer. For this purpose, we propose a novel self-supervised masked autoencoder for multiagent trajectories to effectively learn from only a few labeled sequences. Our approach builds upon a factorized transformer architecture for multiagent trajectory data and employs a masking scheme on the level of individual agent trajectories. As a result, our model allows for a reconstruction of masked trajectory segments while being permutation equivariant with respect to the agent trajectories. In addition to experiments on soccer, we demonstrate the usefulness of the proposed pretraining approach on multiagent pose data from entomology. In contrast to related work, our approach is conceptually much simpler, does not require handcrafted features and naturally allows for permutation invariance in downstream tasks.
AB - Automatically labeling trajectories of multiple agents is key to behavioral analyses but usually requires a large amount of manual annotations. This also applies to the domain of team sport analyses. In this paper, we specifically show how pretraining transformer models improves the classification performance on tracking data from professional soccer. For this purpose, we propose a novel self-supervised masked autoencoder for multiagent trajectories to effectively learn from only a few labeled sequences. Our approach builds upon a factorized transformer architecture for multiagent trajectory data and employs a masking scheme on the level of individual agent trajectories. As a result, our model allows for a reconstruction of masked trajectory segments while being permutation equivariant with respect to the agent trajectories. In addition to experiments on soccer, we demonstrate the usefulness of the proposed pretraining approach on multiagent pose data from entomology. In contrast to related work, our approach is conceptually much simpler, does not require handcrafted features and naturally allows for permutation invariance in downstream tasks.
KW - Business informatics
KW - Self-supervised learning
KW - Multiagent trajectories
KW - Masked autoencoder
KW - Transformer
KW - Tracking data
KW - Soccer
UR - https://link.springer.com/article/10.1007/s10994-024-06647-3
U2 - 10.1007/s10994-024-06647-3
DO - 10.1007/s10994-024-06647-3
M3 - Journal articles
VL - 114
JO - Machine Learning
JF - Machine Learning
SN - 0885-6125
IS - 2
M1 - 44
ER -
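
The abstract above outlines the core mechanism: a masking scheme applied at the level of individual agent trajectories, with a factorized transformer reconstructing the masked segments. Below is a minimal sketch of one such agent-level masking step, assuming a PyTorch tensor layout of (batch, agents, timesteps, features); the function name, mask ratio, and layout are illustrative assumptions and do not reproduce the authors' implementation.

import torch

def mask_agent_trajectories(x, mask_ratio=0.5):
    """Hypothetical agent-level masking (illustration, not the paper's code).

    x: (batch, agents, timesteps, features) multiagent trajectories.
    Returns the visible agents' trajectories, a per-agent mask
    (1 = masked), and the shuffle indices needed to restore agent order.
    """
    batch, agents, timesteps, feats = x.shape
    n_keep = max(1, int(round(agents * (1.0 - mask_ratio))))

    # Draw a random permutation of agents per sample. Relabeling the
    # agents only relabels which trajectories get hidden, keeping the
    # masking step permutation equivariant at the agent level.
    noise = torch.rand(batch, agents, device=x.device)
    ids_shuffle = noise.argsort(dim=1)
    ids_keep = ids_shuffle[:, :n_keep]

    # Gather the trajectories of the agents that remain visible.
    x_visible = torch.gather(
        x, dim=1,
        index=ids_keep[:, :, None, None].expand(-1, -1, timesteps, feats),
    )

    # Binary per-agent mask: 1 marks a fully masked agent trajectory.
    mask = torch.ones(batch, agents, device=x.device)
    mask.scatter_(1, ids_keep, 0.0)
    return x_visible, mask, ids_shuffle

In such a setup, an encoder (for instance a transformer factorized over agents and time) would process only x_visible, and a decoder would reconstruct the hidden trajectories, with the reconstruction loss evaluated only where mask equals 1. This is a sketch under the stated assumptions; the exact masking granularity (whole trajectories versus segments) and architecture follow the paper linked above.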