Abstract:
Predicting multi-agent behavior is crucial for advanced human-robot interactive systems, particularly in domains such as self-driving cars. However, existing trajectory forecasting methods often fail to incorporate dynamic constraints and environmental context. In this study, we examine two prominent models, Trajectron++ and Prediction via Graph-based Policy, as foundations for an enhanced learning process. Trajectron++ is a modular, graph-structured recurrent model that forecasts trajectories for a diverse set of agents while accounting for agent dynamics and heterogeneous inputs such as semantic maps. Prediction via Graph-based Policy, in contrast, combines learned discrete policy rollouts with a decoder focused on subsets of the lane graph: the policy rollouts explore different goals given current observations, capturing lateral variability, while a latent-variable decoder conditioned on the selected lane subsets captures longitudinal variability. We conduct a comprehensive error analysis of both models to pinpoint their failure modes, providing insights for improving motion forecasting capabilities.
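To make the two-stage idea described above concrete, the following is a minimal, self-contained sketch (not the actual Trajectron++ or PGP implementation) of how a discrete policy rollout over a lane graph can capture lateral variability while a latent-variable decoder conditioned on the traversed lanes captures longitudinal variability. The toy lane graph, the function names (rollout_policy, decode, predict), and the way the latent modulates progress along the lane are illustrative assumptions only.

```python
import numpy as np

# Toy lane graph: node -> list of successor lane nodes (assumed for illustration).
LANE_GRAPH = {0: [1, 2], 1: [3], 2: [4], 3: [], 4: []}
NODE_XY = {0: (0.0, 0.0), 1: (5.0, 1.0), 2: (5.0, -1.0), 3: (10.0, 2.0), 4: (10.0, -2.0)}

def rollout_policy(start: int, rng: np.random.Generator, max_steps: int = 3) -> list[int]:
    """Sample one discrete traversal of the lane graph (uniform policy stand-in)."""
    path, node = [start], start
    for _ in range(max_steps):
        succ = LANE_GRAPH[node]
        if not succ:
            break
        node = succ[rng.integers(len(succ))]
        path.append(node)
    return path

def decode(path: list[int], z: np.ndarray, horizon: int = 12) -> np.ndarray:
    """Decode a traversal plus a latent sample z into a (horizon, 2) trajectory."""
    waypoints = np.array([NODE_XY[n] for n in path])
    # Interpolate along the traversed lane; the latent z modulates longitudinal progress.
    t = np.linspace(0.0, 1.0, horizon) ** np.exp(0.3 * z[0])
    idx = t * (len(waypoints) - 1)
    lo = np.floor(idx).astype(int)
    hi = np.minimum(lo + 1, len(waypoints) - 1)
    frac = (idx - lo)[:, None]
    return (1 - frac) * waypoints[lo] + frac * waypoints[hi]

def predict(num_modes: int = 5, seed: int = 0) -> np.ndarray:
    """Produce a set of multimodal trajectory predictions."""
    rng = np.random.default_rng(seed)
    modes = []
    for _ in range(num_modes):
        path = rollout_policy(start=0, rng=rng)  # lateral choice via policy rollout
        z = rng.standard_normal(1)               # longitudinal latent variable
        modes.append(decode(path, z))
    return np.stack(modes)                       # shape: (num_modes, horizon, 2)

if __name__ == "__main__":
    print(predict().shape)  # (5, 12, 2)
```

In this sketch the learned components (graph encoder, policy network, CVAE-style decoder) are replaced by a uniform rollout and a hand-coded interpolation so that the separation of lateral and longitudinal variability is visible in isolation.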