Hybrid Motion Prediction for Autonomous Vehicles using GNN-Transformer Architecture

Date

2025

Publisher

Institute of Electrical and Electronics Engineers Inc.

Abstract

Accurate perception and scene understanding are pivotal in enabling autonomous vehicles to navigate safely and intelligently. This paper presents an integrated perception module comprising three core subcomponents: real-time object detection using YOLOv5, lane-keeping using a CNN-based steering predictor, and a novel motion prediction architecture based on a hybrid Graph Neural Network (GNN) and Transformer design. The system is deployed and validated within the CARLA simulation environment, with custom data generation pipelines designed to mimic real-world behavioral patterns of nearby agents. The novelty lies in the hybrid GNN-Transformer model, which effectively captures both spatial and temporal interactions of dynamic objects for behavior classification. Experimental results demonstrate a high accuracy of 98.75% in classifying behaviors into four categories: Going, Coming, Crossing, and Stopped. This paper details the architecture, dataset creation, training methodology, and performance evaluation, highlighting the hybrid model's potential to improve trajectory planning modules in autonomous systems.
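The abstract describes the hybrid GNN-Transformer model only at a high level. The following is a minimal PyTorch sketch of how such a behavior classifier might be structured: a graph layer aggregates spatial interactions between agents in each frame, a Transformer encoder models the temporal sequence of frame embeddings, and a linear head predicts one of the four behavior classes. All layer sizes, the mean-aggregation graph layer, and the pooling scheme are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of a hybrid GNN-Transformer behavior classifier (assumed design,
# not the authors' code). Inputs: per-frame agent features and a dense adjacency
# matrix describing which agents interact in that frame.
import torch
import torch.nn as nn


class GraphLayer(nn.Module):
    """One round of mean-aggregation message passing over a dense adjacency matrix."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x: (batch, agents, in_dim), adj: (batch, agents, agents)
        deg = adj.sum(-1, keepdim=True).clamp(min=1.0)
        msg = adj @ x / deg                      # average neighbor features
        return torch.relu(self.lin(x + msg))


class GNNTransformerClassifier(nn.Module):
    def __init__(self, feat_dim=4, hidden=64, heads=4, layers=2, classes=4):
        super().__init__()
        self.gnn = GraphLayer(feat_dim, hidden)
        enc = nn.TransformerEncoderLayer(d_model=hidden, nhead=heads,
                                         batch_first=True)
        self.temporal = nn.TransformerEncoder(enc, num_layers=layers)
        self.head = nn.Linear(hidden, classes)   # Going, Coming, Crossing, Stopped

    def forward(self, feats, adj):
        # feats: (batch, time, agents, feat_dim), adj: (batch, time, agents, agents)
        b, t, a, d = feats.shape
        x = self.gnn(feats.reshape(b * t, a, d), adj.reshape(b * t, a, a))
        x = x.mean(dim=1).reshape(b, t, -1)      # pool agents -> per-frame embedding
        x = self.temporal(x)                     # capture temporal interactions
        return self.head(x.mean(dim=1))          # logits over the four behaviors


# Usage with random data: 8 clips, 20 frames, 6 agents, 4 features per agent.
model = GNNTransformerClassifier()
feats = torch.randn(8, 20, 6, 4)
adj = (torch.rand(8, 20, 6, 6) > 0.5).float()
print(model(feats, adj).shape)  # torch.Size([8, 4])
```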

Keywords

autonomous driving, CARLA simulation, GNN, lane keeping, motion prediction, object detection, perception, transformer
