Publication: Deep Q-Network-Based Path Planning in a Simulated Warehouse Environment with SLAM Map Integration and Dynamic Obstacles
Type
Article
Date
2025-09-19
Publisher
Department of Agribusiness, Universitas Muhammadiyah Yogyakarta
Abstract
With the rise of e-commerce and the evolution of robotic technologies, autonomous navigation within warehouse environments has attracted increasing attention. This study presents a simulation-based framework for path planning using Deep Q-Networks (DQN) in a warehouse environment modeled with moving obstacles. The proposed solution integrates a prebuilt map of the environment generated using Simultaneous Localization and Mapping (SLAM), which provides prior spatial knowledge of static obstacles. The reinforcement learning problem is formulated with a state space derived from grayscale images that combine the static SLAM map with dynamic obstacles in real time. The action space consists of four discrete movements for the agent. The reward shaping strategy combines a distance-based reward, which encourages progress toward the goal, with a penalty for collisions. An epsilon-greedy policy with exponential decay balances exploration and exploitation. The system was implemented in the Robot Operating System (ROS) and the Gazebo simulation environment. The agent was trained over 1000 episodes, and metrics such as the number of actions executed to reach the goal and the cumulative reward per episode were analyzed to evaluate convergence. Results across two goal locations show that incorporating the SLAM map enhances learning stability: the agent reached the goal in approximately 150 episodes, nearly double the success count of the baseline without map information, which achieved only 80 successful episodes over the same training run. This indicates faster convergence and reduced exploration overhead due to improved spatial awareness.
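The abstract names two algorithmic ingredients: a distance-based reward with a collision penalty, and an epsilon-greedy policy with exponential decay. A minimal sketch of both is shown below; all numeric values (step penalty, collision penalty, decay rate, epsilon bounds) are illustrative assumptions, as the abstract does not specify them.

```python
import math

def shaped_reward(dist_prev, dist_curr, collided, goal_reached,
                  step_penalty=-0.01, collision_penalty=-1.0, goal_reward=1.0):
    """Distance-based reward shaping: reward progress toward the goal,
    penalize collisions, and apply a small per-step cost.
    All constants are illustrative, not taken from the paper."""
    if collided:
        return collision_penalty
    if goal_reached:
        return goal_reward
    # Positive when the agent moved closer to the goal, negative otherwise.
    return step_penalty + 0.1 * (dist_prev - dist_curr)

def epsilon(episode, eps_start=1.0, eps_min=0.05, decay=0.005):
    """Epsilon-greedy exploration rate with exponential decay per episode:
    starts near eps_start and decays toward eps_min."""
    return eps_min + (eps_start - eps_min) * math.exp(-decay * episode)
```

With these assumed constants, moving toward the goal yields a higher reward than moving away, and the exploration rate falls from 1.0 at episode 0 toward 0.05 over the 1000-episode training run described in the abstract.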
Keywords
Deep Q-Networks, Gazebo Simulation, Path Planning, Robot Operating System, Simultaneous Localization and Mapping
Citation
H. Medagangoda, N. Jayawickrama, R. de Silva, U. S. K. Rajapaksha, and P. K. Abeygunawardhana, “Deep Q-Network-Based Path Planning in a Simulated Warehouse Environment with SLAM Map Integration and Dynamic Obstacles”, J Robot Control (JRC), vol. 6, no. 5, pp. 2284–2294, Sep. 2025.
