Publication:
Deep Q-Network-Based Path Planning in a Simulated Warehouse Environment with SLAM Map Integration and Dynamic Obstacles

dc.contributor.author: Medagangoda, H.
dc.contributor.author: Jayawickrama, N.
dc.contributor.author: de Silva, R.
dc.contributor.author: Rajapaksha, U.S.K.
dc.contributor.author: Abeygunawardhana, P.K.W.
dc.date.accessioned: 2026-03-07T05:12:18Z
dc.date.issued: 2025-09-19
dc.description.abstract: With the rise of e-commerce and the evolution of robotic technologies, the focus on autonomous navigation within warehouse environments has increased. This study presents a simulation-based framework for path planning using Deep Q-Networks (DQN) in a warehouse environment modeled with moving obstacles. The proposed solution integrates a prebuilt map of the environment generated using Simultaneous Localization and Mapping (SLAM), which provides prior spatial knowledge of static obstacles. The reinforcement learning model is formulated with a state space derived from grayscale images that combine the static SLAM-generated map with dynamic obstacles in real time. The action space consists of four discrete movements for the agent. A reward-shaping strategy combines a distance-based reward, which encourages progress toward the goal, with a penalty that discourages collisions. An epsilon-greedy policy with exponential decay is used to balance exploration and exploitation. The system was implemented in the Robot Operating System (ROS) and the Gazebo simulation environment. The agent was trained over 1000 episodes, and metrics such as the number of actions executed to reach the goal and the cumulative reward per episode were analyzed to evaluate the convergence of the proposed solution. The results across two goal locations show that incorporating the SLAM map enhances learning stability: the agent reached a goal approximately 150 times, nearly double the success rate of the baseline without map information, which achieved only 80 successful episodes over the same number of episodes. This indicates faster convergence and reduced exploration overhead due to improved spatial awareness.
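As a rough illustration of the exploration schedule the abstract describes (an epsilon-greedy policy with exponential decay over four discrete actions), the sketch below shows one common formulation. The decay rate, epsilon bounds, and function names are illustrative assumptions, not values taken from the paper.

```python
import math
import random

# Illustrative hyperparameters -- the paper's actual values are not given here.
EPS_START, EPS_END, DECAY_RATE = 1.0, 0.05, 0.005
NUM_ACTIONS = 4  # four discrete movements, per the abstract

def epsilon(episode):
    """Exponentially decaying exploration rate: starts at EPS_START,
    approaches EPS_END as training progresses."""
    return EPS_END + (EPS_START - EPS_END) * math.exp(-DECAY_RATE * episode)

def select_action(q_values, episode):
    """Explore with probability epsilon(episode); otherwise exploit
    the action with the highest estimated Q-value."""
    if random.random() < epsilon(episode):
        return random.randrange(NUM_ACTIONS)
    return max(range(NUM_ACTIONS), key=lambda a: q_values[a])
```

Early episodes are thus dominated by random exploration, while later episodes increasingly exploit the learned Q-values, which is the exploration/exploitation balance the abstract refers to.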
dc.identifier.citation: H. Medagangoda, N. Jayawickrama, R. de Silva, U. S. K. Rajapaksha, and P. K. Abeygunawardhana, “Deep Q-Network-Based Path Planning in a Simulated Warehouse Environment with SLAM Map Integration and Dynamic Obstacles”, J Robot Control (JRC), vol. 6, no. 5, pp. 2284–2294, Sep. 2025.
dc.identifier.doi: https://doi.org/10.18196/jrc.v6i5.27579
dc.identifier.issn: 2715-5056
dc.identifier.uri: https://rda.sliit.lk/handle/123456789/4723
dc.language.iso: en
dc.publisher: Department of Agribusiness, Universitas Muhammadiyah Yogyakarta
dc.relation.ispartofseries: Journal of Robotics and Control (JRC); Volume 6, Issue 5, Pages 2284–2294
dc.subject: Deep Q-Networks
dc.subject: Gazebo Simulation
dc.subject: Path Planning
dc.subject: Robot Operating System
dc.subject: Simultaneous Localization and Mapping
dc.title: Deep Q-Network-Based Path Planning in a Simulated Warehouse Environment with SLAM Map Integration and Dynamic Obstacles
dc.type: Article
dspace.entity.type: Publication

Files

Original bundle

Name:
Deep Q-Network-Based Path Planning in a Simulated Warehouse Environment with SLAM Map Integration and Dynamic Obstacles.pdf
Size:
625 KB
Format:
Adobe Portable Document Format

License bundle

Name:
license.txt
Size:
1.69 KB
Description:
Item-specific license agreed upon submission