The advent of smart industry, propelled by the integration of digital technologies and automation, has revolutionized manufacturing and industrial processes. Robotics and artificial intelligence (AI) are at the forefront of this transformation, driving extensive research into robotic automation and motion planning. Traditional motion planning algorithms, such as artificial potential fields, bio-inspired heuristics, and sampling-based methods, often falter in complex environments because of their high computational demands and tendency to produce suboptimal solutions. Reinforcement learning (RL) has emerged as a powerful alternative, offering real-time adaptation and optimal decision-making in dynamic settings. This paper reviews the inherent limitations of classical motion planning approaches and surveys contemporary trends in RL-based methods, with a focus on their application in smart industry. It highlights the advantages of RL in enhancing adaptability, efficiency, and robustness, particularly in high-dimensional and dynamic environments. Key discussions include the integration of RL with traditional techniques, the extension of RL applications across various domains, and the role of sensor-based approaches in improving motion control.
This work was supported by the National Research Foundation of Korea (NRF) Grant funded by the Korea Government [Ministry of Science and ICT (Information and Communications Technology) (MSIT)] under Grant RS-2024-00358662.