Reinforcement learning does not require explicit robot modeling because it learns directly from data, but it faces temporal and spatial constraints when transferred to real-world environments. In this research, we trained the balancing Furuta pendulum problem, which is difficult to model, in a virtual environment (Unity) and transferred the result to the real world. The goal of the balancing Furuta pendulum problem is to keep the pendulum's end effector in a vertical position. We resolved the temporal and spatial constraints by performing reinforcement learning in the virtual environment. Furthermore, we designed a novel reward function that enables faster and more stable training than two existing reward functions. We validated each reward function by applying it to soft actor-critic (SAC) and proximal policy optimization (PPO). The experimental results show that the cosine reward function trains faster and more stably. Finally, the SAC model trained with the cosine reward function in the virtual environment serves as an optimized controller. Additionally, we evaluated the robustness of this model by transferring it to the real environment.
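For illustration only, the sketch below shows one common way a cosine-shaped balancing reward can be written for a pendulum task; the angle variable, its reference frame, and the absence of additional penalty terms are assumptions, since the abstract does not state the exact formula used in this work.

```python
import numpy as np

def cosine_reward(theta: float) -> float:
    """Hypothetical sketch of a cosine-shaped reward for pendulum balancing.

    `theta` is assumed to be the pendulum angle measured from the upright
    position (rad). The reward peaks at 1 when the end effector is vertical
    and decreases smoothly as the pendulum deviates; the reward actually used
    in the paper may include extra terms (e.g. arm-angle or velocity penalties).
    """
    return float(np.cos(theta))

# Example values: upright, horizontal, and hanging-down configurations.
print(cosine_reward(0.0))        # 1.0
print(cosine_reward(np.pi / 2))  # ~0.0
print(cosine_reward(np.pi))      # -1.0
```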
This work was supported in part by the Ajou University research fund, in part by the National Research Foundation of Korea (NRF) grant funded by the Korea Ministry of Science and ICT (MSIT) (2022R1A2C2093100), and in part by the Korea Environment Industry & Technology Institute (KEITI) through the Digital Infrastructure Building Project for Monitoring, Surveying and Evaluating the Environmental Health Program, funded by the Korea Ministry of Environment (MOE) (2021003330009).