Citation Export
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Lee, Sang Hyun | - |
| dc.contributor.author | Jung, Yoonjae | - |
| dc.contributor.author | Seo, Seung Woo | - |
| dc.date.issued | 2024-01-01 | - |
| dc.identifier.issn | 1558-0016 | - |
| dc.identifier.uri | https://aurora.ajou.ac.kr/handle/2018.oak/38082 | - |
| dc.identifier.uri | https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85204943863&origin=inward | - |
| dc.description.abstract | Hierarchical reinforcement learning (HRL) incorporates temporal abstraction into reinforcement learning (RL) by explicitly taking advantage of hierarchical structures. Modern HRL typically designs a hierarchical agent composed of a high-level policy and low-level policies. The high-level policy selects which low-level policy to activate at a lower frequency, and the activated low-level policy selects an action at each time step. Recent HRL algorithms have achieved performance gains over standard RL algorithms in synthetic navigation tasks. However, these HRL algorithms still cannot be applied to real-world navigation tasks. One of the main challenges is that real-world navigation tasks require an agent to perform safe and interactive behaviors in dynamic environments. In this paper, we propose imagination-augmented HRL (IAHRL), which efficiently integrates imagination into HRL to enable an agent to learn safe and interactive behaviors in real-world navigation tasks. Imagination refers to predicting the consequences of actions without interacting with the actual environment. The key idea behind IAHRL is that the low-level policies imagine safe and structured behaviors, and the high-level policy then infers interactions with surrounding objects by interpreting the imagined behaviors. We also introduce a new attention mechanism that allows the high-level policy to be permutation-invariant to the order of surrounding objects and to prioritize our agent over them. To evaluate IAHRL, we introduce five complex urban driving tasks, which are among the most challenging real-world navigation tasks. The experimental results indicate that IAHRL enables an agent to perform safe and interactive behaviors, achieving higher success rates and lower average episode steps than baselines. | - |
| dc.description.sponsorship | Manuscript received 6 May 2023; revised 23 August 2023, 7 January 2024, and 10 June 2024; accepted 11 August 2024. This work was supported by Korea National Police Agency (KNPA) through the Project “Development of Autonomous Driving Patrol Service for Active Prevention and Response to Traffic Accidents” under Grant RS-2024-00403630. The Associate Editor for this article was S. A. Birrell. (Corresponding author: Seung-Woo Seo.) Sang-Hyun Lee is with the Department of Mobility Engineering, Ajou University, Gyeonggi-do 16499, South Korea (e-mail: sanghyunlee@ajou.ac.kr). | - |
| dc.language.iso | eng | - |
| dc.publisher | Institute of Electrical and Electronics Engineers Inc. | - |
| dc.subject.mesh | Autonomous driving | - |
| dc.subject.mesh | Autonomous Vehicles | - |
| dc.subject.mesh | Hierarchical reinforcement learning | - |
| dc.subject.mesh | High level policies | - |
| dc.subject.mesh | Interactive behavior | - |
| dc.subject.mesh | Motion-planning | - |
| dc.subject.mesh | Navigation tasks | - |
| dc.subject.mesh | Real-world | - |
| dc.subject.mesh | Reinforcement learning algorithms | - |
| dc.subject.mesh | Reinforcement learning | - |
| dc.title | Imagination-Augmented Hierarchical Reinforcement Learning for Safe and Interactive Autonomous Driving in Urban Environments | - |
| dc.type | Article | - |
| dc.citation.endPage | 19535 | - |
| dc.citation.number | 12 | - |
| dc.citation.startPage | 19522 | - |
| dc.citation.title | IEEE Transactions on Intelligent Transportation Systems | - |
| dc.citation.volume | 25 | - |
| dc.identifier.bibliographicCitation | IEEE Transactions on Intelligent Transportation Systems, Vol.25 No.12, pp.19522-19535 | - |
| dc.identifier.doi | 10.1109/TITS.2024.3457776 | - |
| dc.identifier.scopusid | 2-s2.0-85204943863 | - |
| dc.identifier.url | http://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=6979 | - |
| dc.subject.keyword | autonomous driving | - |
| dc.subject.keyword | autonomous vehicles | - |
| dc.subject.keyword | motion planning | - |
| dc.subject.keyword | navigation | - |
| dc.subject.keyword | Reinforcement learning | - |
| dc.subject.keyword | robot learning | - |
| dc.type.other | Article | - |
| dc.identifier.pissn | 1524-9050 | - |
| dc.description.isoa | true | - |
| dc.subject.subarea | Automotive Engineering | - |
| dc.subject.subarea | Mechanical Engineering | - |
| dc.subject.subarea | Computer Science Applications | - |