Ajou University repository

Accelerating Deep Reinforcement Learning Using Human Demonstration Data Based on Dual Replay Buffer Management and Online Frame Skipping
Citations (SCOPUS): 0

Publication Year
2019-04-01
Journal
2019 IEEE International Conference on Big Data and Smart Computing, BigComp 2019 - Proceedings
Publisher
Institute of Electrical and Electronics Engineers Inc.
Citation
2019 IEEE International Conference on Big Data and Smart Computing, BigComp 2019 - Proceedings
Keyword
Deep Learning; Imitation Learning; Reinforcement Learning
Mesh Keyword
Buffer management; Empirical experiments; Human demonstrations; Imitation learning; Real-world learning; Reinforcement learning agent; Reinforcement learning approach; Training process
All Science Classification Codes (ASJC)
Information Systems and Management; Artificial Intelligence; Computer Networks and Communications; Information Systems
Abstract
Human demonstration data plays an important role in the early stage of deep reinforcement learning, both to accelerate the training process and to guide a reinforcement learning agent toward learning complicated policies. However, most current reinforcement learning approaches that use human demonstration data and rewards assume that a sufficient amount of high-quality human demonstration data is available, which is not true for most real-world learning cases, where the amount of expert demonstration data is always limited. To overcome this limitation, we propose a novel deep reinforcement learning approach with dual replay buffer management and online frame skipping for sampling human demonstration data. The dual replay buffer consists of a human replay memory, an actor replay memory, and a replay manager, and it can manage the two replay buffers with independent sampling policies. We also propose online frame skipping to fully utilize the available human data. During training, frame skipping is performed dynamically on the human replay buffer, where all of the human data is stored. Two online frame-skipping methods, FS-ER (Frame Skipping-Experience Replay) and DFS-ER (Dynamic Frame Skipping-Experience Replay), are used to sample data from the human replay buffer. We conducted empirical experiments on four popular Atari games, and the results show that the proposed two online frame-skipping methods with dual replay memory outperform the existing baselines. Specifically, DFS-ER shows the fastest score increase during the reinforcement learning procedure in three of the four experiments, while FS-ER shows the best performance in the remaining environment, which is hard to train because of its sparse reward.
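
The following minimal Python sketch illustrates the general idea described in the abstract: two replay buffers (human and actor) coordinated by a replay manager with independent sampling policies, and a simple frame-skip stride applied to the human buffer. This is not the authors' implementation; the class names, the fixed mixing ratio, and the stride-based skipping are assumptions introduced for illustration (the paper's DFS-ER, in particular, performs the skipping dynamically during training).

# Minimal sketch of a dual replay buffer with a replay manager.
# Names and parameters (ReplayBuffer, ReplayManager, human_ratio, skip)
# are illustrative assumptions, not the paper's actual code.
import random
from collections import deque

class ReplayBuffer:
    """Fixed-capacity FIFO buffer of (state, action, reward, next_state, done)."""
    def __init__(self, capacity):
        self.storage = deque(maxlen=capacity)

    def add(self, transition):
        self.storage.append(transition)

    def sample(self, batch_size):
        return random.sample(self.storage, min(batch_size, len(self.storage)))

    def __len__(self):
        return len(self.storage)

class ReplayManager:
    """Draws a mixed minibatch from a human buffer and an actor buffer,
    each with its own sampling policy: a fixed mixing ratio and an
    FS-ER-style frame-skip stride applied to the human buffer."""
    def __init__(self, human_buffer, actor_buffer, human_ratio=0.25, skip=4):
        self.human_buffer = human_buffer
        self.actor_buffer = actor_buffer
        self.human_ratio = human_ratio   # fraction of each batch drawn from human data
        self.skip = skip                 # stride over stored human transitions

    def sample(self, batch_size):
        n_human = int(batch_size * self.human_ratio)
        # Frame skipping on the human buffer: keep every `skip`-th stored
        # transition, then sample uniformly from that skipped view.
        skipped_view = list(self.human_buffer.storage)[::self.skip]
        human_batch = random.sample(skipped_view, min(n_human, len(skipped_view)))
        actor_batch = self.actor_buffer.sample(batch_size - len(human_batch))
        return human_batch + actor_batch

# Illustrative usage: fill the human buffer with pre-recorded demonstrations
# and the actor buffer with the agent's own experience, then draw mixed batches.
human_buf = ReplayBuffer(capacity=50_000)
actor_buf = ReplayBuffer(capacity=1_000_000)
manager = ReplayManager(human_buf, actor_buf, human_ratio=0.25, skip=4)
# batch = manager.sample(32)  # called inside the learner's update loop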
Language
eng
URI
https://aurora.ajou.ac.kr/handle/2018.oak/36420
https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85064621678&origin=inward
DOI
https://doi.org/10.1109/bigcomp.2019.8679366
Journal URL
http://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=8671661
Type
Conference
Funding
This research was jointly supported by National Research Foundation of Korea (NRF) grants funded by the MSIT (NRF-2018R1D1A1B07043858, NRF-2018R1D1A1B07049923, and NRF-2015R1C1A1A01054305).

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Oh, Sangyoon (오상윤)
Department of Software and Computer Engineering

File Download

  • There are no files associated with this item.