Ajou University repository

Accelerating Deep Reinforcement Learning Using Human Demonstration Data Based on Dual Replay Buffer Management and Online Frame Skipping
Citations (SCOPUS): 0

DC Field | Value | Language
dc.contributor.author | Yeo, Sangho | -
dc.contributor.author | Oh, Sangyoon | -
dc.contributor.author | Lee, Minsu | -
dc.date.issued | 2019-04-01 | -
dc.identifier.uri | https://aurora.ajou.ac.kr/handle/2018.oak/36420 | -
dc.identifier.uri | https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85064621678&origin=inward | -
dc.description.abstract | Human demonstration data plays an important role in the early stage of deep reinforcement learning, accelerating the training process and guiding the agent toward complicated policies. However, most current reinforcement learning approaches that use human demonstration data and reward assume that a sufficient amount of high-quality human demonstration data is available, which is not true for most real-world learning cases, where expert demonstration data is limited. To overcome this limitation, we propose a novel deep reinforcement learning approach with dual replay buffer management and online frame skipping for human demonstration data sampling. The dual replay buffer consists of a human replay memory, an actor replay memory, and a replay manager, and it can manage the two replay buffers with independent sampling policies. We also propose online frame skipping to fully utilize the available human data. During the training period, frame skipping is performed dynamically on the human replay buffer, where all of the human data is stored. Two online frame-skipping methods, FS-ER (Frame Skipping-Experience Replay) and DFS-ER (Dynamic Frame Skipping-Experience Replay), are used to sample data from the human replay buffer. We conducted empirical experiments on four popular Atari games, and the results show that our two proposed online frame-skipping methods with the dual replay memory outperform existing baselines. Specifically, DFS-ER shows the fastest score increase during reinforcement learning in three of the four experiments. FS-ER shows the best performance in the remaining environment, which is hard to train because of sparse reward. | -
dc.description.sponsorship | This research was jointly supported by the National Research Foundation of Korea (NRF) funded by the MSIT (NRF-2018R1D1A1B07043858, NRF-2018R1D1A1B07049923, and NRF-2015R1C1A1A01054305). | -
dc.language.iso | eng | -
dc.publisher | Institute of Electrical and Electronics Engineers Inc. | -
dc.subject.mesh | Buffer management | -
dc.subject.mesh | Empirical experiments | -
dc.subject.mesh | Human demonstrations | -
dc.subject.mesh | Imitation learning | -
dc.subject.mesh | Real-world learning | -
dc.subject.mesh | Reinforcement learning agent | -
dc.subject.mesh | Reinforcement learning approach | -
dc.subject.mesh | Training process | -
dc.title | Accelerating Deep Reinforcement Learning Using Human Demonstration Data Based on Dual Replay Buffer Management and Online Frame Skipping | -
dc.type | Conference | -
dc.citation.conferenceDate | 2019.2.27. ~ 2019.3.2. | -
dc.citation.conferenceName | 2019 IEEE International Conference on Big Data and Smart Computing, BigComp 2019 | -
dc.citation.edition | 2019 IEEE International Conference on Big Data and Smart Computing, BigComp 2019 - Proceedings | -
dc.citation.title | 2019 IEEE International Conference on Big Data and Smart Computing, BigComp 2019 - Proceedings | -
dc.identifier.bibliographicCitation | 2019 IEEE International Conference on Big Data and Smart Computing, BigComp 2019 - Proceedings | -
dc.identifier.doi | 10.1109/bigcomp.2019.8679366 | -
dc.identifier.scopusid | 2-s2.0-85064621678 | -
dc.identifier.url | http://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=8671661 | -
dc.subject.keyword | Deep Learning | -
dc.subject.keyword | Imitation Learning | -
dc.subject.keyword | Reinforcement Learning | -
dc.type.other | Conference Paper | -
dc.description.isoa | false | -
dc.subject.subarea | Information Systems and Management | -
dc.subject.subarea | Artificial Intelligence | -
dc.subject.subarea | Computer Networks and Communications | -
dc.subject.subarea | Information Systems | -
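The abstract describes a dual replay buffer (a fixed human replay memory and a growing actor replay memory, coordinated by a replay manager with independent sampling policies) and online frame skipping over the human buffer. The paper itself is not reproduced here, so the following is only a minimal illustrative sketch of that idea; all names, the demo-to-agent mixing ratio, and the frame-skip stride are assumptions, not the authors' implementation.

```python
import random
from collections import deque


class DualReplayBuffer:
    """Illustrative sketch: a fixed human-demonstration buffer plus a
    growing actor (agent) buffer, each sampled with its own policy and
    mixed into one training batch by a simple replay manager."""

    def __init__(self, human_data, actor_capacity=10_000,
                 human_ratio=0.25, skip=4):
        self.human = list(human_data)              # all demo transitions, kept whole
        self.actor = deque(maxlen=actor_capacity)  # agent's own experience
        self.human_ratio = human_ratio             # batch fraction from demos (assumed value)
        self.skip = skip                           # frame-skip stride for demos (assumed value)

    def add(self, transition):
        """Store a transition generated by the learning agent."""
        self.actor.append(transition)

    def sample_human(self, n):
        """Online frame skipping: pick a random start offset, then take
        every `skip`-th demonstration frame before sampling, emulating
        the agent's frame-skipped view of the environment."""
        start = random.randrange(self.skip)
        skipped = self.human[start::self.skip]
        return random.choices(skipped, k=n)

    def sample(self, batch_size):
        """Replay manager: mix demo and agent samples in one batch."""
        n_human = int(batch_size * self.human_ratio)
        n_actor = batch_size - n_human
        if not self.actor:        # before the agent has any experience,
            n_human = batch_size  # draw the whole batch from demonstrations
            n_actor = 0
        batch = self.sample_human(n_human)
        if n_actor:
            batch += random.choices(list(self.actor), k=n_actor)
        return batch
```

A dynamic variant (as in the paper's DFS-ER) would adjust the stride during training rather than fixing `skip`; the fixed-stride version above corresponds loosely to the FS-ER idea.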

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Oh, Sangyoon (오상윤)
Department of Software and Computer Engineering

File Download

  • There are no files associated with this item.