| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | 안상현 | - |
| dc.contributor.author | 김동욱 | - |
| dc.contributor.author | 이관우 | - |
| dc.contributor.author | 이기한 | - |
| dc.contributor.author | 박상철 | - |
| dc.date.issued | 2023-06 | - |
| dc.identifier.issn | 2508-4003 | - |
| dc.identifier.uri | https://aurora.ajou.ac.kr/handle/2018.oak/37833 | - |
| dc.identifier.uri | https://www.kci.go.kr/kciportal/ci/sereArticleSearch/ciSereArtiView.kci?sereArticleSearchBean.artiId=ART002963515 | - |
| dc.description.abstract | This study demonstrates that using the Harmony Search Algorithm (HSA) for hyperparameter optimization in Deep Reinforcement Learning (DeepRL) is effective in environments with well-designed reward functions. To address the reproducibility issue in DeepRL, the algorithm was modified to adopt the best parameters in each generation independently of the harmony memory consideration rate (HMCR) and to prevent the best parameters from being influenced by the pitch adjustment rate (PAR). The objective function was set as the cumulative reward or the terminal reward, depending on the environment. The PPO algorithm parameters and actor-critic network parameters were optimized in five different environments. The results show that the harmony search algorithm can optimize hyperparameters even in large and complex environments with substantial interactions, provided the reward function is well-designed. | - |
| dc.language.iso | Kor | - |
| dc.publisher | 한국CDE학회 | - |
| dc.title | 하모니 서치 알고리즘을 이용한 심층 강화학습 하이퍼파라미터 최적화 | - |
| dc.title.alternative | Hyperparameter Optimization of Deep Reinforcement Learning Using Harmony Search Algorithm | - |
| dc.type | Article | - |
| dc.citation.endPage | 106 | - |
| dc.citation.number | 2 | - |
| dc.citation.startPage | 97 | - |
| dc.citation.title | 한국CDE학회 논문집 | - |
| dc.citation.volume | 28 | - |
| dc.identifier.bibliographicCitation | 한국CDE학회 논문집, Vol.28 No.2, pp.97-106 | - |
| dc.identifier.doi | 10.7315/CDE.2023.097 | - |
| dc.subject.keyword | Hyperparameter tuning | - |
| dc.subject.keyword | Optimization | - |
| dc.subject.keyword | Reinforcement learning | - |
| dc.type.other | Article | - |
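The abstract's modified harmony search can be illustrated with a minimal Python sketch. This is not the authors' code: `toy_reward` is a hypothetical stand-in for the objective, which in the paper is the cumulative or terminal reward of a PPO training run, and the rule of exempting values drawn from the best harmony from pitch adjustment is one plausible reading of the modification described in the abstract.

```python
import random

def harmony_search(objective, bounds, hms=10, hmcr=0.9, par=0.3,
                   iters=300, seed=0):
    """Maximize `objective` over box constraints `bounds` with harmony search.

    Per the modification described in the abstract, values taken from the
    best harmony are never pitch-adjusted, so the best-so-far parameters
    cannot be degraded by PAR.
    """
    rng = random.Random(seed)
    # Harmony memory: `hms` random candidate hyperparameter vectors.
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    scores = [objective(h) for h in memory]
    for _ in range(iters):
        best_i = max(range(hms), key=lambda i: scores[i])
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:
                j = rng.randrange(hms)           # memory consideration
                x = memory[j][d]
                if j != best_i and rng.random() < par:
                    # Pitch adjustment (skipped for the best harmony).
                    x = min(hi, max(lo, x + rng.uniform(-1.0, 1.0)
                                    * 0.05 * (hi - lo)))
            else:
                x = rng.uniform(lo, hi)          # random selection
            new.append(x)
        s = objective(new)
        worst_i = min(range(hms), key=lambda i: scores[i])
        if s > scores[worst_i]:                  # replace worst harmony
            memory[worst_i], scores[worst_i] = new, s
    best_i = max(range(hms), key=lambda i: scores[i])
    return memory[best_i], scores[best_i]

# Hypothetical stand-in objective: in the paper the score would be the
# cumulative or terminal reward of a PPO run using these hyperparameters
# (here: a learning rate and a clipping coefficient).
def toy_reward(h):
    lr, clip = h
    return -((lr - 3e-4) ** 2 * 1e6 + (clip - 0.2) ** 2)

best, score = harmony_search(toy_reward, bounds=[(1e-5, 1e-2), (0.05, 0.5)])
```

In a real setting, `objective` would launch a full PPO training run and return its reward, which is why the paper stresses that the reward function must be well-designed for the search signal to be meaningful.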