Citation Export
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Alenazi, Mohammed J.F. | - |
dc.contributor.author | Ali, Jehad | - |
dc.date.issued | 2024-10-01 | - |
dc.identifier.issn | 1874-4907 | - |
dc.identifier.uri | https://aurora.ajou.ac.kr/handle/2018.oak/34239 | - |
dc.identifier.uri | https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85194697634&origin=inward | - |
dc.description.abstract | The demands of diverse applications necessitate specific communication requirements, such as jitter, packet loss ratio (PLR), and latency, at the end-to-end (E2E) level within the physical layer of software-defined networking (SDN). In heterogeneous network environments, E2E routes traverse multiple domains with varying quality-of-service (QoS) classes of traffic, posing challenges in meeting unique E2E QoS criteria. E2E path performance is therefore critical to meeting application needs, and providing services on the E2E route is challenging because several QoS classes may exist within each domain on the source-to-destination route. This paper presents an SDN architecture leveraging Deep Q-learning (DQL) to ascertain the appropriate QoS class for E2E routes in software-defined heterogeneous networks, thereby reducing overall E2E delay. The proposed DQL model is compared with benchmark reinforcement learning schemes. The framework is validated on real Internet topologies such as Abilene, USNet, and OS3E, assessing not only E2E delay but also jitter, PLR, and throughput, demonstrating its efficacy in optimizing network performance across these metrics. | - |
dc.description.sponsorship | The authors extend their appreciation to Researcher Supporting Project number (RSPD2024R582), King Saud University, Riyadh, Saudi Arabia. | - |
dc.language.iso | eng | - |
dc.publisher | Elsevier B.V. | - |
dc.subject.mesh | Deep Q-learning | - |
dc.subject.mesh | Learning schemes | - |
dc.subject.mesh | Packet loss ratio | - |
dc.subject.mesh | Physical layers | - |
dc.subject.mesh | Q-learning | - |
dc.subject.mesh | Quality-of-service | - |
dc.subject.mesh | Reinforcement learning | - |
dc.subject.mesh | Service class | - |
dc.subject.mesh | Service improvement | - |
dc.subject.mesh | Software-defined networking | - |
dc.title | An effective deep-Q learning scheme for QoS improvement in physical layer of software-defined networks | - |
dc.type | Article | - |
dc.citation.title | Physical Communication | - |
dc.citation.volume | 66 | - |
dc.identifier.bibliographicCitation | Physical Communication, Vol.66 | - |
dc.identifier.doi | 10.1016/j.phycom.2024.102387 | - |
dc.identifier.scopusid | 2-s2.0-85194697634 | - |
dc.identifier.url | https://www.sciencedirect.com/science/journal/18744907 | - |
dc.subject.keyword | Deep Q-learning | - |
dc.subject.keyword | Physical layer | - |
dc.subject.keyword | Reinforcement learning | - |
dc.subject.keyword | Software-defined networking | - |
dc.type.other | Article | - |
dc.description.isoa | false | - |
dc.subject.subarea | Electrical and Electronic Engineering | - |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.