Citation Export
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Oh, Hyung Seok | - |
| dc.contributor.author | Lee, Sang Hoon | - |
| dc.contributor.author | Lee, Seong Whan | - |
| dc.date.issued | 2024-01-01 | - |
| dc.identifier.issn | 2329-9304 | - |
| dc.identifier.uri | https://aurora.ajou.ac.kr/handle/2018.oak/38072 | - |
| dc.identifier.uri | https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85192174099&origin=inward | - |
| dc.description.abstract | Expressive text-to-speech systems have undergone significant advancements owing to prosody modeling, but conventional methods can still be improved. Traditional approaches have relied on the autoregressive method to predict the quantized prosody vector; however, it suffers from the issues of long-term dependency and slow inference. This study proposes a novel approach called DiffProsody in which expressive speech is synthesized using a diffusion-based latent prosody generator and prosody conditional adversarial training. Our findings confirm the effectiveness of our prosody generator in generating a prosody vector. Furthermore, our prosody conditional discriminator significantly improves the quality of the generated speech by accurately emulating prosody. We use denoising diffusion generative adversarial networks to improve the prosody generation speed. Consequently, DiffProsody is capable of generating prosody 16 times faster than the conventional diffusion model. The superior performance of our proposed method has been demonstrated via experiments. | - |
| dc.language.iso | eng | - |
| dc.publisher | Institute of Electrical and Electronics Engineers Inc. | - |
| dc.subject.mesh | Conventional methods | - |
| dc.subject.mesh | De-noising | - |
| dc.subject.mesh | Denoising diffusion model | - |
| dc.subject.mesh | Diffusion model | - |
| dc.subject.mesh | Expressive speech synthesis | - |
| dc.subject.mesh | Prosody generations | - |
| dc.subject.mesh | Prosody generators | - |
| dc.subject.mesh | Prosody modeling | - |
| dc.subject.mesh | Text to speech | - |
| dc.subject.mesh | Text-to-speech system | - |
| dc.title | DiffProsody: Diffusion-Based Latent Prosody Generation for Expressive Speech Synthesis With Prosody Conditional Adversarial Training | - |
| dc.type | Article | - |
| dc.citation.endPage | 2666 | - |
| dc.citation.startPage | 2654 | - |
| dc.citation.title | IEEE/ACM Transactions on Audio, Speech, and Language Processing | - |
| dc.citation.volume | 32 | - |
| dc.identifier.bibliographicCitation | IEEE/ACM Transactions on Audio, Speech, and Language Processing, Vol.32, pp.2654-2666 | - |
| dc.identifier.doi | 10.1109/taslp.2024.3395994 | - |
| dc.identifier.scopusid | 2-s2.0-85192174099 | - |
| dc.identifier.url | http://ieeexplore.ieee.org/servlet/opac?punumber=6570655 | - |
| dc.subject.keyword | denoising diffusion model | - |
| dc.subject.keyword | generative adversarial networks | - |
| dc.subject.keyword | prosody modeling | - |
| dc.subject.keyword | speech synthesis | - |
| dc.subject.keyword | Text-to-speech | - |
| dc.type.other | Article | - |
| dc.identifier.pissn | 2329-9290 | - |
| dc.description.isoa | true | - |
| dc.subject.subarea | Computer Science (miscellaneous) | - |
| dc.subject.subarea | Acoustics and Ultrasonics | - |
| dc.subject.subarea | Computational Mathematics | - |
| dc.subject.subarea | Electrical and Electronic Engineering | - |