Ajou University repository

DiffProsody: Diffusion-Based Latent Prosody Generation for Expressive Speech Synthesis With Prosody Conditional Adversarial Training
Citations (SCOPUS): 12

DC Field                              Value
dc.contributor.author                 Oh, Hyung Seok
dc.contributor.author                 Lee, Sang Hoon
dc.contributor.author                 Lee, Seong Whan
dc.date.issued                        2024-01-01
dc.identifier.issn                    2329-9304
dc.identifier.uri                     https://aurora.ajou.ac.kr/handle/2018.oak/38072
dc.identifier.uri                     https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85192174099&origin=inward
dc.description.abstract               Expressive text-to-speech systems have undergone significant advancements owing to prosody modeling, but conventional methods can still be improved. Traditional approaches have relied on the autoregressive method to predict the quantized prosody vector; however, it suffers from the issues of long-term dependency and slow inference. This study proposes a novel approach called DiffProsody in which expressive speech is synthesized using a diffusion-based latent prosody generator and prosody conditional adversarial training. Our findings confirm the effectiveness of our prosody generator in generating a prosody vector. Furthermore, our prosody conditional discriminator significantly improves the quality of the generated speech by accurately emulating prosody. We use denoising diffusion generative adversarial networks to improve the prosody generation speed. Consequently, DiffProsody is capable of generating prosody 16 times faster than the conventional diffusion model. The superior performance of our proposed method has been demonstrated via experiments.
dc.language.iso                       eng
dc.publisher                          Institute of Electrical and Electronics Engineers Inc.
dc.subject.mesh                       Conventional methods
dc.subject.mesh                       De-noising
dc.subject.mesh                       Denoising diffusion model
dc.subject.mesh                       Diffusion model
dc.subject.mesh                       Expressive speech synthesis
dc.subject.mesh                       Prosody generations
dc.subject.mesh                       Prosody generators
dc.subject.mesh                       Prosody modeling
dc.subject.mesh                       Text to speech
dc.subject.mesh                       Text-to-speech system
dc.title                              DiffProsody: Diffusion-Based Latent Prosody Generation for Expressive Speech Synthesis With Prosody Conditional Adversarial Training
dc.type                               Article
dc.citation.endPage                   2666
dc.citation.startPage                 2654
dc.citation.title                     IEEE/ACM Transactions on Audio Speech and Language Processing
dc.citation.volume                    32
dc.identifier.bibliographicCitation   IEEE/ACM Transactions on Audio Speech and Language Processing, Vol.32, pp.2654-2666
dc.identifier.doi                     10.1109/taslp.2024.3395994
dc.identifier.scopusid                2-s2.0-85192174099
dc.identifier.url                     http://ieeexplore.ieee.org/servlet/opac?punumber=6570655
dc.subject.keyword                    denoising diffusion model
dc.subject.keyword                    generative adversarial networks
dc.subject.keyword                    prosody modeling
dc.subject.keyword                    speech synthesis
dc.subject.keyword                    Text-to-speech
dc.type.other                         Article
dc.identifier.pissn                   23299290
dc.description.isoa                   true
dc.subject.subarea                    Computer Science (miscellaneous)
dc.subject.subarea                    Acoustics and Ultrasonics
dc.subject.subarea                    Computational Mathematics
dc.subject.subarea                    Electrical and Electronic Engineering

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Lee, Sang-Hoon (이상훈)
Department of Software and Computer Engineering


File Download

  • There are no files associated with this item.