Citation Export
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Kim, Heejin | - |
dc.contributor.author | Sohn, Kyung Ah | - |
dc.date.issued | 2020-01-01 | - |
dc.identifier.uri | https://aurora.ajou.ac.kr/handle/2018.oak/36620 | - |
dc.identifier.uri | https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85113520991&origin=inward | - |
dc.description.abstract | The prevalent approach to unsupervised text style transfer is disentanglement of content and style. However, it is difficult to completely separate style information from content. Other approaches allow the latent text representation to contain style and let the target style affect the generated output more than the latent representation does. In both approaches, however, the strength of the style in the generated output cannot be adjusted. Moreover, previous approaches typically perform both sentence reconstruction and style control in a single model, which complicates the overall architecture. In this paper, we address these issues by separating the model into a sentence reconstruction module and a style module. We use a Transformer-based autoencoder for sentence reconstruction, while an adaptive style embedding is learned directly in the style module. Owing to this separation, each module can better focus on its own task. Moreover, we can vary the style strength of the generated sentence by adjusting the style embedding. Our approach therefore not only controls the strength of the style but also simplifies the model architecture. Experimental results show that our approach achieves better style transfer performance and content preservation than previous approaches. | - |
dc.description.sponsorship | This research was supported by the National Research Foundation of Korea grant funded by the Korea government (MSIT) (No. NRF-2019R1A2C1006608). | - |
dc.language.iso | eng | - |
dc.publisher | Association for Computational Linguistics (ACL) | - |
dc.subject.mesh | Auto encoders | - |
dc.subject.mesh | Control task | - |
dc.subject.mesh | Embeddings | - |
dc.subject.mesh | Modeling architecture | - |
dc.subject.mesh | Single models | - |
dc.subject.mesh | Text representation | - |
dc.subject.mesh | Transfer performance | - |
dc.title | How Positive Are You: Text Style Transfer using Adaptive Style Embedding | - |
dc.type | Conference | - |
dc.citation.conferenceDate | 2020.12.8. ~ 2020.12.13. | - |
dc.citation.conferenceName | 28th International Conference on Computational Linguistics, COLING 2020 | - |
dc.citation.edition | COLING 2020 - 28th International Conference on Computational Linguistics, Proceedings of the Conference | - |
dc.citation.endPage | 2125 | - |
dc.citation.startPage | 2115 | - |
dc.citation.title | COLING 2020 - 28th International Conference on Computational Linguistics, Proceedings of the Conference | - |
dc.identifier.bibliographicCitation | COLING 2020 - 28th International Conference on Computational Linguistics, Proceedings of the Conference, pp.2115-2125 | - |
dc.identifier.scopusid | 2-s2.0-85113520991 | - |
dc.identifier.url | https://aclanthology.org/2020.coling-main | - |
dc.type.other | Conference Paper | - |
dc.description.isoa | true | - |
dc.subject.subarea | Computer Science Applications | - |
dc.subject.subarea | Computational Theory and Mathematics | - |
dc.subject.subarea | Theoretical Computer Science | - |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.