Citation Export
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Heesoo, Son | - |
dc.contributor.author | Sangseok, Lee | - |
dc.contributor.author | Sael, Lee | - |
dc.date.issued | 2022-01-01 | - |
dc.identifier.uri | https://aurora.ajou.ac.kr/handle/2018.oak/36785 | - |
dc.identifier.uri | https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85127581639&origin=inward | - |
dc.description.abstract | To comprehensively predict the human condition, it is beneficial to analyze various bio-signals obtained from the human body. Existing multi-modal deep sequence models are often very complex and involve significantly more parameters than single-modal models. However, since the amount of multi-modal signal data is small compared to single-modal data, more effort is needed to reduce the model complexity of multi-modal models. We introduce a multi-modal sequence classification model based on our cross-attention blocks, which aims to reduce the number of parameters involved in cross-referencing different modes. We compare our method with baseline sequential deep learning models, LSTM and Transformer, and a competitor. We test our methods on two public datasets and a dataset obtained from construction works. We show that our model outperforms the compared methods in accuracy and number of parameters as the number of modes increases. | - |
dc.description.sponsorship | This work was supported in part by the National Research Foundation of Korea grant funded by the Korean government (2018R1A5A1060031). (Corresponding author: Lee Sael.) | - |
dc.language.iso | eng | - |
dc.publisher | Institute of Electrical and Electronics Engineers Inc. | - |
dc.subject.mesh | Attention model | - |
dc.subject.mesh | Biosignal processing | - |
dc.subject.mesh | Biosignals | - |
dc.subject.mesh | Cross-attention | - |
dc.subject.mesh | Human bodies | - |
dc.subject.mesh | Human conditions | - |
dc.subject.mesh | Modal models | - |
dc.subject.mesh | Multi-modal | - |
dc.subject.mesh | Sequence models | - |
dc.subject.mesh | Single-modal | - |
dc.title | Cross-Attention Model for Multi-modal Bio-Signal Processing | - |
dc.type | Conference | - |
dc.citation.conferenceDate | 2022.1.17. ~ 2022.1.20. | - |
dc.citation.conferenceName | 2022 IEEE International Conference on Big Data and Smart Computing, BigComp 2022 | - |
dc.citation.edition | Proceedings - 2022 IEEE International Conference on Big Data and Smart Computing, BigComp 2022 | - |
dc.citation.endPage | 46 | - |
dc.citation.startPage | 43 | - |
dc.citation.title | Proceedings - 2022 IEEE International Conference on Big Data and Smart Computing, BigComp 2022 | - |
dc.identifier.bibliographicCitation | Proceedings - 2022 IEEE International Conference on Big Data and Smart Computing, BigComp 2022, pp.43-46 | - |
dc.identifier.doi | 10.1109/bigcomp54360.2022.00018 | - |
dc.identifier.scopusid | 2-s2.0-85127581639 | - |
dc.identifier.url | http://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=9736461 | - |
dc.subject.keyword | bio-signals | - |
dc.subject.keyword | cross-attention | - |
dc.subject.keyword | multi-modal analysis | - |
dc.type.other | Conference Paper | - |
dc.description.isoa | false | - |
dc.subject.subarea | Artificial Intelligence | - |
dc.subject.subarea | Computer Science Applications | - |
dc.subject.subarea | Computer Vision and Pattern Recognition | - |
dc.subject.subarea | Information Systems and Management | - |
dc.subject.subarea | Health Informatics | - |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
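The abstract describes cross-attention blocks that let one bio-signal modality reference another. The paper's specific block design is not given in this record, but the underlying cross-attention mechanism can be sketched in plain numpy; the modality names (ECG, EMG), sequence lengths, and weight matrices below are illustrative assumptions, not the authors' architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(x_a, x_b, w_q, w_k, w_v):
    """Mode A attends to mode B: queries come from x_a, keys/values from x_b."""
    q = x_a @ w_q                      # (T_a, d)
    k = x_b @ w_k                      # (T_b, d)
    v = x_b @ w_v                      # (T_b, d)
    scores = q @ k.T / np.sqrt(q.shape[-1])   # (T_a, T_b) scaled dot products
    return softmax(scores, axis=-1) @ v       # (T_a, d) fused representation

rng = np.random.default_rng(0)
d = 8
ecg = rng.standard_normal((50, d))  # hypothetical: 50 ECG time steps, d features
emg = rng.standard_normal((30, d))  # hypothetical: 30 EMG time steps, d features
w_q, w_k, w_v = (rng.standard_normal((d, d)) for _ in range(3))

out = cross_attention(ecg, emg, w_q, w_k, w_v)
print(out.shape)  # (50, 8): one EMG-informed vector per ECG step
```

Note that the projection matrices are the only learned parameters per mode pair; sharing or factorizing them across pairs is one plausible way a model like the one abstracted above could keep parameter counts low as the number of modes grows.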