Ajou University repository

Cross-Attention Model for Multi-modal Bio-Signal Processing
Citations (SCOPUS): 0

DC Field: Value

dc.contributor.author: Heesoo, Son
dc.contributor.author: Sangseok, Lee
dc.contributor.author: Sael, Lee
dc.date.issued: 2022-01-01
dc.identifier.uri: https://aurora.ajou.ac.kr/handle/2018.oak/36785
dc.identifier.uri: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85127581639&origin=inward
dc.description.abstract: To comprehensively predict human condition, it is beneficial to analyze the various bio-signals obtained from the human body. Existing multi-modal deep sequence models are often very complex, involving significantly more parameters than single-modal models. However, since multi-modal signal data are scarce compared to single-modal data, more effort is needed to reduce the complexity of multi-modal models. We introduce a multi-modal sequence classification model based on our cross-attention blocks, which aim to reduce the number of parameters involved in cross-referencing different modes. We compare our method with baseline sequential deep learning models, LSTM and Transformer, and a competitor. We test our methods on two public datasets and a dataset obtained from construction work. We show that our model outperforms the compared methods in accuracy and number of parameters as the number of modes increases.
dc.description.sponsorship: This work was supported in part by the National Research Foundation of Korea grant funded by the Korean government (2018R1A5A1060031). (Corresponding author: Lee Sael.)
dc.language.iso: eng
dc.publisher: Institute of Electrical and Electronics Engineers Inc.
dc.subject.mesh: Attention model
dc.subject.mesh: Biosignal processing
dc.subject.mesh: Biosignals
dc.subject.mesh: Cross-attention
dc.subject.mesh: Human bodies
dc.subject.mesh: Human conditions
dc.subject.mesh: Modal models
dc.subject.mesh: Multi-modal
dc.subject.mesh: Sequence models
dc.subject.mesh: Single-modal
dc.title: Cross-Attention Model for Multi-modal Bio-Signal Processing
dc.type: Conference
dc.citation.conferenceDate: 2022.1.17. ~ 2022.1.20.
dc.citation.conferenceName: 2022 IEEE International Conference on Big Data and Smart Computing, BigComp 2022
dc.citation.edition: Proceedings - 2022 IEEE International Conference on Big Data and Smart Computing, BigComp 2022
dc.citation.endPage: 46
dc.citation.startPage: 43
dc.citation.title: Proceedings - 2022 IEEE International Conference on Big Data and Smart Computing, BigComp 2022
dc.identifier.bibliographicCitation: Proceedings - 2022 IEEE International Conference on Big Data and Smart Computing, BigComp 2022, pp.43-46
dc.identifier.doi: 10.1109/bigcomp54360.2022.00018
dc.identifier.scopusid: 2-s2.0-85127581639
dc.identifier.url: http://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=9736461
dc.subject.keyword: bio-signals
dc.subject.keyword: cross-attention
dc.subject.keyword: multi-modal analysis
dc.type.other: Conference Paper
dc.description.isoa: false
dc.subject.subarea: Artificial Intelligence
dc.subject.subarea: Computer Science Applications
dc.subject.subarea: Computer Vision and Pattern Recognition
dc.subject.subarea: Information Systems and Management
dc.subject.subarea: Health Informatics
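The abstract above centers on cross-attention between bio-signal modalities. The paper's actual blocks are not reproduced in this record, but as a generic illustration of the mechanism the title refers to, a minimal scaled dot-product cross-attention between two hypothetical signal streams (the `ecg`/`emg` names, dimensions, and random weights are assumptions for illustration only) might look like:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query_mod, key_value_mod, W_q, W_k, W_v):
    # One modality forms the queries; the other supplies keys and values,
    # so each query step attends over the other modality's sequence.
    Q = query_mod @ W_q
    K = key_value_mod @ W_k
    V = key_value_mod @ W_v
    scores = (Q @ K.T) / np.sqrt(K.shape[-1])
    return softmax(scores) @ V

rng = np.random.default_rng(0)
d = 8
ecg = rng.normal(size=(50, d))   # hypothetical 50-step feature sequence
emg = rng.normal(size=(30, d))   # hypothetical 30-step feature sequence
W_q, W_k, W_v = (rng.normal(size=(d, d)) for _ in range(3))

fused = cross_attention(ecg, emg, W_q, W_k, W_v)
print(fused.shape)  # (50, 8): one fused vector per query-modality step
```

Note the parameter-count intuition behind such designs: the three d-by-d projections are reused for the whole cross-referencing step, rather than learning a separate fusion network per modality pair.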

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Lee, Sael (이슬)
Department of Software and Computer Engineering


File Download

  • There are no files associated with this item.