Ajou University repository

Cross-Attention Model for Multi-modal Bio-Signal Processing
Citations (SCOPUS)
0


Publication Year
2022
Journal
Proceedings - 2022 IEEE International Conference on Big Data and Smart Computing, BigComp 2022
Publisher
Institute of Electrical and Electronics Engineers Inc.
Citation
Proceedings - 2022 IEEE International Conference on Big Data and Smart Computing, BigComp 2022, pp.43-46
Keyword
bio-signals; cross-attention; multi-modal analysis
Mesh Keyword
Attention model; Biosignal processing; Biosignals; Cross-attention; Human bodies; Human conditions; Modal models; Multi-modal; Sequence models; Single-modal
All Science Classification Codes (ASJC)
Artificial Intelligence; Computer Science Applications; Computer Vision and Pattern Recognition; Information Systems and Management; Health Informatics
Abstract
In order to comprehensively predict the human condition, it is beneficial to analyze various bio-signals obtained from the human body. Existing multi-modal deep sequence models are often very complex and involve significantly more parameters than single-modal models. However, since the amount of multi-modal signal data is small compared to single-modal data, more effort is needed to reduce the complexity of multi-modal models. We introduce a multi-modal sequence classification model based on our cross-attention blocks, which aim to reduce the number of parameters involved in cross-referencing different modes. We compare our method with baseline sequential deep learning models, LSTM and Transformer, as well as a competing method. We test our method on two public datasets and a dataset obtained from construction work sites. We show that our model outperforms the compared methods in both accuracy and number of parameters as the number of modes increases.
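For readers unfamiliar with the mechanism named in the abstract, below is a minimal PyTorch sketch of a two-mode cross-attention block, written against the standard query/key/value attention formulation. It is not the authors' implementation; the module name, dimensions, and the ECG/EMG example inputs are illustrative assumptions.

import torch
import torch.nn as nn

class CrossAttentionBlock(nn.Module):
    # Hypothetical sketch: lets sequence A attend over sequence B
    # (e.g., two bio-signal modes); not the paper's actual architecture.
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        # Queries come from mode A; keys/values come from mode B, so one
        # shared attention layer cross-references the two modes instead of
        # a separate full encoder per mode pair.
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, mode_a: torch.Tensor, mode_b: torch.Tensor) -> torch.Tensor:
        # mode_a: (batch, len_a, dim); mode_b: (batch, len_b, dim)
        attended, _ = self.attn(query=mode_a, key=mode_b, value=mode_b)
        return self.norm(mode_a + attended)  # residual connection

# Illustrative usage on two synthetic bio-signal sequences.
block = CrossAttentionBlock(dim=64)
ecg = torch.randn(8, 100, 64)  # assumed ECG features: (batch, time, dim)
emg = torch.randn(8, 100, 64)  # assumed EMG features: (batch, time, dim)
fused = block(ecg, emg)        # ECG attends over EMG -> (8, 100, 64)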
Language
eng
URI
https://aurora.ajou.ac.kr/handle/2018.oak/36785
https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85127581639&origin=inward
DOI
https://doi.org/10.1109/bigcomp54360.2022.00018
Journal URL
http://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=9736461
Type
Conference
Funding
This work was supported in part by the National Research Foundation of Korea grant funded by the Korean government (2018R1A5A1060031). (Corresponding author: Lee Sael.)


Related Researcher

Lee, Sael (이슬)
Department of Software and Computer Engineering


File Download

  • There are no files associated with this item.