Ajou University repository

Deep Multi-Modal Network Based Automated Depression Severity Estimation
Citations (SCOPUS)
31


Publication Year
2023-07-01
Publisher
Institute of Electrical and Electronics Engineers Inc.
Citation
IEEE Transactions on Affective Computing, Vol.14, pp.2153-2167
Keyword
Depression; multi-modal factorized bilinear pooling; spatio-temporal networks; temporal attentive pooling; volume local directional structural pattern
Mesh Keyword
Convolutional neural network; Depression; Encodings; Feature extraction; Multi-modal; Multi-modal factorized bilinear pooling; Structural pattern; Temporal attentive pooling; Three-dimensional display; Volume local directional structural pattern
All Science Classification Codes (ASJC)
Software; Human-Computer Interaction
Abstract
Depression is a severe mental illness that impairs a person's capacity to function normally in personal and professional life. The assessment of depression usually requires a comprehensive examination by an expert professional. Recently, machine learning-based automatic depression assessment has received considerable attention as a route to reliable and efficient depression diagnosis. Various techniques for automated depression detection have been developed; however, certain concerns still need to be investigated. In this work, we propose a novel deep multi-modal framework that effectively utilizes facial and verbal cues for automated depression assessment. Specifically, we first partition the audio and video data into fixed-length segments. These segments are then fed into spatio-temporal networks, which capture both spatial and temporal features and assign higher weights to the features that contribute most. In addition, a Volume Local Directional Structural Pattern (VLDSP)-based dynamic feature descriptor is introduced to extract facial dynamics by encoding structural aspects. Afterwards, we employ the Temporal Attentive Pooling (TAP) approach to summarize the segment-level features for the audio and video data. Finally, the multi-modal factorized bilinear pooling (MFB) strategy is applied to fuse the multi-modal features effectively. An extensive experimental study reveals that the proposed method outperforms state-of-the-art approaches.
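The final fusion step named in the abstract, multi-modal factorized bilinear pooling (MFB), can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the projection matrices `U` and `V` below are random stand-ins for parameters that would be learned end-to-end, the function name `mfb_fuse` and all dimensions (`k`, `o`, the 128-d audio and 256-d video features) are illustrative assumptions, and only the general MFB recipe is shown (project each modality, take the element-wise product, sum-pool over groups of k factors, then power- and L2-normalize).

```python
import numpy as np

def mfb_fuse(x, y, k=5, o=16, seed=0):
    """Sketch of multi-modal factorized bilinear pooling (MFB).

    Fuses an audio feature x (dim m) and a video feature y (dim n)
    into an o-dimensional joint representation using k factors per
    output dimension. U and V are random stand-ins for learned
    projection matrices.
    """
    rng = np.random.default_rng(seed)
    m, n = x.shape[0], y.shape[0]
    U = rng.standard_normal((m, k * o))  # learned in a real model
    V = rng.standard_normal((n, k * o))  # learned in a real model
    # Element-wise product of the two projections, then sum-pool
    # each consecutive group of k factors down to one output dim.
    z = ((x @ U) * (y @ V)).reshape(o, k).sum(axis=1)
    # Power normalization followed by L2 normalization.
    z = np.sign(z) * np.sqrt(np.abs(z))
    return z / (np.linalg.norm(z) + 1e-12)

# Hypothetical segment-level features from the two modalities.
audio_feat = np.random.default_rng(1).standard_normal(128)
video_feat = np.random.default_rng(2).standard_normal(256)
fused = mfb_fuse(audio_feat, video_feat)
print(fused.shape)  # (16,)
```

The factorization is the point of the design: a full bilinear pooling of a 128-d and a 256-d feature would produce a 32,768-d outer product, whereas MFB approximates it with two low-rank projections and a fixed-size output.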
ISSN
1949-3045
Language
eng
URI
https://dspace.ajou.ac.kr/dev/handle/2018.oak/32748
DOI
https://doi.org/10.1109/taffc.2022.3179478
Fulltext

Type
Article

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Sohn, Kyung-Ah (손경아)
Department of Software and Computer Engineering

File Download

  • There are no files associated with this item.