Ajou University repository

MildInt: Deep Learning-Based Multimodal Longitudinal Data Integration Framework
  • Lee, Garam ;
  • Kang, Byungkon ;
  • Nho, Kwangsik ;
  • Sohn, Kyung Ah ;
  • Kim, Dokyoon
Citations (SCOPUS)
48

Publication Year
2019-01-01
Publisher
Frontiers Media S.A.
Citation
Frontiers in Genetics, Vol.10
Keyword
Alzheimer's disease; Data integration; Gated recurrent unit; Multimodal deep learning; Python package
All Science Classification Codes (ASJC)
Molecular Medicine; Genetics; Genetics (clinical)
Abstract
As large amounts of heterogeneous biomedical data become available, numerous methods for integrating such datasets have been developed to extract complementary knowledge from multiple domains. Recently, deep learning approaches have shown promising results across a variety of research areas. However, applying deep learning requires expertise in constructing a deep architecture that can handle multimodal longitudinal data. Thus, in this paper, a deep learning-based Python package for data integration is developed. The package, the multimodal longitudinal data integration framework (MildInt), provides a preconstructed deep learning architecture for classification tasks. MildInt comprises two learning phases: learning a feature representation from each modality of data, and training a classifier for the final decision. Adopting a deep architecture in the first phase yields feature representations that are more task-relevant than those of a linear model. In the second phase, a linear classifier is used for detecting and investigating biomarkers from the multimodal data. Thus, by combining the linear model and the deep learning model, higher accuracy and better interpretability can be achieved. We validated the performance of our package on both simulated and real data. For the real data, as a pilot study, we used clinical and multimodal neuroimaging datasets in Alzheimer's disease to predict disease progression. MildInt can integrate multiple forms of numerical data, including time-series and non-time-series data, to extract complementary features from a multimodal dataset.
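The two-phase design described above can be illustrated with a minimal NumPy sketch: phase 1 encodes each longitudinal modality with a gated recurrent unit (GRU) into a fixed-length vector, and phase 2 feeds the concatenated vectors (plus any static covariates) to a linear, logistic-style classifier. All names, dimensions, and random weights here are illustrative assumptions, not MildInt's actual API, and training is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_gru(d, k):
    """Random GRU parameters mapping d-dim inputs to a k-dim hidden state."""
    p = lambda *s: rng.standard_normal(s) * 0.1
    return {g: (p(k, d), p(k, k), np.zeros(k)) for g in ("z", "r", "h")}

def gru_encode(params, X):
    """Phase 1: summarize a time series X of shape (T, d) as the final GRU state."""
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    h = np.zeros(params["z"][2].shape[0])
    for x in X:
        Wz, Uz, bz = params["z"]
        Wr, Ur, br = params["r"]
        Wh, Uh, bh = params["h"]
        z = sig(Wz @ x + Uz @ h + bz)                 # update gate
        r = sig(Wr @ x + Ur @ h + br)                 # reset gate
        h_cand = np.tanh(Wh @ x + Uh @ (r * h) + bh)  # candidate state
        h = (1 - z) * h + z * h_cand
    return h

# One GRU per longitudinal modality (visit counts and dims are made up).
cog = rng.standard_normal((5, 4))   # e.g. 5 visits, 4 cognitive scores
mri = rng.standard_normal((3, 6))   # e.g. 3 scans, 6 imaging features
demo = rng.standard_normal(2)       # static (non-time-series) covariates

gru_cog, gru_mri = make_gru(4, 8), make_gru(6, 8)
features = np.concatenate([gru_encode(gru_cog, cog),
                           gru_encode(gru_mri, mri),
                           demo])   # 8 + 8 + 2 = 18 features

# Phase 2: a linear classifier on the concatenated representation;
# its weights remain directly interpretable per feature.
w, b = rng.standard_normal(features.shape[0]) * 0.1, 0.0
prob = 1.0 / (1.0 + np.exp(-(w @ features + b)))
print(features.shape, float(prob))
```

The point of the split is visible in the last two lines: the nonlinear GRUs absorb the temporal structure, while the final decision stays a single weight vector that can be inspected for biomarker relevance.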
ISSN
1664-8021
Language
eng
URI
https://dspace.ajou.ac.kr/dev/handle/2018.oak/30810
DOI
https://doi.org/10.3389/fgene.2019.00617
Type
Article
Funding
Support for this research was provided by NLM R01 LM012535, NIA R03 AG054936, and the Pennsylvania Department of Health (#SAP 4100070267). The department specifically disclaims responsibility for any analyses, interpretations, or conclusions. This work was also supported by a National Research Foundation of Korea grant funded by the Korea government (MSIT) (no. NRF-2019R1A2C1006608).

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Sohn, Kyung-Ah (손경아)
Department of Software and Computer Engineering

File Download

  • There are no files associated with this item.