Citation Export
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Kim, Seung Min | - |
dc.contributor.author | Yang, Ji Seung | - |
dc.contributor.author | Han, Jae Woong | - |
dc.contributor.author | Koo, Hyung Il | - |
dc.contributor.author | Roh, Tae Hoon | - |
dc.contributor.author | Yoon, Soo Han | - |
dc.date.issued | 2024-12-01 | - |
dc.identifier.issn | 2045-2322 | - |
dc.identifier.uri | https://dspace.ajou.ac.kr/dev/handle/2018.oak/34584 | - |
dc.description.abstract | Early and precise diagnosis of craniosynostosis (CSO), which involves premature fusion of cranial sutures in infants, is crucial for effective treatment. Although computed tomography offers detailed imaging, its high radiation exposure poses risks, especially to children. Therefore, we propose a deep-learning model for CSO and suture-line classification using 2D cranial X-rays that minimises radiation-exposure risks and offers reliable diagnoses. We used data comprising 1,047 normal and 277 CSO cases from 2006 to 2023. Our approach integrates X-ray-marker removal, head-pose standardisation, skull-cropping, and fine-tuning modules for CSO and suture-line classification using convolutional neural networks (CNNs). It enhances the diagnostic accuracy and efficiency of identifying CSO from X-ray images, offering a promising alternative to traditional methods. Four CNN backbones exhibited robust performance, with F1-scores exceeding 0.96 and sensitivity and specificity exceeding 0.9, demonstrating their potential for clinical application. Additionally, preprocessing strategies further enhanced the accuracy, yielding the highest F1-scores, precision, and specificity. A qualitative analysis using gradient-weighted class activation mapping illustrated the focal points of the models. Furthermore, the suture-line classification model distinguishes five suture lines with an accuracy of > 0.9. Thus, the proposed approach can significantly reduce the time and labour required for CSO diagnosis, streamlining its management in clinical settings. | - |
dc.description.sponsorship | This research was supported by a grant from the Korea Health Technology R&D Project through the Korea Health Industry Development Institute (KHIDI) funded by the Ministry of Health & Welfare, Republic of Korea (Grant no. HR22C1734) and a National Research Foundation of Korea (NRF) grant funded by the Korean Government (MSIT) (Grant no. RS-2023-00253964). This research was supported in part by the Ministry of Science and ICT (MSIT), Korea, under the Information Technology Research Center (ITRC) support program (IITP-2024-2020-0-01461) supervised by the Institute for Information & Communications Technology Planning & Evaluation (IITP). | - |
dc.language.iso | eng | - |
dc.publisher | Nature Research | - |
dc.subject.mesh | Cranial Sutures | - |
dc.subject.mesh | Craniosynostoses | - |
dc.subject.mesh | Deep Learning | - |
dc.subject.mesh | Female | - |
dc.subject.mesh | Humans | - |
dc.subject.mesh | Image Processing, Computer-Assisted | - |
dc.subject.mesh | Infant | - |
dc.subject.mesh | Male | - |
dc.subject.mesh | Neural Networks, Computer | - |
dc.subject.mesh | Skull | - |
dc.subject.mesh | X-Rays | - |
dc.title | Convolutional neural network-based classification of craniosynostosis and suture lines from multi-view cranial X-rays | - |
dc.type | Article | - |
dc.citation.title | Scientific Reports | - |
dc.citation.volume | 14 | - |
dc.identifier.bibliographicCitation | Scientific Reports, Vol.14 | - |
dc.identifier.doi | 10.1038/s41598-024-77550-z | - |
dc.identifier.pmid | 39496759 | - |
dc.identifier.scopusid | 2-s2.0-85208516482 | - |
dc.identifier.url | https://www.nature.com/srep/ | - |
dc.subject.keyword | Convolutional neural network | - |
dc.subject.keyword | Craniosynostosis | - |
dc.subject.keyword | Deep learning | - |
dc.subject.keyword | Skull X-ray | - |
dc.subject.keyword | Suture line | - |
dc.subject.keyword | Transfer learning | - |
dc.description.isoa | true | - |
dc.subject.subarea | Multidisciplinary | - |
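
The abstract above describes a transfer-learning pipeline in which preprocessed cranial X-rays (after X-ray-marker removal, head-pose standardisation, and skull cropping) are passed to fine-tuned CNN backbones for binary normal-vs-CSO classification. The sketch below is a minimal illustration of that fine-tuning step only; the PyTorch/torchvision framework, the ResNet-50 backbone, and all hyperparameters are assumptions, since this record does not name the paper's actual backbones or settings.

```python
# Minimal transfer-learning sketch for binary CSO classification.
# Assumptions: PyTorch/torchvision, ResNet-50 backbone, illustrative hyperparameters.
import torch
import torch.nn as nn
from torchvision import models

# ImageNet-pretrained backbone (illustrative; the abstract mentions four CNN
# backbones without naming them in this record).
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 2)  # binary head: normal vs. CSO

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# Stand-in batch of preprocessed skull crops; marker removal, pose
# standardisation, and cropping would happen upstream, per the abstract.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))

model.train()
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.4f}")
```

The same head could in principle be widened to five outputs to mirror the suture-line classifier, and gradient-weighted class activation mapping would then be applied to the trained model for the qualitative analysis mentioned in the abstract; those details are not specified in this record.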