| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 이두형 | - |
| dc.contributor.author | 이한동 | - |
| dc.date.issued | 2024-08 | - |
| dc.identifier.other | 34160 | - |
| dc.identifier.uri | https://aurora.ajou.ac.kr/handle/2018.oak/39297 | - |
| dc.description | Thesis (Doctoral)--Department of Medicine, 2024. 8 | - |
| dc.description.abstract | Study Design. Retrospective study. Objective. The aim of this study is to investigate the utility of a deep learning model for diagnosing and classifying traumatic thoracolumbar fractures in computed tomography (CT) images. Summary of Background Data. In patients with severe trauma, CT scans have recently been widely used as the first choice for detecting spinal fractures. Although CT scans have high diagnostic accuracy, fractures may occasionally be missed. Recently, deep learning has been utilized in various medical imaging fields. Methods. The CT images of 480 patients (3,695 vertebrae) who visited a level one trauma center and had thoracolumbar fractures were enrolled and analyzed retrospectively. The diagnoses on these images were confirmed by two experienced musculoskeletal radiologists and one experienced spine surgeon with magnetic resonance imaging. Fractures were classified and labeled as vertebral body fracture, transverse process fracture, and posterior element fracture, and all fracture lines were manually segmented. Deep learning networks were used for diagnosis (425 cases for training and 55 cases for testing). The area under the receiver operating characteristic curve (AUROC) was calculated to assess diagnostic accuracy. Results. The deep learning model's AUROC for spinal fracture was 0.9357. The diagnostic accuracy was highest for transverse process fractures, with AUROC values of 0.9882 (left) and 0.9751 (right). The accuracy for posterior element fractures was also high, with an AUROC of 0.9494. Although the diagnostic accuracy for vertebral body fractures was relatively lower, it remained high, with an AUROC of 0.9270. Conclusion. In this study, we confirmed that the deep learning model demonstrated high accuracy in diagnosing and classifying traumatic thoracolumbar fractures. In the current model, the diagnostic accuracy using CT scans was highest for transverse process fractures, followed by posterior element fractures, and then vertebral body fractures. This can potentially aid spine specialists, radiologists, and severe trauma experts. Further validation is needed to determine its effectiveness in actual clinical settings. Keywords: Trauma, vertebral fracture, deep learning, fracture detection, fracture classification | - |
| dc.description.tableofcontents | Ⅰ. Introduction 1<br>Ⅱ. Material and methods 3<br> A. Study subjects 3<br> B. Preparation and Annotation of Ground Truth Labels 4<br> C. Deep Learning-Based Algorithm 5<br> D. Statistical analyses 6<br>Ⅲ. Results 7<br> A. Data Characteristics 7<br> B. Performance of the models 8<br> 1. Per-vertebra level 8<br> 2. Per-segment level 9<br>Ⅳ. Discussion 12<br>Ⅴ. Conclusions 16<br>Ⅵ. References 17 | - |
| dc.language.iso | eng | - |
| dc.publisher | The Graduate School, Ajou University | - |
| dc.rights | Theses of Ajou University are protected by copyright. | - |
| dc.title | Deep Neural Network for Automated Diagnosis and Classification of Traumatic Thoracolumbar Fractures Using CT Images | - |
| dc.type | Thesis | - |
| dc.contributor.affiliation | Graduate School, Ajou University | - |
| dc.contributor.alternativeName | HAN DONG LEE | - |
| dc.contributor.department | Department of Medicine, Graduate School | - |
| dc.date.awarded | 2024-08 | - |
| dc.description.degree | Doctor | - |
| dc.identifier.url | https://dcoll.ajou.ac.kr/dcollection/common/orgView/000000034160 | - |
| dc.subject.keyword | Trauma | - |
| dc.subject.keyword | deep learning | - |
| dc.subject.keyword | fracture classification | - |
| dc.subject.keyword | fracture detection | - |
| dc.subject.keyword | vertebral fracture | - |
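
The abstract reports per-class AUROC values for fracture detection. As a minimal sketch of how such per-class AUROC figures can be computed (this is not the thesis's actual pipeline; the class names, counts, labels, and scores below are synthetic placeholders), scikit-learn's `roc_auc_score` can be applied to per-vertebra ground-truth labels and model probabilities for each fracture class:

```python
# Hedged illustration only: synthetic per-vertebra labels and scores,
# not data or code from the thesis.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Fracture classes named in the abstract (left/right split for the
# transverse process mirrors the separately reported AUROC values).
classes = [
    "vertebral_body",
    "transverse_process_left",
    "transverse_process_right",
    "posterior_element",
]
n_vertebrae = 500  # placeholder count, not the study's 3,695 vertebrae

for cls in classes:
    # Binary ground truth: 1 = fracture present in this vertebra.
    y_true = rng.integers(0, 2, size=n_vertebrae)
    # Simulated model probabilities, loosely correlated with the labels.
    y_score = np.clip(y_true * 0.7 + rng.random(n_vertebrae) * 0.5, 0.0, 1.0)
    # AUROC summarizes detection performance across all thresholds.
    print(f"{cls}: AUROC = {roc_auc_score(y_true, y_score):.4f}")
```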