Citation Export
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Park, Wongi | - |
dc.contributor.author | Park, Inhyuk | - |
dc.contributor.author | Kim, Sungeun | - |
dc.contributor.author | Ryu, Jongbin | - |
dc.date.issued | 2023-01-01 | - |
dc.identifier.uri | https://aurora.ajou.ac.kr/handle/2018.oak/36950 | - |
dc.identifier.uri | https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85182937924&origin=inward | - |
dc.description.abstract | In real medical data, training samples typically follow long-tailed distributions and carry multiple labels. The class distribution of medical data is long-tailed because the incidence of different diseases varies widely, and at the same time it is common for images taken from symptomatic patients to exhibit multiple diseases. In this paper, we address these two issues jointly by proposing a robust asymmetric loss based on a polynomial function. Because the loss tackles long-tailed and multi-label classification simultaneously, its design is complex and involves a large number of hyper-parameters. Although this allows a model to be finely tuned, optimizing all hyper-parameters at once is difficult and risks overfitting. We therefore regularize the loss function using the Hill loss approach, which makes it less sensitive to the numerous hyper-parameters and thus reduces the risk of overfitting the model. As a result, the proposed loss is a generic method that can be applied to most medical image classification tasks without making training more time-consuming. We demonstrate that the proposed robust asymmetric loss performs favorably on long-tailed multi-label medical image classification as well as on various long-tailed single-label datasets. Notably, our method achieves a Top-5 result on the CXR-LT dataset of the ICCV CVAMD 2023 competition. We open-source our implementation of the robust asymmetric loss in a public repository: https://github.com/kalelpark/RALoss. | - |
dc.language.iso | eng | - |
dc.publisher | Institute of Electrical and Electronics Engineers Inc. | - |
dc.subject.mesh | Asymmetric loss | - |
dc.subject.mesh | Hyper-parameter | - |
dc.subject.mesh | Long tailed learning | - |
dc.subject.mesh | Loss functions | - |
dc.subject.mesh | Medical data | - |
dc.subject.mesh | Medical image classification | - |
dc.subject.mesh | Multi-label classifications | - |
dc.subject.mesh | Multi-labels | - |
dc.subject.mesh | Overfitting | - |
dc.subject.mesh | Training sample | - |
dc.title | Robust Asymmetric Loss for Multi-Label Long-Tailed Learning | - |
dc.type | Conference | - |
dc.citation.conferenceDate | 2023.10.2. ~ 2023.10.6. | - |
dc.citation.conferenceName | 2023 IEEE/CVF International Conference on Computer Vision Workshops, ICCVW 2023 | - |
dc.citation.edition | Proceedings - 2023 IEEE/CVF International Conference on Computer Vision Workshops, ICCVW 2023 | - |
dc.citation.endPage | 2712 | - |
dc.citation.startPage | 2703 | - |
dc.citation.title | Proceedings - 2023 IEEE/CVF International Conference on Computer Vision Workshops, ICCVW 2023 | - |
dc.identifier.bibliographicCitation | Proceedings - 2023 IEEE/CVF International Conference on Computer Vision Workshops, ICCVW 2023, pp.2703-2712 | - |
dc.identifier.doi | 10.1109/iccvw60793.2023.00286 | - |
dc.identifier.scopusid | 2-s2.0-85182937924 | - |
dc.identifier.url | http://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=10350357 | - |
dc.subject.keyword | Asymmetric Loss | - |
dc.subject.keyword | Long tailed Learning | - |
dc.subject.keyword | Multi label Classification | - |
dc.type.other | Conference Paper | - |
dc.description.isoa | true | - |
dc.subject.subarea | Artificial Intelligence | - |
dc.subject.subarea | Computer Science Applications | - |
dc.subject.subarea | Computer Vision and Pattern Recognition | - |
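The abstract above describes an asymmetric loss for long-tailed, multi-label classification, regularized with the Hill loss approach. The authors' actual implementation lives in the linked repository (https://github.com/kalelpark/RALoss); the sketch below is only a minimal illustration of the general asymmetric-loss idea for multi-label targets (in the spirit of standard asymmetric focal losses), not the paper's RALoss. The class name, hyper-parameter defaults, and probability-margin mechanism are illustrative assumptions.

```python
# Illustrative sketch only: a generic asymmetric focal-style loss for
# multi-label classification. NOT the RALoss from the paper; see the
# authors' repository (https://github.com/kalelpark/RALoss) for that.
import torch
import torch.nn as nn


class AsymmetricMultiLabelLoss(nn.Module):
    """Binary cross-entropy with separate focusing exponents for positive
    and negative targets, plus a probability margin that down-weights the
    easy negatives that dominate long-tailed, multi-label data."""

    def __init__(self, gamma_pos=0.0, gamma_neg=4.0, margin=0.05, eps=1e-8):
        super().__init__()
        self.gamma_pos = gamma_pos  # focusing exponent for positive labels
        self.gamma_neg = gamma_neg  # stronger focusing for negative labels
        self.margin = margin        # probability shift applied to negatives
        self.eps = eps              # numerical floor inside the logarithms

    def forward(self, logits, targets):
        # logits, targets: (batch, num_classes); targets are 0/1 per label.
        p = torch.sigmoid(logits)
        p_neg = (p - self.margin).clamp(min=0)  # shifted negative probability

        loss_pos = targets * (1 - p).pow(self.gamma_pos) \
            * torch.log(p.clamp(min=self.eps))
        loss_neg = (1 - targets) * p_neg.pow(self.gamma_neg) \
            * torch.log((1 - p_neg).clamp(min=self.eps))
        return -(loss_pos + loss_neg).mean()


# Usage: five samples with three disease labels (multi-label targets).
if __name__ == "__main__":
    criterion = AsymmetricMultiLabelLoss()
    logits = torch.randn(5, 3)
    targets = torch.randint(0, 2, (5, 3)).float()
    print(criterion(logits, targets).item())
```

The asymmetry (a larger `gamma_neg` than `gamma_pos`, plus the margin on negatives) suppresses gradients from abundant easy negatives so that rare positive labels, the tail of the distribution, retain influence during training; the paper's contribution is a polynomial formulation of this idea with Hill-loss regularization to reduce hyper-parameter sensitivity.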