Citation Export
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Yoo, Jaemin | - |
dc.contributor.author | Sael, Lee | - |
dc.date.issued | 2021-01-01 | - |
dc.identifier.uri | https://aurora.ajou.ac.kr/handle/2018.oak/36661 | - |
dc.identifier.uri | https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85111090253&origin=inward | - |
dc.description.abstract | How can we accurately classify feature-based data such that the learned model and results are more interpretable? Interpretability is beneficial from various perspectives, such as checking for compliance with existing knowledge and gaining insights from decision processes. To gain in both accuracy and interpretability, we propose a novel tree-structured classifier called Gaussian Soft Decision Trees (GSDT). GSDT is characterized by multi-branched structures, Gaussian mixture-based decisions, and a hinge loss with path regularization. These three key features let it learn short trees in which the weight vector of each node is a prototype for the data mapped to that node. We show that GSDT achieves the best average accuracy compared to eight baselines. We also perform an ablation study of various structures of the covariance matrix in the Gaussian mixture nodes of GSDT and demonstrate the interpretability of GSDT in a case study of classification on a breast cancer dataset. | - |
dc.description.sponsorship | Acknowledgments. Publication of this article has been funded by the Basic Science Research Program through the National Research Foundation of Korea (2018R1A1A3A0407953, 2018R1A5A1060031). | - |
dc.language.iso | eng | - |
dc.publisher | Springer Science and Business Media Deutschland GmbH | - |
dc.subject.mesh | Branched structures | - |
dc.subject.mesh | Decision process | - |
dc.subject.mesh | Feature-based | - |
dc.subject.mesh | Feature-based classification | - |
dc.subject.mesh | Gaining insights | - |
dc.subject.mesh | Gaussian mixtures | - |
dc.subject.mesh | Interpretability | - |
dc.subject.mesh | Tree-structured | - |
dc.title | Gaussian Soft Decision Trees for Interpretable Feature-Based Classification | - |
dc.type | Conference | - |
dc.citation.conferenceDate | 2021.5.11. ~ 2021.5.14. | - |
dc.citation.conferenceName | 25th Pacific-Asia Conference on Knowledge Discovery and Data Mining, PAKDD 2021 | - |
dc.citation.edition | Advances in Knowledge Discovery and Data Mining - 25th Pacific-Asia Conference, PAKDD 2021, Proceedings | - |
dc.citation.endPage | 155 | - |
dc.citation.startPage | 143 | - |
dc.citation.title | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | - |
dc.citation.volume | 12713 LNAI | - |
dc.identifier.bibliographicCitation | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Vol.12713 LNAI, pp.143-155 | - |
dc.identifier.doi | 10.1007/978-3-030-75765-6_12 | - |
dc.identifier.scopusid | 2-s2.0-85111090253 | - |
dc.identifier.url | https://www.springer.com/series/558 | - |
dc.subject.keyword | Feature-based classification | - |
dc.subject.keyword | Gaussian mixtures | - |
dc.subject.keyword | Gaussian Soft Decision Trees | - |
dc.subject.keyword | Interpretable machine learning | - |
dc.subject.keyword | Tabular data | - |
dc.type.other | Conference Paper | - |
dc.description.isoa | false | - |
dc.subject.subarea | Theoretical Computer Science | - |
dc.subject.subarea | Computer Science (all) | - |
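
The abstract above describes GSDT's key mechanism: multi-branched nodes that make Gaussian mixture-based soft decisions. The snippet below is a minimal, hypothetical sketch (not the authors' implementation) of how a single such node could softly route samples by Gaussian-mixture responsibilities; the branch means, covariances, and priors are made-up values for illustration, and the hinge loss with path regularization that GSDT uses for training is omitted.

```python
# Hypothetical sketch of one multi-branched "soft decision" node, assuming
# Gaussian-mixture routing as summarized in the abstract. Not the authors' code:
# GSDT additionally learns these parameters with a hinge loss and path
# regularization, which this sketch does not cover.
import numpy as np
from scipy.stats import multivariate_normal


class GaussianSoftNode:
    def __init__(self, means, covariances, priors):
        # means: (k, d) branch prototypes; covariances: k matrices of shape (d, d);
        # priors: (k,) mixture weights. All values here are illustrative assumptions.
        self.means = np.asarray(means)
        self.covariances = [np.asarray(c) for c in covariances]
        self.priors = np.asarray(priors)

    def routing_probabilities(self, X):
        # Soft assignment of each sample to each of the k branches:
        # p(branch j | x) is proportional to prior_j * N(x; mean_j, cov_j).
        likelihoods = np.column_stack([
            self.priors[j] * multivariate_normal.pdf(
                X, mean=self.means[j], cov=self.covariances[j]
            )
            for j in range(len(self.priors))
        ])
        return likelihoods / likelihoods.sum(axis=1, keepdims=True)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(5, 2))  # five 2-D samples
    node = GaussianSoftNode(
        means=[[0.0, 0.0], [2.0, 2.0], [-2.0, 1.0]],           # three branches: a multi-branched node
        covariances=[np.eye(2), 0.5 * np.eye(2), np.eye(2)],    # per-branch covariance (the ablation varies its structure)
        priors=[0.4, 0.3, 0.3],
    )
    print(node.routing_probabilities(X))  # each row sums to 1: soft routing weights over branches
```

Because the routing is soft, every sample contributes to every branch with some weight, which is what lets a tree of such nodes stay shallow while each node's parameters act as a prototype for the data routed to it.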