Citation Export
DC Field | Value | Language |
--- | --- | --- |
dc.contributor.author | Lee, Jae woong | - |
dc.contributor.author | Choi, Minjin | - |
dc.contributor.author | Sael, Lee | - |
dc.contributor.author | Shim, Hyunjung | - |
dc.contributor.author | Lee, Jongwuk | - |
dc.date.issued | 2022-05-01 | - |
dc.identifier.uri | https://dspace.ajou.ac.kr/dev/handle/2018.oak/32653 | - |
dc.description.abstract | Knowledge distillation (KD) is a successful method for transferring knowledge from one model (the teacher) to another (the student). Despite KD's success in classification tasks, applying it to recommender models is challenging because of the sparsity of positive feedback, the ambiguity of missing feedback, and the ranking problem inherent in top-N recommendation. In this paper, we propose a new KD model for collaborative filtering, namely collaborative distillation (CD). Specifically, (1) we reformulate the loss function to deal with the ambiguity of missing feedback, (2) we exploit probabilistic rank-aware sampling for top-N recommendation, and (3) to train the proposed model effectively, we develop two training strategies for the student model, called teacher-guided and student-guided training, which adaptively select the most beneficial feedback from the teacher model. Furthermore, we extend our model with self-distillation, called born-again CD (BACD): teacher and student models of the same capacity are trained with the proposed distillation method. Experimental results demonstrate that CD outperforms the state-of-the-art method by 2.7–33.2% in hit rate (HR) and 2.7–29.9% in normalized discounted cumulative gain (NDCG). Moreover, BACD improves on the teacher model by 3.5–12.0% in HR and 4.9–13.3% in NDCG. | - |
dc.description.sponsorship | This work was supported by the National Research Foundation of Korea (NRF) (NRF-2018R1A5A1060031 and NRF-2021R1F1A1063843). Also, this work was supported by Institute of Information & communications Technology Planning & evaluation (IITP) funded by the Korea government (MSIT) (No. 2020-0-01821, ICT Creative Consilience Program). | - |
dc.language.iso | eng | - |
dc.publisher | Springer Science and Business Media Deutschland GmbH | - |
dc.subject.mesh | Classification tasks | - |
dc.subject.mesh | Data ambiguities | - |
dc.subject.mesh | Data sparsity | - |
dc.subject.mesh | Feedback problems | - |
dc.subject.mesh | Hit rate | - |
dc.subject.mesh | Knowledge distillation | - |
dc.subject.mesh | Ranking problems | - |
dc.subject.mesh | Student Modeling | - |
dc.subject.mesh | Teacher models | - |
dc.subject.mesh | Top-N recommendation | - |
dc.title | Knowledge distillation meets recommendation: collaborative distillation for top-N recommendation | - |
dc.type | Article | - |
dc.citation.endPage | 1348 | - |
dc.citation.startPage | 1323 | - |
dc.citation.title | Knowledge and Information Systems | - |
dc.citation.volume | 64 | - |
dc.identifier.bibliographicCitation | Knowledge and Information Systems, Vol.64, pp.1323-1348 | - |
dc.identifier.doi | 10.1007/s10115-022-01667-8 | - |
dc.identifier.scopusid | 2-s2.0-85128466505 | - |
dc.identifier.url | https://www.springer.com/journal/10115 | - |
dc.subject.keyword | Collaborative filtering | - |
dc.subject.keyword | Data ambiguity | - |
dc.subject.keyword | Data sparsity | - |
dc.subject.keyword | Knowledge distillation | - |
dc.subject.keyword | Top-N recommendation | - |
dc.description.isoa | false | - |
dc.subject.subarea | Software | - |
dc.subject.subarea | Information Systems | - |
dc.subject.subarea | Human-Computer Interaction | - |
dc.subject.subarea | Hardware and Architecture | - |
dc.subject.subarea | Artificial Intelligence | - |
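
Two mechanisms named in the abstract lend themselves to a compact illustration: probabilistic rank-aware sampling of unobserved items from the teacher's ranking, and a distillation loss on the teacher's soft predictions. The sketch below is a minimal, hypothetical rendering of those ideas in NumPy, not the paper's implementation; the exponential rank decay, the `temperature` parameter, and the names `rank_aware_sample` and `distillation_loss` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(seed=0)


def rank_aware_sample(teacher_scores, k, temperature=10.0):
    """Sample k unobserved items, favoring items the teacher ranks highly.

    Hypothetical sampler: selection probability decays exponentially with
    the teacher-assigned rank, so top-ranked items are drawn most often
    while lower-ranked items still receive occasional coverage.
    """
    ranks = np.argsort(np.argsort(-teacher_scores))  # 0 = teacher's top item
    probs = np.exp(-ranks / temperature)
    probs /= probs.sum()
    return rng.choice(len(teacher_scores), size=k, replace=False, p=probs)


def distillation_loss(student_scores, teacher_scores, sampled):
    """Binary cross-entropy between the student's predictions and the
    teacher's sigmoid-squashed scores (soft targets) on the sampled items."""
    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    s = sigmoid(student_scores[sampled])
    t = sigmoid(teacher_scores[sampled])
    eps = 1e-12  # guard against log(0)
    return -np.mean(t * np.log(s + eps) + (1.0 - t) * np.log(1.0 - s + eps))


# Toy example: one user's predicted scores over 100 unobserved items.
teacher = rng.normal(size=100)  # stand-in for a trained teacher model
student = rng.normal(size=100)  # stand-in for the student being trained
items = rank_aware_sample(teacher, k=10)
print("sampled items:", items)
print("KD loss on sample:", distillation_loss(student, teacher, items))
```

In the full method, such a distillation term would be combined with the student's ordinary collaborative-filtering loss on observed feedback, and the teacher-guided and student-guided strategies would decide which sampled feedback to learn from; none of that machinery is reproduced in this sketch.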
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.