Continuous transfer of neural network representational similarity for incremental learning
Author(s): Tian, SS (Tian, Songsong); Li, WJ (Li, Weijun); Ning, X (Ning, Xin); Ran, H (Ran, Hang); Qin, H (Qin, Hong); Tiwari, P (Tiwari, Prayag)
Source: NEUROCOMPUTING Volume: 545 Article Number: 126300 DOI: 10.1016/j.neucom.2023.126300 Early Access Date: MAY 2023 Published: AUG 7 2023
Abstract: The incremental learning paradigm in machine learning has consistently been a focus of academic research. It is similar to the way in which biological systems learn, and it reduces energy consumption by avoiding excessive retraining. Existing studies utilize the powerful feature extraction capabilities of pre-trained models to address incremental learning, but the feature knowledge inside the neural network remains underutilized. To address this issue, this paper proposes a novel method called Pre-trained Model Knowledge Distillation (PMKD), which combines knowledge distillation of neural network representations with replay. This paper designs a loss function based on centered kernel alignment to transfer neural network representation knowledge from the pre-trained model to the incremental model layer by layer. Additionally, the use of a memory buffer for Dark Experience Replay helps the model better retain past knowledge. Experiments show that PMKD achieved superior performance across various datasets and buffer sizes. Compared with other methods, it reaches the best class-incremental learning accuracy. The open-source code is published at https://github.com/TianSongS/PMKD-IL. (c) 2023 The Author(s). Published by Elsevier B.V. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
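The abstract describes a layer-wise distillation loss built on centered kernel alignment (CKA) between the pre-trained (teacher) and incremental (student) models. The paper's exact formulation is not given in this record; the sketch below is a minimal, hypothetical illustration of a linear-CKA-based layer-wise term, with all function and variable names chosen for illustration rather than taken from the PMKD code.

```python
import torch


def linear_cka(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Linear centered kernel alignment between two batches of features.

    x: (n, d1) activations from one layer of the student network
    y: (n, d2) activations from the matching layer of the teacher network
    """
    # Center each feature dimension over the batch.
    x = x - x.mean(dim=0, keepdim=True)
    y = y - y.mean(dim=0, keepdim=True)
    # CKA = ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    numerator = (y.t() @ x).pow(2).sum()
    denominator = torch.norm(x.t() @ x) * torch.norm(y.t() @ y)
    return numerator / (denominator + 1e-12)


def cka_distillation_loss(student_feats, teacher_feats):
    """Sum of (1 - CKA) over matched layers, encouraging the incremental
    model's representations to stay similar to the pre-trained model's,
    layer by layer (hypothetical sketch of the idea in the abstract)."""
    loss = torch.zeros(())
    for s, t in zip(student_feats, teacher_feats):
        # Flatten any spatial dimensions so each sample is a feature vector.
        loss = loss + (1.0 - linear_cka(s.flatten(1), t.flatten(1)))
    return loss
```

In a training loop, this term would typically be added to the classification loss and to a Dark Experience Replay term computed on samples drawn from the memory buffer; the weighting between the terms is a design choice of the paper not specified in the abstract.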
Accession Number: WOS:001001824300001
ISSN: 0925-2312
eISSN: 1872-8286