TY - CONF
AU - Kai Wang
AU - Xialei Liu
AU - Andrew Bagdanov
AU - Luis Herranz
AU - Shangling Jui
AU - Joost Van de Weijer
A2 - CVPRW
PY - 2022//
TI - Incremental Meta-Learning via Episodic Replay Distillation for Few-Shot Image Recognition
BT - CVPR 2022 Workshop on Continual Learning (CLVision, 3rd Edition)
SP - 3728
EP - 3738
KW - Training
KW - Computer vision
KW - Image recognition
KW - Upper bound
KW - Conferences
KW - Pattern recognition
KW - Task analysis
N2 - In this paper we consider the problem of incremental meta-learning, in which classes are presented incrementally in discrete tasks. We propose Episodic Replay Distillation (ERD), which mixes classes from the current task with exemplars from previous tasks when sampling episodes for meta-learning. To allow training to benefit from as large a variety of classes as possible, which leads to more generalizable feature representations, we propose the cross-task meta loss. Furthermore, we propose episodic replay distillation, which also exploits exemplars for improved knowledge distillation. Experiments on four datasets demonstrate that ERD surpasses the state-of-the-art. In particular, in the more challenging one-shot, long-task-sequence scenarios, we reduce the gap between incremental meta-learning and the joint-training upper bound from 3.5% / 10.1% / 13.4% / 11.7% with the current state-of-the-art to 2.6% / 2.9% / 5.0% / 0.2% with our method on Tiered-ImageNet / Mini-ImageNet / CIFAR100 / CUB, respectively.
L1 - http://158.109.8.37/files/WLB2022.pdf
UR - http://dx.doi.org/10.1109/CVPRW56347.2022.00417
N1 - LAMP; 600.147
ID - Kai Wang2022
ER -