PT Unknown
AU Alex Gomez-Villa; Bartlomiej Twardowski; Lu Yu; Andrew Bagdanov; Joost Van de Weijer
TI Continually Learning Self-Supervised Representations With Projected Functional Regularization
BT CVPR 2022 Workshop on Continual Learning (CLVision, 3rd Edition)
PY 2022
BP 3866
EP 3876
DI 10.1109/CVPRW56347.2022.00432
DE Computer vision; Conferences; Self-supervised learning; Image representation; Pattern recognition
AB Recent self-supervised learning methods are able to learn high-quality image representations and are closing the gap with supervised approaches. However, these methods are unable to acquire new knowledge incrementally – they are, in fact, mostly used only as a pre-training phase over IID data. In this work we investigate self-supervised methods in continual learning regimes without any replay mechanism. We show that naive functional regularization, also known as feature distillation, leads to lower plasticity and limits continual learning performance. Instead, we propose Projected Functional Regularization, in which a separate temporal projection network ensures that the newly learned feature space preserves information of the previous one while at the same time allowing for the learning of new features. This prevents forgetting while maintaining the plasticity of the learner. Comparison with other incremental learning approaches applied to self-supervision demonstrates that our method obtains competitive performance in different scenarios and on multiple datasets.
ER
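
The abstract describes Projected Functional Regularization only at a high level. Below is a minimal PyTorch-style sketch of the idea as stated there: rather than directly distilling new features onto old ones, a separate temporal projection network maps the current features into the previous feature space before the regularization is applied. All concrete details (the projector architecture, the negative-cosine loss, the function names `TemporalProjector`, `pfr_loss`, `train_step`, and the weight `lambda_pfr`) are assumptions for illustration, not the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TemporalProjector(nn.Module):
    """Small MLP mapping current features toward the previous feature space.

    Hypothetical architecture: the abstract only states that a separate
    temporal projection network links the new and old representations.
    """

    def __init__(self, dim: int, hidden: int = 2048):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden),
            nn.BatchNorm1d(hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, dim),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)


def pfr_loss(z_new: torch.Tensor, z_old: torch.Tensor,
             projector: TemporalProjector) -> torch.Tensor:
    """Projected functional regularization term (assumed negative cosine).

    Plain feature distillation would match z_new to z_old directly, which
    the paper argues reduces plasticity. Here only the *projection* of the
    new features must predict the old ones, so the new feature space is
    free to grow beyond the previous one.
    """
    p = projector(z_new)
    # Stop-gradient on the old features: they come from a frozen encoder.
    return -F.cosine_similarity(p, z_old.detach(), dim=-1).mean()


def train_step(encoder, frozen_encoder, projector, ssl_loss, x,
               optimizer, lambda_pfr: float = 1.0) -> float:
    """One training step on the current task.

    `encoder`, `frozen_encoder` (a snapshot from before this task), and
    `ssl_loss` (any self-supervised objective) are placeholders for
    whatever setup is used; `lambda_pfr` is an assumed trade-off weight.
    """
    z_new = encoder(x)
    with torch.no_grad():
        z_old = frozen_encoder(x)
    loss = ssl_loss(z_new) + lambda_pfr * pfr_loss(z_new, z_old, projector)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In such a setup the frozen target would typically be refreshed at each task boundary (e.g. `frozen_encoder = copy.deepcopy(encoder).eval()`), so the regularizer always anchors the learner to the most recent previous representation without any replay buffer.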