PT Unknown
AU Javier Marin
   David Vazquez
   David Geronimo
   Antonio Lopez
TI Learning Appearance in Virtual Scenarios for Pedestrian Detection
BT 23rd IEEE Conference on Computer Vision and Pattern Recognition
PY 2010
BP 137
EP 144
DI 10.1109/CVPR.2010.5540218
LA English
DE Pedestrian Detection; Domain Adaptation
AB Detecting pedestrians in images is a key functionality for avoiding vehicle-to-pedestrian collisions. The most promising detectors rely on appearance-based pedestrian classifiers trained with labelled samples. This paper addresses the following question: can a pedestrian appearance model learnt in virtual scenarios work successfully for pedestrian detection in real images? (Fig. 1). Our experiments suggest a positive answer, which is a new and relevant conclusion for research in pedestrian detection. More specifically, we record training sequences in virtual scenarios and then learn appearance-based pedestrian classifiers using HOG and linear SVM. We test these classifiers on a publicly available dataset provided by Daimler AG for pedestrian detection benchmarking, which contains real-world images acquired from a moving car. The obtained result is compared with that of a classifier learnt using samples from real images. The comparison reveals that, although the virtual samples were not specially selected, both virtual- and real-based training give rise to classifiers of similar performance.
ER