%0 Conference Proceedings
%T Learning to Remove Rain in Traffic Surveillance by Using Synthetic Data
%A Chris Bahnsen
%A David Vazquez
%A Antonio Lopez
%A Thomas B. Moeslund
%B 14th International Conference on Computer Vision Theory and Applications
%D 2019
%F Chris Bahnsen2019
%O ADAS; 600.118
%O exported from refbase (http://158.109.8.37/show.php?record=3256), last updated on Mon, 07 Dec 2020 14:11:44 +0100
%X Rainfall is a problem in automated traffic surveillance. Rain streaks occlude road users and degrade overall visibility, which in turn decreases object detection performance. One way of alleviating this is to artificially remove the rain from the images. Doing so requires corresponding pairs of rainy and rain-free images. Such pairs are often produced by overlaying synthetic rain on top of rain-free images. However, this method fails to capture the fact that rain falls throughout the entire three-dimensional volume of the scene. To overcome this, we introduce training data from the SYNTHIA virtual world that models rain streaks in the entirety of a scene. We train a conditional Generative Adversarial Network for rain removal and apply it to traffic surveillance images from the SYNTHIA and AAU RainSnow datasets. To measure the applicability of the rain-removed images in a traffic surveillance context, we run the YOLOv2 object detection algorithm on the original and rain-removed frames. The results on SYNTHIA show an 8% increase in detection accuracy compared to the original rainy images. Interestingly, we find that high PSNR or SSIM scores do not imply good object detection performance.
%K Rain Removal
%K Traffic Surveillance
%K Image Denoising
%U http://www.scitepress.org/DigitalLibrary/Link.aspx?doi=10.5220/0007361301230130
%U http://dx.doi.org/10.5220/0007361301230130
%P 123-130