%0 Conference Proceedings
%T MVMO: A Multi-Object Dataset for Wide Baseline Multi-View Semantic Segmentation
%A Aitor Alvarez-Gila
%A Joost van de Weijer
%A Yaxing Wang
%A Estibaliz Garrote
%B 29th IEEE International Conference on Image Processing
%D 2022
%F Aitor Alvarez-Gila2022
%O LAMP
%X We present MVMO (Multi-View, Multi-Object dataset): a synthetic dataset of 116,000 scenes containing randomly placed objects of 10 distinct classes, captured from 25 camera locations in the upper hemisphere. MVMO comprises photorealistic, path-traced image renders, together with semantic segmentation ground truth for every view. Unlike existing multi-view datasets, MVMO features wide baselines between cameras and a high density of objects, which leads to large disparities, heavy occlusions, and view-dependent object appearance. Single-view semantic segmentation is hindered by self- and inter-object occlusions that additional viewpoints could help resolve. We therefore expect MVMO to propel research in multi-view semantic segmentation and cross-view semantic transfer. We also provide baselines showing that new research is needed in these fields to exploit the complementary information of multi-view setups.
%K multi-view
%K cross-view
%K semantic segmentation
%K synthetic dataset
%U https://ieeexplore.ieee.org/document/9897955
%U http://158.109.8.37/files/AWW2022.pdf
%U http://dx.doi.org/10.1109/ICIP46576.2022.9897955