PT Unknown
AU Alvarez-Gila, Aitor
   Van de Weijer, Joost
   Wang, Yaxing
   Garrote, Estibaliz
TI MVMO: A Multi-Object Dataset for Wide Baseline Multi-View Semantic Segmentation
BT 29th IEEE International Conference on Image Processing
PY 2022
DI 10.1109/ICIP46576.2022.9897955
DE multi-view; cross-view; semantic segmentation; synthetic dataset
AB We present MVMO (Multi-View, Multi-Object dataset): a synthetic dataset of 116,000 scenes containing randomly placed objects of 10 distinct classes, captured from 25 camera locations in the upper hemisphere. MVMO comprises photorealistic, path-traced image renders, together with semantic segmentation ground truth for every view. Unlike existing multi-view datasets, MVMO features wide baselines between cameras and a high density of objects, which lead to large disparities, heavy occlusions, and view-dependent object appearance. Single-view semantic segmentation is hindered by self- and inter-object occlusions that additional viewpoints could help resolve. We therefore expect MVMO to propel research in multi-view semantic segmentation and cross-view semantic transfer. We also provide baselines showing that new research is needed in these fields to exploit the complementary information of multi-view setups.
ER