%0 Generic
%T MM-GEF: Multi-modal representation meet collaborative filtering
%A Hao Wu
%A Alejandro Ariza-Casabona
%A Bartłomiej Twardowski
%A Tri Kurniawan Wijaya
%D 2023
%F Hao Wu2023
%O LAMP
%O exported from refbase (http://158.109.8.37/show.php?record=3988), last updated on Tue, 30 Jan 2024 15:40:53 +0100
%X In modern e-commerce, item content features in various modalities offer accurate and comprehensive information to recommender systems. The majority of previous work either focuses on learning effective item representations while modelling user-item interactions, or explores item-item relationships by analysing multi-modal features. These methods, however, fail to incorporate collaborative item-user-item relationships into the multi-modal feature-based item structure. In this work, we propose a graph-based item structure enhancement method, MM-GEF: Multi-Modal recommendation with Graph Early-Fusion, which effectively combines the latent item structure underlying multi-modal content with collaborative signals. Instead of processing content features in different modalities separately, we show that early fusion of multi-modal features provides a significant improvement. MM-GEF learns refined item representations by injecting structural information obtained from both multi-modal and collaborative signals. Through extensive experiments on four publicly available datasets, we demonstrate systematic improvements of our method over state-of-the-art multi-modal recommendation methods.
%9 miscellaneous
%U https://arxiv.org/abs/2308.07222
%U http://158.109.8.37/files/WAT2023.pdf