%0 Generic
%T Characterizing the contribution of dependent features in XAI methods
%A Ahmed Salih
%A Ilaria Boscolo Galazzo
%A Zahra Raisi-Estabragh
%A Steffen E Petersen
%A Gloria Menegaz
%A Petia Radeva
%D 2023
%F Ahmed Salih2023
%O MILAB
%O exported from refbase (http://158.109.8.37/show.php?record=3868), last updated on Wed, 10 Jan 2024 15:14:28 +0100
%X Explainable Artificial Intelligence (XAI) provides tools that help in understanding how machine learning models work and reach a specific outcome. It increases the interpretability of models and makes them more trustworthy and transparent. In this context, many XAI methods have been proposed, with SHAP and LIME being the most popular. However, these methods assume that the predictors used in the machine learning models are independent, which in general is not necessarily true. This assumption casts a shadow on the robustness of XAI outcomes, such as the list of informative predictors. Here, we propose a simple yet useful proxy that modifies the outcome of any XAI feature-ranking method to account for the dependency among the predictors. The proposed approach has the advantage of being model-agnostic, as well as making it simple to calculate the impact of each predictor in the model in the presence of collinearity.
%9 miscellaneous
%U https://arxiv.org/abs/2304.01717
%U http://158.109.8.37/files/SBR2023.pdf