US-LIME: Increasing fidelity in LIME using uncertainty sampling on tabular data

Authors: Hamid Saadatfar, Zeynab Kiani Zadegan
Journal: Neurocomputing
Pages: 127969-127969
Serial number: 597
Volume: 1
Impact Factor (IF): 3.317
Article type: Full Paper
Publication date: 2024
Journal rank: ISI
Journal type: Electronic
Country of publication: Iran
Journal index: JCR, Scopus

Abstract

LIME has gained significant attention as an explainable artificial intelligence algorithm that sheds light on how complex machine learning models make decisions within a specific locality. One of the challenges of LIME is the instability and low fidelity of the explanations it produces across multiple runs. This study focuses on improving LIME's fidelity by presenting a new sampling strategy. The idea is to generate data samples concentrated close to the decision boundary and, at the same time, close to the original data point (the sample to be explained). These concentrated samples are then used to train a simple, interpretable linear model as a local surrogate for the original complex model. The approach yields high-quality, local samples and thus improves the overall fidelity of the explanations while preserving their stability relative to competing methods. The superiority of the proposed method is demonstrated through comprehensive experiments comparing it with LIME, LS-LIME, S-LIME, and BayLIME in terms of fidelity while maintaining stability. The proposed method also outperforms BayLIME, S-LIME, and LS-LIME in terms of execution time. In addition, experiments on the effect of kernel width and of increasing the number of samples on the stability and fidelity criteria are reported. The insensitivity of its fidelity to the kernel width is another strength of the proposed method.
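To make the sampling idea concrete, the following is a minimal, hypothetical Python sketch, not the authors' implementation: perturbations of the instance are scored by how close the black-box model's predicted probability is to the decision boundary and how close they lie to the original point, the highest-scoring perturbations are kept, and a weighted linear surrogate is fitted to them. The function name, the Gaussian perturbation scheme, the scoring rule, and the assumption of a binary classifier with a scikit-learn-style predict_proba are illustrative choices, not details taken from the paper.

# Hypothetical sketch of uncertainty-guided local sampling for a LIME-style surrogate.
import numpy as np
from sklearn.linear_model import Ridge

def us_lime_style_explanation(black_box_predict_proba, x, n_samples=5000,
                              keep=500, scale=1.0, rng=None):
    # x: 1-D numpy array holding the tabular instance to be explained.
    rng = np.random.default_rng(rng)
    d = x.shape[0]

    # 1. Perturb the instance of interest with Gaussian noise.
    Z = x + rng.normal(scale=scale, size=(n_samples, d))

    # 2. Uncertainty of the black box on each perturbation: probabilities near
    #    0.5 indicate proximity to the decision boundary (binary classifier assumed).
    p = black_box_predict_proba(Z)[:, 1]
    uncertainty = 1.0 - 2.0 * np.abs(p - 0.5)   # in [0, 1], 1 = most uncertain

    # 3. Locality: RBF-style weight based on distance to the original point.
    dist = np.linalg.norm(Z - x, axis=1)
    locality = np.exp(-(dist ** 2) / (2.0 * scale ** 2))

    # 4. Keep the perturbations that are both uncertain and local.
    score = uncertainty * locality
    idx = np.argsort(score)[-keep:]
    Z_sel, w_sel, y_sel = Z[idx], locality[idx], p[idx]

    # 5. Fit a simple weighted linear surrogate on the selected samples.
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(Z_sel, y_sel, sample_weight=w_sel)
    return surrogate.coef_   # per-feature attributions for x

A call such as us_lime_style_explanation(model.predict_proba, x_row) would then return per-feature coefficients that can be read as a local explanation; the actual sample-selection and weighting rules used by US-LIME may differ from this sketch.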

tags: Explainable Artificial Intelligence; Local Interpretable Model-agnostic Explanations method; Uncertainty sampling; Fidelity; Stability; Sensitivity