Towards explainable artificial intelligence through expert-augmented supervised feature selection
dc.contributor.author | Rabiee, M. | |
dc.contributor.author | Mirhashemi, M. | |
dc.contributor.author | Pangburn, M.S. | |
dc.contributor.author | Piri, S. | |
dc.contributor.author | Delen, D. | |
dc.date.accessioned | 2024-05-19T14:33:57Z | |
dc.date.available | 2024-05-19T14:33:57Z | |
dc.date.issued | 2024 | |
dc.department | İstinye Üniversitesi | en_US |
dc.description.abstract | This paper presents a comprehensive framework for expert-augmented supervised feature selection, addressing pre-processing, in-processing, and post-processing aspects of Explainable Artificial Intelligence (XAI). As part of pre-processing XAI, we introduce the Probabilistic Solution Generator through Information Fusion (PSGIF) algorithm, leveraging ensemble techniques to enhance the exploration and exploitation capabilities of a Genetic Algorithm (GA). Balancing explainability and prediction accuracy, we formulate two multi-objective optimization models that empower domain experts to specify a maximum acceptable sacrifice percentage in prediction accuracy. This approach enhances explainability by reducing the number of selected features and prioritizing those considered more relevant from the domain expert's perspective. This contribution aligns with in-processing XAI, incorporating expert opinions into the feature selection process as a multi-objective problem. Traditional feature selection techniques lack the capability to efficiently search the solution space under our explainability-focused objective function. To overcome this, we leverage the GA, a powerful metaheuristic algorithm, optimizing its parameters through Bayesian optimization. For post-processing XAI, we present the Posterior Ensemble Algorithm (PEA), which estimates the predictive power of features. PEA enables a nuanced comparison between objective and subjective importance, identifying features as underrated, overrated, or appropriately rated. We evaluate the performance of our proposed GAs on 16 publicly available datasets, focusing on prediction accuracy in a single-objective setting. Moreover, we test our multi-objective model on a classification dataset to show the applicability and effectiveness of our framework. Overall, this paper provides a holistic and nuanced approach to explainable feature selection, offering decision-makers a comprehensive understanding of feature importance. © 2024 Elsevier B.V. | en_US |
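The abstract describes GA-based feature selection in which fitness trades prediction accuracy against the number of selected features (fewer features being more explainable). The sketch below illustrates that general mechanism only; the relevance scores, the penalty weight, and the truncation-selection scheme are invented stand-ins for illustration, not the paper's PSGIF algorithm, its optimization models, or its Bayesian-tuned GA parameters.

```python
import random

random.seed(0)

N_FEATURES = 12
# Stand-in for each feature's predictive power. In a real wrapper GA this
# would be replaced by a model's cross-validated accuracy on the candidate
# subset; these numbers are made up for the sketch.
RELEVANCE = [0.9, 0.8, 0.7, 0.05, 0.05, 0.05, 0.6, 0.02, 0.02, 0.5, 0.01, 0.01]
PENALTY = 0.15  # cost per selected feature, rewarding sparser (more explainable) subsets


def fitness(mask):
    """Score a 0/1 feature mask: summed relevance minus a sparsity penalty."""
    score = sum(r for r, bit in zip(RELEVANCE, mask) if bit)
    return score - PENALTY * sum(mask)


def crossover(a, b):
    """Single-point crossover of two parent masks."""
    point = random.randrange(1, N_FEATURES)
    return a[:point] + b[point:]


def mutate(mask, rate=0.1):
    """Flip each bit independently with probability `rate`."""
    return [bit ^ (random.random() < rate) for bit in mask]


def run_ga(pop_size=30, generations=40):
    pop = [[random.randint(0, 1) for _ in range(N_FEATURES)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]  # keep the fitter half (truncation selection)
        children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return max(pop, key=fitness)


best = run_ga()
selected = [i for i, bit in enumerate(best) if bit]
print("selected features:", selected)
```

Because the elite half survives each generation, the best fitness never decreases; the run typically converges on the few high-relevance features while the penalty prunes the near-useless ones, mirroring the accuracy-versus-explainability trade-off the abstract formulates as a multi-objective problem.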
dc.identifier.doi | 10.1016/j.dss.2024.114214 | |
dc.identifier.issn | 0167-9236 | |
dc.identifier.scopus | 2-s2.0-85189759931 | en_US |
dc.identifier.scopusquality | Q1 | en_US |
dc.identifier.uri | https://doi.org/10.1016/j.dss.2024.114214 | |
dc.identifier.uri | https://hdl.handle.net/20.500.12713/4376 | |
dc.identifier.volume | 181 | en_US |
dc.indekslendigikaynak | Scopus | en_US |
dc.language.iso | en | en_US |
dc.publisher | Elsevier B.V. | en_US |
dc.relation.ispartof | Decision Support Systems | en_US |
dc.relation.publicationcategory | Makale - Uluslararası Hakemli Dergi - Kurum Öğretim Elemanı [Article - International Refereed Journal - Institutional Faculty Member] | en_US |
dc.rights | info:eu-repo/semantics/closedAccess | en_US |
dc.snmz | 20240519_ka | en_US |
dc.subject | Expert-Augmented Framework | en_US |
dc.subject | Explainable Artificial Intelligence (XAI) | en_US |
dc.subject | Feature Categorization | en_US |
dc.subject | Genetic Algorithm | en_US |
dc.subject | Machine Learning | en_US |
dc.subject | Supervised Feature Selection | en_US |
dc.title | Towards explainable artificial intelligence through expert-augmented supervised feature selection | en_US |
dc.type | Article | en_US |