Towards explainable artificial intelligence through expert-augmented supervised feature selection

dc.contributor.author: Rabiee, M.
dc.contributor.author: Mirhashemi, M.
dc.contributor.author: Pangburn, M.S.
dc.contributor.author: Piri, S.
dc.contributor.author: Delen, D.
dc.date.accessioned: 2024-05-19T14:33:57Z
dc.date.available: 2024-05-19T14:33:57Z
dc.date.issued: 2024
dc.department: İstinye Üniversitesi
dc.description.abstract: This paper presents a comprehensive framework for expert-augmented supervised feature selection, addressing pre-processing, in-processing, and post-processing aspects of Explainable Artificial Intelligence (XAI). As part of pre-processing XAI, we introduce the Probabilistic Solution Generator through Information Fusion (PSGIF) algorithm, leveraging ensemble techniques to enhance the exploration and exploitation capabilities of a Genetic Algorithm (GA). Balancing explainability and prediction accuracy, we formulate two multi-objective optimization models that empower the expert(s) to specify a maximum acceptable sacrifice percentage. This approach enhances explainability by reducing the number of selected features and prioritizing those the domain expert considers more relevant. This contribution aligns with in-processing XAI, incorporating expert opinions into the feature selection process as a multi-objective problem. Traditional feature selection techniques cannot efficiently search the solution space under our explainability-focused objective function; to overcome this, we leverage a GA, a powerful metaheuristic, tuning its parameters through Bayesian optimization. For post-processing XAI, we present the Posterior Ensemble Algorithm (PEA), which estimates the predictive power of features. PEA enables a nuanced comparison between objective and subjective importance, identifying features as underrated, overrated, or appropriately rated. We evaluate the performance of our proposed GAs on 16 publicly available datasets, focusing on prediction accuracy in a single-objective setting. Moreover, we test our multi-objective model on a classification dataset to show the applicability and effectiveness of our framework. Overall, this paper provides a holistic and nuanced approach to explainable feature selection, offering decision-makers a comprehensive understanding of feature importance. © 2024 Elsevier B.V.
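Editor's note: the "maximum acceptable sacrifice percentage" in the abstract suggests a constrained formulation along the following lines. This is a hedged reconstruction from the abstract alone, not the paper's own notation; the symbols S, F, w_j, acc(·), and α are illustrative.

\begin{align*}
\text{Model 1:}\quad &\min_{S \subseteq F}\; |S| &&\text{(fewer selected features, more explainable)}\\
\text{Model 2:}\quad &\max_{S \subseteq F}\; \sum_{j \in S} w_j &&\text{(expert-assigned relevance weights)}\\
\text{subject to}\quad &\operatorname{acc}(S) \ge (1-\alpha)\,\operatorname{acc}(F), && 0 \le \alpha \le 1,
\end{align*}

where acc(·) denotes a model's (e.g., cross-validated) accuracy on a feature subset and α is the expert-specified maximum acceptable sacrifice relative to the full feature set F.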
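To make the pre-processing idea concrete, here is a minimal Python sketch of PSGIF-style population seeding, assuming the information-fusion step averages feature importances from several tree ensembles into per-feature inclusion probabilities. The fusion rule, the probability clipping, and all function names are this editor's assumptions, not the paper's algorithm.

import numpy as np
from sklearn.ensemble import (ExtraTreesClassifier, GradientBoostingClassifier,
                              RandomForestClassifier)

def fused_inclusion_probabilities(X, y, random_state=0):
    # Illustrative stand-in for PSGIF's information fusion: average the
    # feature importances of several fitted ensembles.
    models = [
        RandomForestClassifier(n_estimators=200, random_state=random_state),
        ExtraTreesClassifier(n_estimators=200, random_state=random_state),
        GradientBoostingClassifier(random_state=random_state),
    ]
    importances = np.mean([m.fit(X, y).feature_importances_ for m in models],
                          axis=0)
    # Rescale to [0.05, 0.95] so every feature keeps some chance of being
    # explored (or dropped) by the GA -- these bounds are assumed, not from
    # the paper.
    p = (importances - importances.min()) / (np.ptp(importances) + 1e-12)
    return 0.05 + 0.90 * p

def seed_population(prob, pop_size, seed=None):
    # Each chromosome is a binary mask over features; bit j is set with
    # probability prob[j], biasing the initial population toward features
    # the ensembles found informative.
    rng = np.random.default_rng(seed)
    return (rng.random((pop_size, prob.shape[0])) < prob).astype(int)

A GA would then evolve this seeded population with standard crossover and mutation, scoring each mask by the accuracy of a model trained on the selected columns.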
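For the post-processing step, a hedged sketch of the underrated/overrated/appropriately-rated categorization: permutation importance stands in for PEA's estimate of predictive power, and rank-normalization plus a tolerance band turns the objective/subjective gap into the three labels named in the abstract. The tolerance value and the choice of permutation importance are assumptions.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

def categorize_features(X, y, expert_scores, feature_names, tol=0.10):
    # Objective importance: permutation importance of a fitted ensemble
    # (an illustrative stand-in for the paper's PEA estimate).
    model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
    objective = permutation_importance(model, X, y, n_repeats=10,
                                       random_state=0).importances_mean
    # Rank-normalize both views onto [0, 1] so they are comparable.
    obj = objective.argsort().argsort() / (len(objective) - 1)
    subj = (np.asarray(expert_scores).argsort().argsort()
            / (len(expert_scores) - 1))
    labels = {}
    for name, o, s in zip(feature_names, obj, subj):
        if o - s > tol:
            labels[name] = "underrated"    # model values it more than the expert
        elif s - o > tol:
            labels[name] = "overrated"     # expert values it more than the model
        else:
            labels[name] = "appropriately rated"
    return labels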
dc.identifier.doi: 10.1016/j.dss.2024.114214
dc.identifier.issn: 0167-9236
dc.identifier.scopus: 2-s2.0-85189759931
dc.identifier.scopusquality: Q1
dc.identifier.uri: https://doi.org/10.1016/j.dss.2024.114214
dc.identifier.uri: https://hdl.handle.net/20.500.12713/4376
dc.identifier.volume: 181
dc.indekslendigikaynak: Scopus
dc.language.iso: en
dc.publisher: Elsevier B.V.
dc.relation.ispartof: Decision Support Systems
dc.relation.publicationcategory: Article - International Peer-Reviewed Journal - Institutional Faculty Member
dc.rights: info:eu-repo/semantics/closedAccess
dc.snmz: 20240519_ka
dc.subject: Expert-Augmented Framework
dc.subject: Explainable Artificial Intelligence (XAI)
dc.subject: Feature Categorization
dc.subject: Genetic Algorithm
dc.subject: Machine Learning
dc.subject: Supervised Feature Selection
dc.title: Towards explainable artificial intelligence through expert-augmented supervised feature selection
dc.type: Article

Files