Interpretable AI for Precision Brain Tumor Prognosis: A Transparent Machine Learning Approach
Abstract
Advancements in brain tumor prognosis, especially for glioma, demand a transparent and comprehensive diagnostic framework that not only ensures high accuracy but also fosters interpretability for clinical decision-making. To meet this need, an interpretable artificial intelligence (AI) approach is proposed, combining machine learning (ML) and deep learning (DL) models enriched by explainable artificial intelligence (XAI) techniques. The approach focuses on enhancing prediction accuracy while ensuring the process remains understandable and traceable for medical professionals. Patient-centric data such as clinical histories and genetic profiles are integrated to enable more personalized diagnostics. A multi-stage methodology is adopted, employing multiple feature selection techniques, including Vital Feature Selection (FS), Mutual Information FS, Principal Component Analysis (PCA) FS, and Pearson Correlation Coefficient FS. These techniques help reduce dimensionality and improve model generalization without losing critical predictive markers. A combination of classical ML algorithms and advanced ensemble methods such as the Voting Classifier is utilized to maximize glioma grading accuracy. The Voting Classifier exhibits perfect performance, achieving 100% accuracy using essential features and mutual information-based selection. In contrast, deep learning models, particularly Convolutional Neural Networks (CNNs), achieve commendable results, with 91% accuracy on PCA-based features and 90% on Pearson-correlation-based features. The fusion of these techniques under the umbrella of interpretable AI ensures not only high performance but also enables medical experts to understand the decision pathways involved in classification outcomes. This transparency bridges the gap between black-box AI systems and real-world clinical applicability.
Overall, the integration of diverse feature selection strategies, patient-specific data, robust machine learning models, and explainable frameworks presents a significant leap toward precise, trustworthy, and interpretable brain tumor prognosis.
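The pipeline described above (feature selection followed by ensemble voting) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the synthetic dataset, the choice of base learners, and the number of retained features (`k=10`) are all assumptions made for demonstration.

```python
# Hedged sketch: mutual-information feature selection feeding a soft
# Voting Classifier, as a stand-in for the paper's multi-stage pipeline.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

# Synthetic stand-in for a glioma grading dataset (e.g., LGG vs. GBM).
X, y = make_classification(n_samples=500, n_features=30, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          random_state=0)

pipe = Pipeline([
    # Keep the 10 features with the highest mutual information with the label.
    ("select", SelectKBest(mutual_info_classif, k=10)),
    # Soft voting averages the predicted class probabilities of the learners.
    ("vote", VotingClassifier(
        estimators=[("lr", LogisticRegression(max_iter=1000)),
                    ("rf", RandomForestClassifier(random_state=0)),
                    ("svm", SVC(probability=True, random_state=0))],
        voting="soft")),
])
pipe.fit(X_tr, y_tr)
print(f"held-out accuracy: {accuracy_score(y_te, pipe.predict(X_te)):.2f}")
```

Soft voting is used here because it lets well-calibrated probability estimates from complementary base learners outvote a single confident misclassification; the paper's exact ensemble composition may differ.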

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.