Explainable AI in Healthcare: Leveraging Machine Learning and Knowledge Representation for Personalized Treatment Recommendations

Authors

  • Md Shafiqul Islam Department of Computer Science, Maharishi International University, Fairfield, Iowa 52557, USA
  • Mia Md Tofayel Gonee Manik College of Business
  • Mohammad Moniruzzaman Department of Computer Science, Maharishi International University, Fairfield, Iowa 52557, USA
  • Abu Saleh Muhammad Saimon Department of Information Technology, Washington University of Science and Technology, Alexandria VA 22314, USA
  • Sharmin Sultana School of Business, International American University, Los Angeles, CA 90010, USA
  • Mohammad Muzahidur Rahman Bhuiyan College of Business, Westcliff University, Irvine, CA 92614, USA
  • Sazzat Hossain School of Business, International American University, Los Angeles, CA 90010, USA
  • Md Kamal Ahmed School of Business, International American University, Los Angeles, CA 90010, USA

DOI:

https://doi.org/10.63332/joph.v5i1.1996

Keywords:

Explainable AI (XAI), Machine Learning, Personalized Treatment Recommendations, Knowledge Representation, Knowledge Graphs, SHAP, Clinical Decision Support Systems, Healthcare AI.

Abstract

In this research, an advanced framework is presented that combines Explainable Artificial Intelligence (XAI), machine learning algorithms, and knowledge representation techniques to improve personalized treatment recommendations in healthcare. Random Forest, XGBoost, and Deep Neural Networks (DNN) are used to predict optimal treatment plans, while SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) provide explanations of the models' predictions. A knowledge representation method is implemented that uses knowledge graphs together with the SNOMED CT and UMLS ontologies to structure patient data and disease-treatment relationships. The proposed framework is trained and tested on the MIMIC-III and eICU Collaborative Research Databases, using over 50,000 patient records to assess its performance. Model performance is evaluated with accuracy, F1-score, and AUC-ROC, and explainability is measured with SHAP scores. Results show a 25% improvement in interpretability ratings from healthcare professionals and a 17.6% improvement in predictive accuracy over traditional AI models. This study bridges the representation gap in AI-driven recommendations, bringing them closer to practical use in clinical decision-making and improving transparency and trust in AI-assisted healthcare. While integrating knowledge graphs and explainable AI techniques can improve model performance and clinician adoption, training models on limited human insight risks perpetuating biased practices. Future research will include real-world clinical trials and will extend the framework to multi-institutional datasets for broader applicability.
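The abstract does not include an implementation, but a minimal sketch of the modeling-plus-explanation step it describes (a gradient-boosted classifier explained with SHAP) might look like the following. The features, labels, and data here are synthetic stand-ins, not actual MIMIC-III or eICU fields.

```python
# Minimal sketch: train an XGBoost treatment classifier and explain its
# predictions with SHAP. Features and labels are hypothetical placeholders.
import numpy as np
import shap
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                   # e.g., age, creatinine, heart rate, SBP
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)   # synthetic treatment label

model = XGBClassifier(n_estimators=100, max_depth=3)
model.fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Per-patient attribution: which features pushed this recommendation.
print(shap_values[0])
```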
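Similarly, the knowledge-representation component could be sketched as a small graph of patient, disorder, and treatment nodes linked by typed relations. The SNOMED CT identifier and the relation names below are illustrative assumptions; a real system would resolve concepts through a terminology service over SNOMED CT and UMLS.

```python
# Minimal sketch: disease-treatment relations as a knowledge graph.
# Concept IDs and relation names are illustrative, not authoritative.
import networkx as nx

kg = nx.MultiDiGraph()

# Nodes keyed by (hypothetical) concept identifiers.
kg.add_node("44054006", label="Type 2 diabetes mellitus", kind="disorder")
kg.add_node("metformin", label="Metformin", kind="substance")
kg.add_node("patient_001", label="Patient 001", kind="patient")

# Edges encode patient-diagnosis and disease-treatment relations.
kg.add_edge("patient_001", "44054006", relation="has_diagnosis")
kg.add_edge("44054006", "metformin", relation="treated_with")

# Simple traversal: candidate treatments for a patient's diagnoses.
for _, disorder, d in kg.out_edges("patient_001", data=True):
    if d["relation"] == "has_diagnosis":
        for _, treatment, t in kg.out_edges(disorder, data=True):
            if t["relation"] == "treated_with":
                print(kg.nodes[treatment]["label"])
```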

Published

2025-01-15

How to Cite

Islam, M. S., Manik, M. M. T. G., Moniruzzaman, M., Saimon, A. S. M., Sultana, S., Bhuiyan, M. M. R., … Ahmed, M. K. (2025). Explainable AI in Healthcare: Leveraging Machine Learning and Knowledge Representation for Personalized Treatment Recommendations. Journal of Posthumanism, 5(1), 1541–1559. https://doi.org/10.63332/joph.v5i1.1996

Section

Articles