Adaptive Explainable AI: Personalizing Machine Explanations Based on User Expertise Levels

Authors

  • Asma Ahmed A. Mohammed, Department of Computer Science, University of Tabuk, Tabuk, Saudi Arabia

DOI:

https://doi.org/10.63332/joph.v5i7.2793

Keywords:

Explainable Artificial Intelligence (XAI), Adaptive XAI, Framework, User Expertise, Decision Making, Accuracy

Abstract

Explainable Artificial Intelligence (XAI) is critical for bridging the gap between opaque machine decision-making and human comprehension. Despite advances, most XAI systems deliver static explanations that fail to account for users' diverse expertise levels. This study proposes an adaptive XAI framework that personalizes explanations according to individual user expertise. Grounded in cognitive load theory and trust calibration principles, the system dynamically adjusts explanation complexity, depth, and modality. Through a mixed-methods experimental design involving 150 participants classified as novices, intermediates, and experts, results show that adaptive explanations significantly enhance understanding (+27%), trust calibration (+19%), and decision-making accuracy (+22%) compared to static explanations. The findings provide strong empirical support for user-centered XAI models and offer actionable design guidelines for adaptive explainability in critical sectors such as healthcare, finance, and education.
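To make the adaptation idea concrete, the sketch below illustrates how an explanation built from feature attributions might be tailored to the three expertise tiers the study uses (novice, intermediate, expert). This is a hypothetical illustration in Python, not the paper's implementation: the class names, tier rules, and attribution format are all assumptions introduced here for clarity.

# Illustrative sketch only: adapting explanation depth to user expertise.
# All names and tier rules below are assumptions, not the authors' code.

from dataclasses import dataclass
from enum import Enum

class Expertise(Enum):
    NOVICE = "novice"
    INTERMEDIATE = "intermediate"
    EXPERT = "expert"

@dataclass
class Explanation:
    summary: str
    top_features: list  # (feature_name, attribution_weight) pairs

def adapt_explanation(attributions: dict, expertise: Expertise) -> Explanation:
    """Trim and rephrase a feature-attribution explanation per tier.

    Novices get a one-sentence summary of the single strongest factor;
    intermediates see the top three weighted features; experts receive
    the full attribution vector.
    """
    ranked = sorted(attributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    if expertise is Expertise.NOVICE:
        name, weight = ranked[0]
        direction = "increased" if weight > 0 else "decreased"
        return Explanation(
            summary=f"The model's decision was mainly {direction} by '{name}'.",
            top_features=ranked[:1],
        )
    if expertise is Expertise.INTERMEDIATE:
        return Explanation(
            summary="Three factors dominated this decision.",
            top_features=ranked[:3],
        )
    return Explanation(summary="Full attribution vector.", top_features=ranked)

A fuller system in this spirit would also vary modality (text, charts, counterfactuals) and update the expertise estimate from user interaction, as the abstract's framing of dynamic adjustment suggests.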

Published

2025-07-02

How to Cite

Mohammed, A. A. A. (2025). Adaptive Explainable AI: Personalizing Machine Explanations Based on User Expertise Levels. Journal of Posthumanism, 5(7), 317–334. https://doi.org/10.63332/joph.v5i7.2793
