Adaptive Explainable AI: Personalizing Machine Explanations Based on User Expertise Levels
DOI:
https://doi.org/10.63332/joph.v5i7.2793

Keywords:
Explainable Artificial Intelligence (XAI), Adaptive XAI, Framework, User Expertise, Decision Making, Accuracy

Abstract
Explainable Artificial Intelligence (XAI) is critical for bridging the gap between opaque machine decision-making and human comprehension. Despite advances, most XAI systems deliver static explanations that fail to account for users' diverse expertise levels. This study proposes an adaptive XAI framework that personalizes explanations according to individual user expertise. Grounded in cognitive load theory and trust calibration principles, the system dynamically adjusts explanation complexity, depth, and modality. Through a mixed-methods experimental design involving 150 participants classified as novices, intermediates, and experts, results show that adaptive explanations significantly enhance understanding (+27%), trust calibration (+19%), and decision-making accuracy (+22%) compared to static explanations. The findings provide strong empirical support for user-centered XAI models and offer actionable design guidelines for adaptive explainability in critical sectors such as healthcare, finance, and education.
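The abstract describes a system that adjusts explanation complexity, depth, and modality to the user's expertise tier (novice, intermediate, expert). A minimal sketch of that idea is shown below; the tier names come from the abstract, but the feature counts, modality choices, and all function and field names are illustrative assumptions, not details from the paper.

```python
from dataclasses import dataclass

@dataclass
class ExplanationConfig:
    n_features: int     # how many top attributed features to surface
    show_weights: bool  # whether to expose raw attribution scores
    modality: str       # presentation modality, e.g. "text", "chart", "table"

# Hypothetical mapping: each expertise tier gets a progressively
# richer explanation (more features, raw scores, denser modality).
CONFIGS = {
    "novice":       ExplanationConfig(n_features=3,  show_weights=False, modality="text"),
    "intermediate": ExplanationConfig(n_features=5,  show_weights=True,  modality="chart"),
    "expert":       ExplanationConfig(n_features=10, show_weights=True,  modality="table"),
}

def explain(feature_attributions: dict, expertise: str) -> dict:
    """Return an explanation payload tailored to the user's expertise tier."""
    if expertise not in CONFIGS:
        raise ValueError(f"unknown expertise level: {expertise}")
    cfg = CONFIGS[expertise]
    # Rank features by absolute attribution magnitude, keep the top-k.
    ranked = sorted(feature_attributions.items(),
                    key=lambda kv: abs(kv[1]), reverse=True)
    top = ranked[:cfg.n_features]
    return {
        "modality": cfg.modality,
        "features": [name for name, _ in top],
        "weights": dict(top) if cfg.show_weights else None,
    }
```

For example, a novice would receive a short textual summary naming only the three most influential features with no raw scores, while an expert would receive a full tabular view with attribution weights, consistent with the cognitive-load rationale the abstract cites.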
License

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.