Utilizing Artificial Intelligence to Assess ESL Students' Narratives: A Comparative Analysis
DOI: https://doi.org/10.63332/joph.v5i6.2224
Keywords: Artificial Intelligence (AI), Assessment, ESL Narrative Writing, Comparative Analysis
Abstract
This study investigates the effectiveness, reliability, and potential biases of AI-based assessment tools in evaluating narrative essays written by undergraduate ESL students at a Saudi university. A total of 30 essays were assessed using a detailed rubric covering five writing components: ideas and content, organization, vocabulary, voice and style, and mechanics and formatting. The essays were graded by human evaluators and five AI tools (ChatGPT, Gemini, Claude, Justdone, and Chatsonic). A quantitative comparative research design was employed, and statistical analyses, including one-way ANOVA and correlation tests, were conducted to examine grading consistency and divergence. Results revealed that AI tools aligned more closely with human graders on objective criteria such as mechanics and formatting, but showed significant discrepancies on subjective aspects such as voice and style. The study highlights the potential of AI to support human grading but underscores the importance of human oversight to ensure fairness and contextual sensitivity in ESL writing assessment.
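The comparison the abstract describes — a one-way ANOVA across graders plus correlations between each AI tool's scores and the human scores — can be sketched as follows. The scores below are hypothetical placeholders, not the study's data, and the tool names are generic:

```python
# Illustrative sketch only: hypothetical rubric scores from a human grader
# and two unnamed AI tools on the same five essays, analyzed with the two
# tests the study reports (one-way ANOVA and correlation).
from scipy.stats import f_oneway, pearsonr

# Hypothetical total rubric scores (out of 20) for five essays per grader.
human  = [15, 17, 12, 18, 14]
tool_a = [14, 16, 13, 17, 15]
tool_b = [18, 19, 16, 20, 17]

# One-way ANOVA: do mean scores differ significantly across graders?
f_stat, p_anova = f_oneway(human, tool_a, tool_b)

# Pearson correlation: does each tool rank the essays like the human does?
r_a, _ = pearsonr(human, tool_a)
r_b, _ = pearsonr(human, tool_b)

print(f"ANOVA: F={f_stat:.2f}, p={p_anova:.3f}")
print(f"Human vs. tool A: r={r_a:.2f}")
print(f"Human vs. tool B: r={r_b:.2f}")
```

Note how the two tests answer different questions: a tool can correlate strongly with the human grader (same ranking of essays) while the ANOVA still flags a systematic difference in mean severity, as with tool B above.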
License

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.