Interpretable Artificial Intelligence in Pedagogical Diagnostics and Math Performance Prediction: A Systematic Review of Applications in International Standardized Assessments

Authors

  • Diana Dushabaeva, Author

DOI:

https://doi.org/10.5281/zenodo.18431440

Keywords:

artificial intelligence, explainable AI, pedagogical diagnostics, student success prediction, standardized tests, SAT, GRE, PISA, interpretable machine learning, transparency, ethics, SHAP, LIME, attention mechanisms, personalized feedback, trust in AI, education reform, resource allocation, at-risk student identification

Abstract

Artificial intelligence is already delivering tangible results in predicting student success. However, a critical challenge remains: most models function as “black boxes,” leaving teachers without a clear understanding of how decisions are actually made. This review focuses on explainable artificial intelligence (XAI) methods for forecasting performance on standardized assessments such as the SAT, GRE, and PISA. Specific tools are examined, including SHAP, LIME, and attention mechanisms. The evidence indicates that XAI significantly enhances both assessment accuracy and transparency. Nevertheless, substantial challenges persist: explanations are often overly technical, and effective classroom integration requires considerable preparatory work. This review systematizes practical use cases, identifies the strengths of key methods, and maps concrete implementation barriers, offering value for stakeholders seeking to deploy AI responsibly in educational diagnostics.
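To make the SHAP-style attribution discussed above concrete, the sketch below computes exact Shapley values for a hypothetical linear model that scores a student from three features. All feature names, weights, and values are invented for illustration and are not drawn from the reviewed studies; production SHAP implementations approximate this computation efficiently for real models.

```python
from itertools import combinations
from math import factorial

def model(x):
    # Hypothetical linear scoring model (illustrative weights only).
    w = {"hours_studied": 4.0, "prior_score": 0.6, "attendance": 0.2}
    return 200.0 + sum(w[k] * v for k, v in x.items())

def shapley_values(f, x, baseline):
    """Exact Shapley attributions for f at point x.

    'Absent' features are replaced by their baseline (e.g. cohort-average)
    values, mirroring how SHAP defines feature removal.
    """
    feats = list(x)
    n = len(feats)
    phi = {}
    for i in feats:
        others = [j for j in feats if j != i]
        total = 0.0
        for r in range(n):
            for s in combinations(others, r):
                # Standard Shapley weight |S|! (n - |S| - 1)! / n!
                weight = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
                with_i = {j: x[j] if (j in s or j == i) else baseline[j] for j in feats}
                without_i = {j: x[j] if j in s else baseline[j] for j in feats}
                total += weight * (f(with_i) - f(without_i))
        phi[i] = total
    return phi

student = {"hours_studied": 12.0, "prior_score": 520.0, "attendance": 90.0}
cohort_avg = {"hours_studied": 8.0, "prior_score": 500.0, "attendance": 80.0}
phi = shapley_values(model, student, cohort_avg)
```

A teacher-facing explanation would then read each `phi[feature]` as that feature's contribution, in score points, to this student's deviation from the cohort average; by the efficiency property the attributions sum exactly to `model(student) - model(cohort_avg)`.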

Author Biography

  • Diana Dushabaeva


    Jizzakh State Pedagogical University named after
    Abdulla Qadiri, Uzbekistan

References

1. Alghamdi, N., & Bayoudh, M. (2025). Explainable AI methods for predicting student grades and improving academic success. International Journal of Scientific Engineering and Management, 10(3), 1-12.

2. Canning, M., et al. (2025). Explainable artificial intelligence in education and training: Enhancing transparency and trust. SOULSS Blog Publication.

3. Das, S., et al. (2025). The role of explainable AI in the education field. International Journal of Scientific Advancement and Technology, 3(1), 45-58.

4. Garcia, R., et al. (2025). Explainable artificial intelligence in education: A systematic review of student performance prediction models. Transactions on Pedagogy and Machine Learning, Article 2965.

5. Holzinger, A., et al. (2025). Explainable AI in education: Fostering human oversight and shared control. DAAD Brussels Report.

6. Lundberg, S. M., & Lee, S. I. (2017). A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems, 30, 4765-4774.

7. Pinto, J. D., et al. (2024). Applications of explainable AI (XAI) in education: A review.

8. Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). “Why should I trust you?”: Explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135-1144.

9. Smith, J., et al. (2025). Explainable AI definitions and challenges in education. arXiv preprint arXiv:2504.02910.

Published

2026-01-20

How to Cite

Interpretable Artificial Intelligence in Pedagogical Diagnostics and Math Performance Prediction: A Systematic Review of Applications in International Standardized Assessments. (2026). MAKTABGACHA VA MAKTAB TA’LIMI JURNALI, 4(1). https://doi.org/10.5281/zenodo.18431440