AI-Driven Intelligent Assessment System for Evaluating Transdisciplinary Competencies in Project-Based Learning: An Empirical and Simulation Study

Abstract

The assessment of transdisciplinary competencies—such as critical thinking, collaboration, and creativity—remains a major challenge in higher education, particularly within project-based learning (PBL) environments where learning is complex, dynamic, and context-dependent. Traditional assessment approaches are limited in capturing such competencies due to their reliance on static and isolated measurement methods. This study proposes an AI-driven intelligent assessment system that integrates machine learning techniques with learning analytics to evaluate students’ competencies using multi-source educational data, including academic performance, interaction logs, and collaborative activity indicators. The system employs ensemble models (Random Forest and Gradient Boosting) to generate predictive assessments, supported by explainability mechanisms (SHAP and LIME) to enhance interpretability and instructional usability. To ensure responsible and equitable assessment, the system incorporates embedded mechanisms for bias mitigation and fairness calibration during the prediction process. The proposed framework was evaluated using a dual-method approach: (1) an agent-based simulation involving 500 synthetic learners to examine system performance under controlled conditions, and (2) a pilot empirical study with 47 undergraduate students enrolled in interdisciplinary PBL courses. The results demonstrate significant improvements over a baseline AI model across five key evaluation metrics: accuracy (+22.7%), fairness (+46.7%), reliability (+28.6%), processing efficiency (+30.8%), and explainability (+65.5%). Empirical findings further confirm the system’s effectiveness, with medium-to-large effect sizes (d = 0.68–0.91) across all measured dimensions. The study contributes to the field of educational technology by presenting a scalable and interpretable AI-based assessment system that enhances both the quality and fairness of evaluation in complex learning environments. The findings highlight the potential of integrating AI-driven analytics with responsible design principles to support data-informed educational decision-making.
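To make the pipeline described above concrete, the sketch below shows how an ensemble of the two model families named in the abstract could score synthetic multi-source learner data. This is not the authors' implementation: the feature names, target rule, and soft-voting scheme are illustrative assumptions, and scikit-learn's permutation importance stands in for the SHAP/LIME explainability layer to keep the example self-contained.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500  # mirrors the 500 synthetic learners in the agent-based simulation

# Hypothetical multi-source features: academic performance, interaction
# logs, and a collaborative activity indicator (all synthetic).
X = np.column_stack([
    rng.normal(70, 10, n),   # academic_score
    rng.poisson(30, n),      # interaction_count
    rng.uniform(0, 1, n),    # collaboration_index
])
# Illustrative competency label: a weighted mix crossing a threshold.
y = ((0.5 * X[:, 0] / 100 + 0.3 * X[:, 1] / 60 + 0.2 * X[:, 2]) > 0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Ensemble of the two model families named in the abstract.
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
gb = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Simple soft-vote ensemble: average the two models' class probabilities.
proba = (rf.predict_proba(X_te) + gb.predict_proba(X_te)) / 2
pred = proba.argmax(axis=1)
accuracy = (pred == y_te).mean()

# Explainability stand-in: permutation importance on held-out data
# (the paper itself uses SHAP and LIME for per-prediction explanations).
imp = permutation_importance(rf, X_te, y_te, n_repeats=5, random_state=0)
print(f"ensemble accuracy: {accuracy:.2f}")
print("feature importances:", np.round(imp.importances_mean, 3))
```

In a production system the probability averaging would typically be replaced by a calibrated stacking layer, and fairness calibration would adjust decision thresholds per subgroup before the final prediction is emitted.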
