Abstract
Developing the ability to self-assess is a crucial skill for students, as it affects their academic performance and learning strategies, among other areas. Most existing research in this field has concentrated on exploring students’ capacity to assign a score to their own work that closely mirrors an expert’s evaluation, typically a teacher’s. Though this process is commonly referred to as self-assessment, a more precise term would be self-assessment scoring accuracy. Our aim is to review the average level of this accuracy and the moderators that might influence it. Following PRISMA recommendations, we reviewed 160 articles, including data from 29,352 participants. We analysed nine factors as possible moderators: (1) assessment criteria; (2) use of rubric; (3) self-assessment experience; (4) feedback; (5) content knowledge; (6) incentive; (7) formative assessment; (8) field of knowledge; and (9) educational level. The results showed an overall effect of student overestimation (g = 0.206), with an average relationship of z = 0.472 between students’ estimations and the experts’ measures. Overestimation diminishes when students receive feedback, possess greater self-assessment experience and content knowledge, when the assessment does not serve formative purposes, and among younger students (primary and secondary education). Importantly, the studies analysed exhibited significant heterogeneity and lacked crucial methodological information.