Abstract
Rater centrality, in which raters overuse the middle scores of a rating scale, is a common rater error that can affect test scores and subsequent decisions. Past studies on rater errors have focused on rater severity and inconsistency, neglecting rater centrality. This study proposes a new model within the hierarchical rater model framework to explicitly specify and directly estimate rater centrality in addition to rater severity and inconsistency. Simulations were conducted using the free software JAGS to evaluate the parameter recovery of the new model and the consequences of ignoring rater centrality. The results revealed that the new model had good parameter recovery, with small bias, low root mean square errors, and high test score reliability, especially when a fully crossed linking design was used. Ignoring centrality yielded poor estimates of item difficulty, person ability, and rater errors, as well as underestimated reliability. We also demonstrate how the new model can be applied with an empirical example involving English essays from the Advanced Placement exam.