Abstract
Background
Mentoring programs pair non-familial adults with children and adolescents to promote positive youth development. Although these programs are widely popular, evaluations tend to show that mentoring programs have, on average, modest effects on youth outcomes. Some researchers have suggested that mentoring programs should homogenize mentoring activities as a means of increasing programs' effect sizes.
Objective
This paper describes why heterogeneity of mentoring activities should not necessarily be regarded as a problem (i.e., a bug) that needs correction. Rather, heterogeneity is more representative of the construct of mentoring as it is popularly understood, and it is desirable because of its potential to improve access to, and the quality of, prevention services (i.e., it is a feature).
Method
We present simulated scenarios demonstrating how estimates of treatment effects in evaluations of mentoring programs may change depending on how evaluators measure programmatic activities and approach their analyses.
Results
Analyses illustrate that treatment effects may be underestimated when mentoring activities are not measured and evaluations rely on common analytic approaches (e.g., intent-to-treat analyses). Simulated scenarios also highlight alternative approaches for defining programmatic elements and evaluating programs to produce more robust estimates of effects.
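The underestimation described above can be illustrated with a minimal simulation. This sketch is not the paper's actual analysis; it assumes hypothetical values for the true effect of a mentoring activity and the fraction of assigned youth whose matches actually deliver that activity, and it compares a naive intent-to-treat (ITT) contrast with a treatment-on-the-treated (TOT) adjustment that divides the ITT estimate by the uptake rate.

```python
import random

random.seed(0)

N = 100_000     # youth per study arm (hypothetical)
EFFECT = 1.0    # assumed true effect of the mentoring activity, in SD units
UPTAKE = 0.4    # assumed fraction of treated youth who actually receive the activity

# Control youth: outcomes drawn from a standard normal distribution.
control = [random.gauss(0.0, 1.0) for _ in range(N)]

# Treated youth: only those whose match delivers the activity get the effect.
received = [random.random() < UPTAKE for _ in range(N)]
treated = [random.gauss(EFFECT if r else 0.0, 1.0) for r in received]

def mean(xs):
    return sum(xs) / len(xs)

# Intent-to-treat: compares groups by assignment, ignoring whether the
# activity was delivered, so heterogeneity in delivery dilutes the estimate.
itt = mean(treated) - mean(control)

# Treatment-on-the-treated: rescales the ITT estimate by observed uptake.
tot = itt / mean([1.0 if r else 0.0 for r in received])

print(f"ITT estimate: {itt:.2f}")  # close to UPTAKE * EFFECT, well below 1.0
print(f"TOT estimate: {tot:.2f}")  # close to the full effect of 1.0
```

When activity delivery goes unmeasured, only the diluted ITT estimate is available, which is one way heterogeneous (and sometimes absent) mentoring activities can make programs appear less effective than the activities themselves are.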
Conclusions
The optimal strategy for evaluating mentoring services depends on the particular features of the program as well as the goals of the evaluation. One approach researchers might take is to evaluate specific mentoring practices before evaluating mentoring programs, as a way to begin to understand program impact.