Evaluation Review, Ahead of Print.
Purpose: This case study discusses Mathematica's experience providing large-scale evaluation technical assistance (ETA) to 65 grantees across two cohorts of Teen Pregnancy Prevention (TPP) Program grants. The grantees were required to conduct rigorous evaluations that met specific evaluation benchmarks. This case study provides an overview of the TPP grant program; the evaluation requirements; the ETA provider and other key stakeholders; and the ETA provided to the grantees. Finally, it discusses the successes, challenges, and lessons learned from the effort.

Conclusion: One important lesson learned is that there are two related evaluation features, the strength of the counterfactual and the adequacy of the target sample size, that funders should attend to prior to selecting awardees because they are not easy to change through ETA. In addition, if focused on particular outcomes (for TPP, the goal was to improve sexual behavior outcomes), the funder should prioritize studies with an opportunity to observe differences in these outcomes across conditions; several TPP grantees served young populations, and sexual behavior outcomes were not observed or were rare, limiting the opportunity to observe impacts. Unless funders are attentive to weeding out evaluations with critical limitations during the funding process, requiring grantees to conduct impact evaluations supported by ETA might unintentionally foster internally valid, yet underpowered studies that show nonsignificant program impacts. The TPP funder was able to overcome some of the limitations of the grantee evaluations by funding additional evidence-building activities, including federally led evaluations and a large meta-analysis of the effort, as part of a broader learning agenda.