Background: This article offers a case example of how experimental evaluation methods can be coupled with principles of design-based implementation research (DBIR), improvement science (IS), and rapid-cycle evaluation (RCE) to provide relatively quick, low-cost, credible assessments of strategies designed to improve programs, policies, or practices.

Objectives: This article demonstrates the feasibility and benefits of blending DBIR, IS, and RCE practices with embedded randomized controlled trials (RCTs) to improve the pace and efficiency of program improvement.

Research design: This article describes a two-cycle experimental test of staff-designed strategies for improving a workforce development program. Youth enrolled in Year Up’s Professional Training Corps (PTC) programs were randomly assigned to receive “improvement strategies” designed to boost academic success and persistence through the 6-month learning and development (L&D) phase of the program, when participants spend most of their program-related time in courses offered by partner colleges.

Subjects: The study sample includes 317 youth from three PTC program sites.

Measures: The primary outcome measures are completion of the program’s L&D phase and continued college enrollment beyond the L&D phase.

Results: The improvement strategies designed and tested during the study increased program retention through the L&D phase by nearly 10 percentage points and increased college persistence following L&D by 13 percentage points.

Conclusion: Blending DBIR, IS, and RCE principles with a multi-cycle RCT generated highly credible estimates of the efficacy of the tested improvement strategies within a relatively short period (18 months), at modest cost, and with reportedly low burden for program staff.