Administrative data, or data routinely collected over the course of an agency’s programmatic activities (Yampolskaya, 2018), have enjoyed a surge in popularity among social science researchers in the past few years. This is no less true in child welfare, where administrative data offer a comprehensive, longitudinal, population-level source of information for identifying risk and protective factors and analyzing outcomes, one that is not subject to attrition, social desirability bias, or the underestimation common in parents’ self-reports (Brownell & Jutte, 2013). Given these potential benefits, the purpose of this note is to describe guidelines for using administrative data in child welfare research. The guidelines are grounded in measurement theory as well as in lessons we learned from conducting research with administrative data, and they pertain to the type of research for which administrative data are typically used (that is, tracking research to oversee performance). Measurement is not a widely discussed topic in the literature on administrative data, and greater attention to measurement theory and its concepts (the conceptualization of constructs, reliability, and validity evidence) could be of great value to researchers hoping to make the most of this data source. We illustrate these guidelines with a case example of an evaluative study of differential response (DR), known in British Columbia, Canada, as “family development response.” We begin by describing DR as an example construct because it highlights the lack of clarity that can surround conceptualization: DR has been the subject of substantial debate in child welfare practice research (Hughes, Rycus, Saunders-Adams, Hughes, & Hughes, 2013; Piper, 2017).