40th anniversary special article series

rigorous evaluation – then & now

In October of 1977 (just two months after Metis’s incorporation), the Joint Dissemination Review Panel (funded by the U.S. Department of Health, Education, and Welfare; the National Institute of Education; and the U.S. Office of Education) published the Ideabook (G. Kasten Tallmadge, RMC Research Corporation, Mountain View, California). The Ideabook was prepared to give practitioners guidance on gathering “convincing” evidence about the effectiveness of educational innovations – many of which were supported by Title I of the Elementary and Secondary Education Act (ESEA). Clearly, what passed for “convincing” in those days would today fall far short of the rigorous standards promulgated by the What Works Clearinghouse (WWC), an initiative of the U.S. Department of Education’s Institute of Education Sciences.

Forty years ago, norm-referenced evaluations of Title I interventions relied almost exclusively on pre- and post-intervention comparisons of treated children’s percentile ranks. For example, if a treated group’s average reading pretest performance corresponded to the 20th percentile and the same group’s average posttest performance corresponded to the 30th percentile, the group was considered to have made a 10-percentile-rank improvement attributable to the intervention. Assuming the actual calculations used equal-interval scales (e.g., normal curve equivalents), the mean differences were then tested for statistical significance; if a significant mean difference exceeded a particular threshold (a third of a standard deviation was the most commonly used “rule of thumb”), the intervention was considered educationally meaningful.
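As a concrete sketch of the arithmetic described above (the function name and example percentiles are illustrative), normal curve equivalents (NCEs) place percentile ranks on an equal-interval scale with mean 50 and standard deviation 21.06, after which a pretest-to-posttest gain can be expressed in standard deviation units:

```python
from statistics import NormalDist

# Normal Curve Equivalents map percentile ranks onto an
# equal-interval scale with mean 50 and standard deviation 21.06.
NCE_SD = 21.06

def percentile_to_nce(percentile: float) -> float:
    """Convert a percentile rank (0-100, exclusive) to an NCE score."""
    z = NormalDist().inv_cdf(percentile / 100)  # z-score for that percentile
    return 50 + NCE_SD * z

pre_nce = percentile_to_nce(20)   # pretest at the 20th percentile
post_nce = percentile_to_nce(30)  # posttest at the 30th percentile
gain_in_sd = (post_nce - pre_nce) / NCE_SD
print(f"pre={pre_nce:.1f} NCE, post={post_nce:.1f} NCE, gain={gain_in_sd:.2f} SD")
```

On this example, the apparent 10-percentile gain works out to roughly 0.32 standard deviations – just under the one-third-of-a-standard-deviation rule of thumb.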

Today’s standards for evidence hinge on more rigorous methodologies for determining what would have happened in the absence of treatment. As such, most evidence-reviewing bodies (such as the WWC) focus on the adequacy of comparison groups for estimating treatment effects – in particular, on establishing that the comparison group looked just like the “treated” group before the intervention. The highest evidence standards are reserved for methods that give study participants an equal chance of being treated or not treated – Randomized Controlled Trials (RCTs). However, other evaluation designs can meet evidence standards, albeit with reservations, primarily because they cannot control for unobserved factors such as motivation and self-selection bias. These other designs (as well as poorly implemented RCTs and RCTs with high attrition) require a demonstration of baseline equivalence – that treatment and comparison groups were demonstrably similar on observable characteristics prior to the intervention.
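A minimal sketch of what a baseline-equivalence check involves: compute a standardized mean difference on a pre-intervention measure and compare it against the WWC’s published thresholds (differences of at most 0.05 standard deviations satisfy the requirement outright; differences between 0.05 and 0.25 require statistical adjustment; larger differences fail). The function names and sample statistics below are illustrative assumptions, not any particular study’s data:

```python
from math import sqrt

def pooled_sd(sd_t: float, n_t: int, sd_c: float, n_c: int) -> float:
    """Pooled standard deviation of treatment and comparison groups."""
    return sqrt(((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2)
                / (n_t + n_c - 2))

def baseline_smd(mean_t, sd_t, n_t, mean_c, sd_c, n_c) -> float:
    """Standardized mean difference on a baseline (pre-intervention) measure."""
    return (mean_t - mean_c) / pooled_sd(sd_t, n_t, sd_c, n_c)

def wwc_baseline_status(smd: float) -> str:
    """Apply the WWC baseline-equivalence thresholds to an absolute SMD."""
    if abs(smd) <= 0.05:
        return "satisfied"
    if abs(smd) <= 0.25:
        return "adjustment required"
    return "not satisfied"

# Illustrative baseline statistics: groups 0.10 SD apart at pretest.
smd = baseline_smd(51.0, 10.0, 200, 50.0, 10.0, 200)
print(wwc_baseline_status(smd))
```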

Because of these more rigorous definitions of evidence, emphasis (and financial support) has shifted toward implementing practices that meet high standards of evidence in support of social services reform efforts – especially when such practices are implemented with a high degree of fidelity. And why not? If they are available and situationally appropriate, “proven” practices should always be preferred over unproven ones, and the body of evidence that distinguishes them from the unproven variety must be as rigorous as possible.

Directed by Dr. Jing Zhu, one of only 300 certified WWC reviewers nationally, Metis currently works with diverse clients to provide rigorous evaluation and support services, helping them to design and implement carefully controlled research designs that can establish an evidence base for their initiatives.

client spotlights

Arkansas Community Colleges

In 2015, Metis conducted an evaluation of the Career Pathways Initiative (CPI) to determine what student success, social, and economic benefits accrue to individuals (and the state) from an investment of federal Temporary Assistance for Needy Families (TANF) dollars. Administered statewide by the Arkansas Department of Higher Education, CPI provides case management, education, and training to low-income parents to prepare them for employment in higher-wage industries by helping them acquire certificates and degrees. Using a propensity score matching (PSM) procedure, Metis identified a comparison group of TANF-recipient college students equivalent to CPI participants. Analyses comparing CPI participants’ outcomes to those of the matched comparison students showed that the program had positive impacts on academic and economic outcomes, as well as a positive return on investment (ROI) for individuals and the state. Metis is currently collecting qualitative data to contextualize the quantitative findings. More information can be found here.
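PSM procedures vary; a minimal sketch of one common variant – greedy 1:1 nearest-neighbor matching on already-estimated propensity scores, without replacement and with a caliper – might look like the following. The unit IDs, scores, and the 0.05 caliper are illustrative assumptions, not Metis’s actual specification:

```python
def match_nearest_neighbor(treated: dict, controls: dict,
                           caliper: float = 0.05) -> list:
    """Greedy 1:1 nearest-neighbor matching on propensity scores.

    treated, controls: dicts mapping unit id -> estimated propensity score.
    Returns (treated_id, control_id) pairs; each control is used at most
    once, and candidate matches farther apart than the caliper are dropped.
    """
    available = dict(controls)  # controls still unmatched
    pairs = []
    # Process treated units in score order (a common greedy heuristic).
    for t_id, t_score in sorted(treated.items(), key=lambda kv: kv[1]):
        if not available:
            break
        best = min(available, key=lambda c_id: abs(available[c_id] - t_score))
        if abs(available[best] - t_score) <= caliper:
            pairs.append((t_id, best))
            del available[best]  # match without replacement
    return pairs

treated = {"t1": 0.62, "t2": 0.35, "t3": 0.80}
controls = {"c1": 0.33, "c2": 0.60, "c3": 0.45, "c4": 0.20}
print(match_nearest_neighbor(treated, controls))
```

Greedy matching is order-dependent and can discard treated units (here, the unit at 0.80 finds no control within the caliper); “optimal” propensity score matching, mentioned in the CUNY spotlight below, instead solves the pairing globally to minimize total score distance.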

Single Stop USA

Single Stop USA is dedicated to reducing poverty and helping low-income individuals achieve economic security. Recognizing that financial need and a lack of critical resources, such as transportation and child care, can undermine students’ success in college, Single Stop launched its Community College Initiative in 2009. Metis is currently conducting a rigorous quasi-experimental impact study examining Single Stop’s impact on the academic achievement of students at the Community College of Philadelphia, which serves over 28,000 students. Using a well-matched comparison group design based on PSM techniques, the impact analyses of near-term outcomes found that Single Stop participants persist from semester to semester at a higher rate, complete a higher ratio of attempted credits, and earn higher GPAs than similarly situated students. The same methodology will be employed for the impact analyses of intermediate- and long-term outcomes, which will be included in a final report due out in September 2018. For more details, please see the report.

The Inner Resilience Program

The Inner Resilience Program was established in 2002 to offer transformative professional development designed to nurture the social, emotional, and inner lives of teachers and students in the wake of 9/11. In spring 2006, the program received funding from the Fetzer Institute to conduct research examining the program’s impact on the well-being of teachers and their students. Metis conducted a cluster randomized controlled trial in which 57 teachers (and their students) in Grades 3–5 from NYC public schools were randomly assigned to treatment and control groups. Teachers in the treatment group participated in the Inner Resilience Program during the 2007–2008 school year. Treatment included a series of weekly yoga classes, monthly Nurturing the Inner Life classes, a weekend residential retreat, and training and support in the use of a curriculum module for students. The results indicated that the program had significant positive impacts on teacher wellness, including reduced stress levels, increased attention and mindfulness, and greater perceived relational trust among treatment teachers. In addition, the program had a significant positive impact on reducing 3rd- and 4th-grade students’ frustration levels. For more details, please see the report.

City University of New York

The City University of New York’s Accelerated Study in Associate Programs (CUNY ASAP) strives to help low-income students complete associate degrees on time and go on to jobs with career potential or transfer to four-year colleges. Metis was asked to conduct a peer review of the analytic methods and conclusions presented in CUNY’s initial draft evaluation. Metis provided technical assistance to CUNY on the use of a recently developed, more rigorous and appropriate matching methodology (optimal propensity score matching). By applying this technique, Metis and CUNY established a closely matched comparison group that enabled CUNY to better discern and defend the program’s impact on ASAP students with a high level of internal validity. Based on post-matching outcome analyses, ASAP students had significantly higher graduation and credit accumulation rates than their comparison counterparts. More information can be found here.