Designing a Cost-Effective Field Trial for an Intervention in a Messy System With Multiple, Interacting Variables
Authors: Pamela Mills, Faith Muirhead, Jeanne Weiler, Madeleine Long, William Sweeney

3. Design, Data & Analysis

As we scaled up the summer experience to involve more classes, more teachers, and more students on the college campuses (the "full treatment" model), the project responded to the various mechanisms in place for formative evaluation. Using standard qualitative methods (ethnography, focus groups, self-report survey data, videotaping, clinical interviews) coupled with Regents performance data, we modified the summer program each year. On the one hand, these modifications precluded a systematic study of particular variables. On the other hand, the collective findings over the past three summers have enabled us to narrow the possible variables to a smaller subset of potentially powerful interventions that can be transported to the high school classroom.

First, the quantitative data suggested that the scale-up of the program produced a reduction in the overall passing rates across all of the sections to approximately 67-70%. While lower than the 90-100% observed in the pilot chemistry classes, these percentages have been consistent for the past three years, suggesting that the intervention may have reached a ceiling. These numbers are still dramatically better than those usually observed in the standard summer school. Thus the full treatment model produces a substantial effect that exceeds the historic norm by a considerable amount. School principals and administrators take notice of this outcome but rightly recognize that the full treatment model is expensive and impossible to sustain or adapt to the academic year.

Second, the qualitative data produced a particularly surprising result. The original hypothesis of the project was that a change in the quality of instruction (teacher quality) would be the primary factor producing change in student performance. The other variables included in the summer program (tutoring, repeated administration of the exam, the culture of success) would be contributing factors, but it would be the change in teaching that produced the most dramatic effect. However, student interviews, focus groups, in-class observations, ethnographic studies, item analyses, and clinical interviews revealed two strong themes.

Over and over again, the tutors were identified as key to student success. With minor exceptions, tutors were the first factor mentioned by students as contributing to their success. However, there were many tutors in a classroom, and not all were equally qualified. Despite the variability in the preparation and expertise of the tutors, tutoring was still singled out by the students as the single most important variable.

Second to tutoring was teacher affect, in particular the teacher's apparent "belief in" the student. The program fostered an environment in which all participants, beginning with the teachers, including the tutors, and eventually including the students themselves, believed the students could succeed. Outside observers from the NYC Department of Education identified this climate of success as particularly noteworthy and different in the summer program; the Education Update (August 2007) reported that "all students are engaged." Notably, the students reported that this climate was absent during the academic year.

Other features identified by students and tutors as critical were the weekly administration of the practice exam, a laboratory that was fun and built confidence (but contributed very little to student success on the exam), and the location on a college campus. The college campus conferred a serious and scholarly atmosphere on the classroom setting that was also absent during the academic year.

All the variables highlighted by students, except for the location on the college campus, had the potential to be packaged into a relatively inexpensive single intervention that could be integrated into the academic year. To explore the possibility of achieving substantial results without the college campus, we ran a pilot summer school on two high school campuses. Both summer schools were unwilling to consider any dramatic change in their daily structure but were willing to integrate some aspects of the MSPinNYC summer school. Thus we were able to include the in-class tutors and the weekly administration of the Regents exams. The quantitative results (40% passing rates) were lower than those on the college campus but substantially above the historic norm. The schools took notice.

The college campus is clearly a variable that cannot be transported to the high school setting. Removing it from consideration is a key step toward producing a sustainable reform. Heartened by the summer 2007 success in the high schools, we have begun to design a field trial, modified for the multi-variable situation, that we believe the data support has the potential to reproduce the summer outcomes. In the meantime, the evaluation and research team continues to explore the factors responsible for the success (and failure) of the summer experiences.