The Effectiveness of Professional Development on Teacher Best Practices and Student Learning: Conflicting Results From Multimethod/Multisource Data Sets
Authors: Kathleen Bocian, Rosalie Torres, Michael J. Bryant, Cathy Miller, Kimberly Hammond, Michael Rettig, Richard Cardullo

3. Design, Data & Analysis

The effect of the PD was evaluated through a wait-list control, matched-pair design for the delivery of PD to the relevant grade-level teachers within a school building. Four cohorts of schools within the district were designated, and schools across the cohorts were matched on student SES and achievement in mathematics and English Language Arts. These matched schools were then randomly assigned either to begin PD in Years 1 and 3 or to serve as a wait-list control (with PD in Years 2 and 4).
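The randomization step described above can be illustrated with a minimal sketch in Python; the school names, pairings, and seed below are hypothetical placeholders, not the study's actual schools or assignment code.

```python
import random

# Hypothetical matched pairs of schools (matched on SES and prior
# math/ELA achievement); names are placeholders only.
matched_pairs = [
    ("School A", "School B"),
    ("School C", "School D"),
    ("School E", "School F"),
]

random.seed(2007)  # fixed seed so the illustration is reproducible

assignments = {}
for pair in matched_pairs:
    # Within each matched pair, one school is randomly assigned to begin
    # PD immediately (Years 1 and 3); the other serves as the wait-list
    # control and receives PD later (Years 2 and 4).
    early, waitlist = random.sample(pair, 2)
    assignments[early] = "PD in Years 1 and 3"
    assignments[waitlist] = "wait-list control (PD in Years 2 and 4)"

for school, condition in sorted(assignments.items()):
    print(f"{school}: {condition}")
```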

Multi-method, multi-source data were used to evaluate the effect of the PD. Self-report data from teachers included pre- and post-surveys of classroom practices in mathematics. The pre-professional-development surveys (modeled on the Horizon, Inc. surveys used in the NSF evaluation project Looking Inside the Classroom: A Study of K-12 Mathematics and Science Education in the United States; Weiss et al., 2003) included rating scales and frequency indicators of the implementation of particular strategies, alignment with national, state, and district standards, support levels from building administrators and colleagues, and prior experience and training. Post-PD surveys were administered by an external evaluator to teachers after one year's exposure to Math ACTS PD and included a subset of questions from the pre-survey to allow calculation of change scores. Pre-PD survey data were also collected from a control group of teachers who had self-selected not to participate in Math ACTS; their post-surveys, to be collected in January 2008, will allow the effects of the passage of time and of other PD activities on teacher classroom practices to be compared.
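A change score of this kind is simply the post-survey rating minus the pre-survey rating on each item shared between the two instruments; the sketch below illustrates the calculation with hypothetical teacher IDs, item names, and ratings (none drawn from the study's data).

```python
# Hypothetical pre/post survey records keyed by teacher ID; item names and
# scale values are placeholders for the shared subset of survey questions.
pre = {
    "T01": {"uses_manipulatives": 2, "aligns_to_standards": 3},
    "T02": {"uses_manipulatives": 4, "aligns_to_standards": 2},
}
post = {
    "T01": {"uses_manipulatives": 4, "aligns_to_standards": 4},
    "T02": {"uses_manipulatives": 4, "aligns_to_standards": 3},
}

# Change score = post rating minus pre rating on each shared item.
change_scores = {
    teacher: {item: post[teacher][item] - pre[teacher][item]
              for item in pre[teacher] if item in post.get(teacher, {})}
    for teacher in pre if teacher in post
}

print(change_scores)
# e.g. {'T01': {'uses_manipulatives': 2, 'aligns_to_standards': 1}, ...}
```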

In addition to the teacher as a source of self-report data, teacher content and pedagogical content knowledge change was measured through a variety of pre/post tests. Teacher participation indices for the PD were also calculated, based on the professional development hours available and teachers' voluntary participation across those hours. Student-source data included annual, group-administered, criterion-referenced assessments of mathematics and English Language Arts (California Standards Tests); annual, group-administered, criterion-referenced assessments of mathematics problem solving and applications (Math ACTS Extended Standards Tests); and thrice-annual, group-administered, criterion-referenced tests of mathematics standards (district criterion-referenced tests). Both the Math ACTS and the district criterion-referenced tests were screened for content validity and for cultural or gender bias. In addition, students completed an annual Motivational Assessment Survey that examined the domains of academic press, teacher caring and fairness, student persistence, and competence.
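One straightforward form such a participation index could take is the ratio of hours a teacher voluntarily attended to the PD hours available to that teacher; the sketch below assumes that definition, with hypothetical teachers and hour counts.

```python
# Hypothetical attendance records: PD hours offered to each teacher and
# hours the teacher voluntarily attended; values are placeholders.
pd_hours_offered = {"T01": 40, "T02": 40, "T03": 24}
pd_hours_attended = {"T01": 36, "T02": 18, "T03": 24}

# Participation index = hours voluntarily attended / hours available.
participation_index = {
    teacher: pd_hours_attended.get(teacher, 0) / offered
    for teacher, offered in pd_hours_offered.items()
    if offered > 0
}

for teacher, index in participation_index.items():
    print(f"{teacher}: {index:.2f}")
```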

The classroom and teacher served as the data sources for mathematics lesson observations. The observation protocol combined elements of Adding It Up and the California Standards for the Teaching Profession and was developed with NSF RETA consultants to reflect the key components of the Math ACTS PD. Observers were trained to 85% inter-rater reliability in live classroom observations. Both control-group and treatment teachers participated in the observations, with observers blind to the status of each teacher. Observations were conducted annually on a random day within a mutually agreed-upon three-day window. Teacher participation was voluntary and subject to IRB assurances.
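The 85% training criterion suggests an agreement-based statistic; the sketch below illustrates a simple percent-agreement calculation with hypothetical observer ratings (the study's actual reliability statistic is not specified in this section).

```python
# Hypothetical item-level ratings from two observers of the same lesson;
# the rating categories and values are placeholders for protocol items.
observer_1 = ["high", "medium", "high", "low", "medium", "high", "high"]
observer_2 = ["high", "medium", "high", "medium", "medium", "high", "high"]

# Simple percent agreement across items; a training criterion such as the
# study's 85% threshold could be checked against a statistic like this.
agreements = sum(a == b for a, b in zip(observer_1, observer_2))
percent_agreement = agreements / len(observer_1)

print(f"Inter-rater agreement: {percent_agreement:.0%}")  # e.g. 86%
```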