Using P-16 Professional Learning Communities to Facilitate Partnerships and Improve Teaching and Learning
Authors: Judy Monsaas;Sabrina Hessinger

3. Design, Data & Analysis

This study uses a mixed-methods design that combines quantitative and qualitative approaches. Data reported here come from three of the four PRISM regions during the 2006-07 school year. (One region did not provide sufficient data on PLC participation to be included.) The three regions reported 281 PLCs to the evaluation team; of those, 70 had at least one higher education member, typically a faculty member in science and/or mathematics. There were 2,520 K-12 participants and 164 higher education (HE) participants. Different groups of participants are used to address different research questions.

Research Question 1: The measure used for this question was the Inventory of Teaching and Learning (ITAL), developed by two PRISM researchers. The ITAL is a survey designed to measure K-16 faculty members' use of inquiry-based and standards-based teaching and learning practices. Its inquiry-based subscale was designed to parallel the Reformed Teaching Observation Protocol (RTOP), an observation instrument developed by the Arizona Collaborative for Excellence in the Preparation of Teachers (Sawada et al., 2000; Adamson et al., 2002). The ITAL measures teachers' self-assessment of their use of reformed teaching and learning practices. In addition to the scales measuring inquiry-based (reformed) and standards-based practices, a scale assessing traditional practices was also developed. The ITAL was administered to science and mathematics teachers in all 15 PRISM districts via the web in spring 2004, 2005, 2006, and 2007. Principal components analyses were conducted and subscales empirically identified (Ellett & Monsaas, 2006).
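The subscale-identification step above can be sketched in code. The following is a minimal, illustrative example of principal components analysis on simulated Likert-type survey responses; the item structure, sample size, and two latent dimensions are assumptions for illustration and are not the actual ITAL items or PRISM data.

```python
# Hypothetical sketch of principal components analysis used to identify
# survey subscales. Data are simulated, not ITAL responses.
import numpy as np

rng = np.random.default_rng(0)
n_teachers, n_items = 200, 6

# Simulate responses driven by two latent practice dimensions
# (e.g., "inquiry-based" and "traditional") plus item noise.
inquiry = rng.normal(size=(n_teachers, 1))
traditional = rng.normal(size=(n_teachers, 1))
responses = np.hstack([
    inquiry + 0.3 * rng.normal(size=(n_teachers, 3)),      # items 1-3
    traditional + 0.3 * rng.normal(size=(n_teachers, 3)),  # items 4-6
])

# Principal components via the item correlation matrix.
corr = np.corrcoef(responses, rowvar=False)
eigenvalues, eigenvectors = np.linalg.eigh(corr)
order = np.argsort(eigenvalues)[::-1]                # sort descending
eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]

# Items loading strongly on the same component form a candidate subscale.
loadings = eigenvectors * np.sqrt(eigenvalues)
print("Eigenvalues:", np.round(eigenvalues, 2))
print("Loadings (first two components):")
print(np.round(loadings[:, :2], 2))
```

In this simulated case, two components have eigenvalues well above 1 (the usual retention rule) and the items split cleanly into two groups by their loadings, mirroring how empirically derived subscales emerge from such an analysis.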

The ITAL has been administered to all science and mathematics teachers in the 15 PRISM districts, so teachers not engaged in PRISM activities serve as controls for PRISM teachers. Two questions relevant to this research appear at the end of the survey: "Are you currently a participant in a PRISM learning community?" and "Does your learning community have a higher education faculty member in it?" For this research question, only teachers who responded "yes" to the first question were included, and comparisons were made between teachers who responded "yes" and "no" to the second question. Findings reported in this proposal are from spring 2006; the spring 2007 data will be analyzed and included in the presentation at the MSP conference.
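A comparison of the two groups of PLC teachers could take the form of a two-sample test on subscale means, as in the sketch below. The group sizes, scale range, means, and the choice of Welch's t-test are illustrative assumptions; the source does not specify the statistical procedure used.

```python
# Illustrative group comparison: mean ITAL subscale scores for PLC
# teachers with vs. without a higher education (HE) faculty member.
# All numbers below are simulated, not PRISM data.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical inquiry-based subscale scores (1-5 Likert-type scale).
with_he = rng.normal(loc=3.8, scale=0.6, size=70)      # PLC has HE member
without_he = rng.normal(loc=3.5, scale=0.6, size=211)  # PLC has no HE member

# Welch's t statistic (no equal-variance assumption).
n1, n2 = with_he.size, without_he.size
m1, m2 = with_he.mean(), without_he.mean()
v1, v2 = with_he.var(ddof=1), without_he.var(ddof=1)
t_stat = (m1 - m2) / np.sqrt(v1 / n1 + v2 / n2)

print(f"with HE: M={m1:.2f}; without HE: M={m2:.2f}; t={t_stat:.2f}")
```

The same comparison generalizes to each ITAL subscale (inquiry-based, standards-based, traditional), with the "yes"/"no" response to the HE-member question defining the two groups.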

Research Question 2: Currently, school-level data are all that are available to the PRISM evaluation team. Since many of the PLCs consist of teachers at one grade level (e.g., all 6th grade science teachers), it is appropriate to examine changes in test scores at the school level. The Georgia Criterion-Referenced Competency Test is a curriculum-based test administered to students in grades 1-8 in mathematics and grades 3-8 in science. A new set of standards is being phased in for science and mathematics (as well as English and social studies). The new curriculum requires new assessments and cut scores; thus, for grade levels where the new curriculum is being implemented, test data from previous years are not comparable. Preliminary analyses from one region are presented in this paper: results from spring 2005 and spring 2006 are compared for those schools and grade levels that had PLCs and for which pre-post comparisons are appropriate. Future analyses will compare PRISM and non-PRISM schools; propensity score analysis is being used to identify matched non-PRISM districts and schools.
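The matching step mentioned above can be sketched as follows. This is a minimal, self-contained illustration of propensity score matching on simulated school-level data; the covariates, model, and nearest-neighbor matching rule are assumptions for illustration, since the source does not specify the evaluation team's actual procedure.

```python
# Minimal sketch of propensity score matching for identifying comparison
# (non-PRISM) schools. Covariates and participation data are simulated.
import numpy as np

rng = np.random.default_rng(2)
n = 300

# Hypothetical school-level covariates, z-scored:
# enrollment and percent free/reduced-price lunch.
X = rng.normal(size=(n, 2))
# Simulated PRISM participation, related to the covariates.
p_true = 1 / (1 + np.exp(-(0.5 * X[:, 0] - 0.8 * X[:, 1])))
treated = rng.random(n) < p_true

# Estimate each school's propensity score P(PRISM | covariates)
# with a logistic regression fit by gradient descent.
Xb = np.hstack([np.ones((n, 1)), X])  # add intercept column
w = np.zeros(3)
for _ in range(2000):
    p = 1 / (1 + np.exp(-Xb @ w))
    w -= 0.1 * Xb.T @ (p - treated) / n
scores = 1 / (1 + np.exp(-Xb @ w))

# Nearest-neighbor matching: pair each PRISM school with the
# non-PRISM school whose propensity score is closest.
treated_idx = np.where(treated)[0]
control_idx = np.where(~treated)[0]
matches = {
    t: control_idx[np.argmin(np.abs(scores[control_idx] - scores[t]))]
    for t in treated_idx
}
gap = np.mean([abs(scores[t] - scores[c]) for t, c in matches.items()])
print(f"{len(matches)} matched pairs, mean propensity gap = {gap:.3f}")
```

Matching on the propensity score rather than on the raw covariates lets a single scalar balance many school characteristics at once, which is what makes it suited to selecting comparison schools for the planned PRISM vs. non-PRISM analyses.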

Research Question 3: This research question is addressed primarily through qualitative methods. The qualitative component of the evaluation seeks both depth and breadth of understanding of individuals' experiences in learning communities, in other PRISM activities that affect their work in learning communities, and in implementing best practices with their students in science and mathematics courses. We are attempting to understand how various groups of participants draw on the multiple professional cultures from which they come to form the new cultures of the learning communities. These cultures include those of the various institutions of higher education, of departments and colleges within the universities, of the public school districts that are PRISM members in different regions, and of different schools within school districts.

The qualitative design for the PRISM evaluation includes methodological triangulation, data source triangulation, and investigator triangulation. The primary qualitative evaluation tasks are conducted by four external case study evaluators (one for each PRISM region), whose work is supported by four regional internal case study evaluators (Regional Evaluation Liaisons, or RELs). Both external and internal evaluators for each region attend many meetings to capture multiple perspectives on the actions and interactions of regional participants. The state evaluation team developed surveys used across regions so that the regional case studies have comparable sections, although the data within those sections often differ considerably.

Qualitative data are analyzed in multiple ways. Documents are analyzed using content analysis; observational and open-ended response data are analyzed using constant comparative analysis. Data are analyzed within each region because each regional leadership team implements PRISM quite differently. The data are used to generate a regional case study report at the end of each project year.