
Voices From The Field


MSPnet Blog: “Research and practice 2: Data driven to distraction”


posted October 16, 2015 – by Brian Drayton

Data, data, data.  Everybody wants more and better, it seems.  A lot of educational research is aimed at using all this data to change… what?  At this point, it’s hard to see how most of the data has resulted in research that enables improved practice.

In an August post (here), I asked:

 Is the data being collected where you are, improving the students’ STEM learning experience?  That is, if John or Judy take a test this year, are the results going to help them next year?  Is it the “system” that is being measured, by sampling the anonymous flow of student achievers every few months, or is STEM education — particular people’s STEM education — getting better?

There were some very interesting comments, but Louise Wilson gave the most direct answer to the question (I excerpt):

Generally, no. The tests are taken, and feedback is within a day in our school, although teachers are not generally trained on how to find the results. But students are assigned to classes without regard to their current skill level (so students in grade level equivalents of 2nd to 11th grade can be found in the same 10th grade math course) and everyone is expected to make the best of it. Nobody gets a better experience from the tests, because sorting students according to current skill level is apparently bad.

Such comments, echoed by many voices in many forums, always raise the question: who are the data for?  And this can’t be answered by naming some constituency of possible users or beneficiaries.  After the past couple of decades, in which we have developed data collection, instrument design, and other technologies to a high level of sophistication, we have to look at who’s actually using the data, and for what ends.

Before I go on, it’s worth noting that most of the headlines, commission reports, regulations, and expenditures relate to data collected on classrooms by people outside classrooms.  These data then exist as a resource accessible to varying degrees, mostly to agents (people) outside classrooms, who make evaluations, allocations, or other decisions about the insides. Teachers have some access, sometimes, but the trend is to create systems which operate on data rather than on judgment. (The algorithms, rubrics, and other mechanisms are a crude facsimile of an “expert system,” that is, an intelligent software system designed to make inferences and decisions about, say, medical diagnosis or the management of inventories, using knowledge “captured” from actual experts in the field.)

It can be better than this, of course, if the school or district culture is one of teacher learning and engagement for shared pedagogical purposes, rather than external demands. In a research project on inquiry-based science teaching that Joni Falk and I conducted some years ago (see a paper here), we watched as the first Massachusetts high-stakes tests were implemented in two of our study districts, which happened to be adjacent towns.  In one, a long-term movement towards an inquiry approach was largely subverted by anxiety about test scores.  Next door, the district interpreted the testing reform in the light of their long-standing commitment to inquiry, and exerted a lot of ingenuity in seeking ways to use the new regime to reinforce their system, a response made  possible by the system-wide vision of inquiry teaching and learning.

Larry Cuban has posted an interesting reflection on “Data driven teaching practices,” which I recommend.  Cuban characterizes the hope behind data-driven practice in positive terms:

data-driven instruction–a way of making teaching less subjective, more objective, less experience-based, more scientific. Ultimately, a reform that will make teaching systematic and effective. Standardized test scores, dropout figures, percentages of non-native speakers proficient in English–are collected, disaggregated by ethnicity and school grade, and analyzed. Then with access to data warehouses, staff can obtain electronic packets of student performance data that can be used to make instructional decisions to increase academic performance. Data-driven instruction, advocates say, is scientific and consistent with how successful businesses have used data for decades in making decisions that increased their productivity.

He then asks, “What’s the evidence that this is making a difference?” and reviews a few studies (there do not seem to be very many) that look at test scores not as a value in themselves but as tools for instructional improvement.   Despite the espoused aims of the assessment systems, teachers (and others) tend to experience them as an external process with which they must comply — they don’t tend to see the data as having actual instructional value.  As the volume and variety of data collection has increased, even a school culture with a clear and committed pedagogical vision may have trouble remembering what it values, and “filtering” the “reforms” to harmonize with those values.

Larry concludes:

Thus far, then, not an enviable research record on data-driven (or informed) decision-making either being linked to classroom practices and student outcomes.

Numbers may be facts. Numbers may be objective. Numbers may smell scientific. But numbers have to be interpreted by those who do the daily work of classroom teaching. Data-driven instruction may be a worthwhile reform but as now driving evidence-based educational practice linked to student achievement, rhetoric notwithstanding, it is not there yet.

 

I still want to hear (and I think many people would be glad to hear) from people who are using data to improve actual practice, rather than to drive up test scores (or drive them down, as seems to be the case with some recent assessments).   But I also hope to encourage stories about other ways people are using research — not necessarily testing results — to improve practice.  Stay tuned — but share stories, too. (There are about ten thousand of you out there, so there must be 10 or 20 good stories we should hear!)


Blog comments have been archived; commenting is no longer available.
This blog post has 1 comment.

use of data in the ISEP MSP

posted by: Joseph Gardella on 10/18/2015 7:48 am

Hello Brian: always thankful for your getting these discussions off the ground here.

I think in ISEP we have learned from data every year and made changes. From the perspective of the Teacher PD using interdisciplinary research, we've modified our approach to add some direct case-study work for teachers on the evolution of interdisciplinary fields, so teachers have some concrete examples. We are measuring their understanding of interdisciplinary STEM education and research and can see their weaknesses, even if they are stars in implementing their work in the classroom.

With respect to data on impacts and participation of middle/high school students, we've made changes in outreach and parent-involvement strategies to bring more students into active work in and out of the classroom and into our informal components during our MSP project (we are in year 5 now).

We've also been working to synthesize the data and present it in ways that help our political and educational partners and leaders understand our impact according to their ideas of urban school reform (no comment on that), as part of a sustainability plan for the future of ISEP. We are trying to get more buy-in to expand ISEP in the future.

So, I still am a naive believer in using data all the time, and our leadership team has been focused on that pretty closely; the evaluations from the outside team at Miami of Ohio are well integrated into our annual reporting, our meetings with our outside Advisory Committee, and then into our recent site visit....

I think if you have a broad and deep partnership that looks to find its weaknesses... then the data is really effective for internal revision of the work....

Outsiders always clamor for data as if we don't have it. When I give them access to our 300-page annual reports and tell them I am willing (as all the leadership are) to review the data in light of their focus on test scores only from "failing schools," there is a silence. Some then get on our bandwagon; some just ignore us, because they don't really want data that might question their preferred solution to "fixing failing schools."

As you know, I am cynical about all aspects of that crap and how it deters us from doing serious long-term work, but it is the environment (in my case) I've (or we've) chosen to work in... so suck it up, Gardella, and shut up on that.

Ok, so that's a Sunday morning coffee-fueled response. Hope that gives you and others some examples of where we've clearly benefited. However, your main point comes through clearly to me, for sure.
Joe Gardella