In the era of NCLB and data-driven decision making, those of us in the project-based world often face questions about how we measure what we do. It’s one of the reasons I love Nebraska’s STARS project — because it’s a classroom-based, project-based statewide assessment project, and it’s NCLB compliant.
I have long maintained that the bedrock premise of data-driven decision making is that we use good data. And I believe that the best data we have is the work that the students do.
In the School District of Philadelphia, high school students take benchmark tests in English, Math and Science every quarter. They are more formative than summative; teachers use them to target remediation, and schools use them to assess learning across different classes. At SLA, our benchmarks are project-based, with English, History, Science, Math and Spanish all giving culminating benchmark projects every quarter. These projects are assessed on a school-wide rubric that took shape after several days of collaboration and discussion in our summer workshop. We came up with five categories that we felt applied across the disciplines and represented, broadly, the kinds of things we would want a student to demonstrate in any project. Creating a flexible, interdisciplinary rubric allows us to look at student data and student work with a common language across the disciplines.
With that, here is what the rubric looks like:
| | Design | Knowledge | Application | Presentation | Process |
|---|---|---|---|---|---|
| Exceeds Expectations (18 – 20) | | | | | |
| Meets Expectations (15 – 17) | | | | | |
| Approaches Expectations (12 – 14) | | | | | |
| Does Not Meet Expectations (11 and under) | | | | | |
And teachers fill in the spaces of the rubric with project- and subject-specific language so that students know how those five metrics apply to any given project.
And tomorrow, we’re sitting down as a faculty to examine our first benchmarks — I spent some time crunching the numbers so that we can look across the disciplines to see whether any trends emerge. Are students performing better at any one part of the process than another? Within a discipline, what conclusions can we draw from the average scores? What conclusions can we draw from the spread of total scores? How should any of these conclusions guide us as the students finish up their second benchmarks?
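For anyone curious about the mechanics, here is a rough sketch (in Python) of the kind of number-crunching I mean. The disciplines, categories, and scores below are made up for illustration (this is not our actual data), but it shows one way to compute an average and a spread for each rubric category across disciplines.

```python
# A minimal sketch of the benchmark analysis described above.
# The scores below are invented for illustration; real scores would come from our gradebooks.
import statistics

# Each record: (discipline, rubric category, score out of 20)
scores = [
    ("English", "Design", 17), ("English", "Knowledge", 15),
    ("Math", "Design", 14), ("Math", "Knowledge", 16),
    ("Science", "Design", 18), ("Science", "Knowledge", 13),
]

# Group scores by rubric category across all disciplines
by_category = {}
for discipline, category, score in scores:
    by_category.setdefault(category, []).append(score)

# Report the average and the spread (standard deviation) for each category
for category, values in sorted(by_category.items()):
    mean = statistics.mean(values)
    spread = statistics.stdev(values) if len(values) > 1 else 0.0
    print(f"{category}: average {mean:.1f}, spread {spread:.1f}")
```

The same grouping could just as easily be done by discipline instead of by category, which is how we can ask both kinds of questions of the same data set.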
A group of us were looking informally at our data earlier, and it’s really interesting to look at what we can — and can’t — conclude by examining our benchmark projects under this kind of microscope. I think we can draw some conclusions about where we are in our process so far, but I do think it’s equally important to remember what conclusions we cannot draw. This, like any one data set, doesn’t tell the whole story of our students’ progress so far. And while data-driven decision making is important in our schools — and even though I believe that our project-based data tells a much richer story than other forms of data — I think we would miss a major part of the story if we assumed that this data was a complete picture of SLA.
So our goal for tomorrow’s staff meeting is to use this data — and the conversations we can have using the data — as a springboard into a larger conversation about our expectations, our progress, and how we are moving forward in our classes in our first year. It’s exciting to use the data to spawn the larger discussion, and it’s exciting to think about where this on-going discussion will lead us.