Old questions, new opportunities in ed prep
In 2007, when I arrived on the campus of Western Oregon University as a newly minted assistant professor, I kept hearing stories about a remarkable man named Del Schalock. A professor at Western’s Teaching Research Institute for 40 years and a beloved figure on campus, Del had just passed away when I arrived. He was the father of the Teacher Work Sample, a performance assessment and predecessor to the edTPA, which was used in Oregon starting in the 1980s. He was carrying out studies of teacher effectiveness on student learning in the 1970s, when I was still in diapers. He was also ahead of his time: as recently as the late 1990s, a flagship teacher preparation journal rejected his work as being “too researchy.”
But his ideas were noticed. When Lee Shulman accepted his Lifetime Achievement Award from the American Association of Colleges for Teacher Education in 2008, and humbly described the issues he’d missed out on during his distinguished career, he expressed regret that he hadn’t listened to the guy who so passionately believed that teachers should be held responsible for student learning.
I’m thinking of Del because today Deans for Impact is releasing our policy agenda, laying out what we see as the future for outcomes-based teacher preparation—a goal as simple to state as it is complex to execute. Put simply, our members want data on how their graduates are performing as teachers and how they are affecting student learning. They want to use that information to improve their programs. They are willing to be transparent about what they learn, and to be held accountable for their results.
The complexity comes in the details. Right now, our members are doing all they can to get performance data on their graduates and those graduates’ students, and they are having a hard go of it. Fewer than a third of our programs can access such data. Some cannot even find out where their graduates are teaching. There is little comparability in data across programs, and even less communication across state lines.
Our first priority, then, is this urgent need for data. Our members are eager to work with state policymakers to ensure that educator preparation programs can access useful, timely data, of the appropriate grain size, for use in both program improvement and accountability. Specifically, that means information on where graduates find jobs, how many stay in teaching, how well their students perform, what kind of professional evaluations they get from their superiors, and feedback from the graduates themselves. As a former educator myself, I also see this as a fairness issue. It hardly seems reasonable to attack educator-preparation programs for their performance while denying them access to the data they need to assess that performance.
The second step of our policy agenda builds on the first: encouraging states to recognize and support teacher-preparation programs that voluntarily embrace data-driven practices and accountability for outcomes. We see this as a chance for states to create the equivalent of “LEED Green Building Certification” for teacher-preparation programs. Not all programs will choose to pursue this route, but those that are willing to step up will reap the benefits. This proposal is made possible by a provision of the recently passed Every Student Succeeds Act, which creates the option for states to pursue such policies.
Anybody who’s been following the policy and media attention to teacher preparation in recent years knows that change is coming. It’s overdue: teacher educators have been confronting questions of teacher effectiveness and student learning for decades. Now comes a new opportunity to get serious about building the systems that link teacher preparation to performance and, ultimately, to student learning.
I wish I could sit down with Del Schalock today and ask him how we’re doing, but I can’t. What I can do is support the members of Deans for Impact as they map out a strategy for transforming the teaching profession. We hope you’ll join us.