Supporting the Science of Learning
When I was a kid, my mom baked a lot – cookies, cakes, muffins, all kinds of yummy treats. And she usually used her own recipes, written by hand on index cards. The index cards were marked with scratched-out lines and re-writes. Through trial and error, and with a thorough knowledge of the ingredients and the desired results, my mom perfected her recipes.
I can attest that this system worked very, very well.
Now, imagine what would have happened if my mom never had the chance to see the results of her recipes. Without tasting the cakes and cookies, she would have no way of knowing if the ingredients and processes she used were producing what she intended. All she could have done was keep on baking, using those same recipes, and hoping that what she was turning out was right.
Right now, most educator-preparation programs have a fairly standard recipe for educating future teachers: foundations of education, learning theory, general pedagogy, content pedagogy, teaching diverse learners, clinical experience, and perhaps some instruction in special education or education law. But does this produce teachers who are ready, on their first day, to teach to the best of their ability and bring out the best in their students?
We just don’t know.
Educator-preparation programs are like cooks who never get to see or taste the cakes they bake. They can’t get data on how their graduates are performing in classrooms. Some of them can’t even get data on where their graduates are teaching.
In September, Deans for Impact released The Science of Learning, a report that summarizes what we know about how students learn and what that knowledge means for teachers. To stick with my baking analogy, we’ve compiled the ingredients and written a recipe teacher-educators can use to educate future teachers on how students learn. We’re also piloting the implementation of this “recipe” at several Deans for Impact member-led programs through our Design for Practice Network, which aims to build teacher-candidate mastery of cognitive-science principles for learning.
More recently, Deans for Impact released From Chaos to Coherence, a policy brief in which we call for greater access to data on the performance of graduates of educator-preparation programs, and the performance of those graduates’ students. We also call for new program approval routes based on data access and data use for continuous improvement.
So what do these two things have to do with each other? How can changes in policy support the work of The Science of Learning?
Well, until educator-preparation programs can see their results, until they really know how their graduates are doing in classrooms and how that performance relates to their preparation practices, we’re stuck.
The policy implications of The Science of Learning are straightforward: to understand how The Science of Learning affects teaching and student learning, educator-preparation programs need data on how their graduates, and their graduates’ students, are performing. Educator-preparation programs, in other words, need to see how the cake comes out so they can tell if they have the right recipe.
Policy cannot produce great educator-preparation programs, any more than a well-stocked, high-end kitchen can produce great cooks. What policy can do is create the conditions that allow educator-preparation programs to align their work with what produces the best results in classrooms. For this to happen, we need the data, and we need it now. The fact that educator-preparation programs cannot assess the effectiveness of their graduates hamstrings their efforts to improve.
The Science of Learning is based on solid research, but questions remain: how does research on learning translate to better teaching, and to improved student learning? Absent the data, it’s pretty hard to answer these questions.
Data should be available to correlate educator-preparation practices with teacher performance. If states are not currently providing these data, they should make it a priority to do so, particularly given the importance of teacher quality for student learning. They should also recognize programs that are accessing and using these data to get better at educating teachers.
If you can’t taste the cake, how can you know if the recipe is right? Sounds simple…but for some reason, we still haven’t figured it out.