Driving out the snake oil in education
Beware cognitive science-y sounding snake oil.
Last month, the Federal Trade Commission announced a major settlement against Lumosity, one of many “brain training” outfits to crop up in the past couple of years (tagline: “We transform science into delightful games”). If you use Facebook and have children, you’ve likely seen advertisements for Lumosity and similar companies. Perhaps you or your children have used their products. Maybe you even found them useful.
These companies have been of personal and professional interest to me because, in the wake of Deans for Impact releasing The Science of Learning, many friends and colleagues have pointed me toward them and suggested “they’re doing what you’re doing!” I understand why they think so — after all, these supposed “brain trainers” sprinkle terms from cognitive science into their marketing, and make overt claims about improving working-memory capacity and cognitive functioning (to name only two examples). The Science of Learning focuses on cognition, these companies want to improve cognition, so we must be aligned, right?
Wrong. Organizations such as Lumosity are not part of the cognitive-science revolution. They are the snake-oil salesmen of our day, promising quick fixes that (at best) are devoid of scientific support and (at worst) often contradict what science we do know about how learning and cognition improve.
Or at least, that was true until the FTC cracked down on Lumosity and others for making false and misleading claims about whether their product offerings are truly backed by science. And while most of the press coverage has focused on the hefty fine the FTC imposed upon Lumosity ($2 million), what most interested me were the terms of the injunction that prohibits Lumosity from making dubious claims about its products.
There are two prohibitions in the court order. The first prevents Lumosity from making specific scientific claims about the value of its products — for example, that they help ward off Alzheimer’s disease — unless it has “hard science” (my term) to back up its claims. Hard science means:
Human clinical testing of such product that is sufficient in quality and quantity, based on standards generally accepted by experts in the relevant field, when considered in light of the entire body of relevant and reliable scientific evidence, to substantiate that the representation is true. Such testing shall be (1) randomized, adequately controlled, and blinded to the maximum extent practicable; and (2) be conducted by researchers qualified by training and experience to conduct such testing.
The second prohibition blocks Lumosity from making gentler claims about the general benefits or efficacy of its products unless they have “soft science” (again, my term) to support them. This means Lumosity must be able to cite “tests, analyses, research, or studies (1) that have been conducted and evaluated in an objective manner by qualified persons; [and] (2) that are generally accepted in the profession to yield accurate and reliable results.”
Strong medicine, and while the court’s order applies only to Lumosity, here’s an interesting thought exercise: What if these prohibitions were expanded to apply to all products and services offered to teachers under the banner of “professional development”? How many would survive an analysis applying these rules of hard and soft science? Might we drive out cognitive science-y snake oil and a great deal of other crap that’s peddled to teachers, parents and kids as being “research based”?
This may not be a complete fantasy. While the FTC’s reach is limited, under the Every Student Succeeds Act, the new federal education law that will go into effect next year, states are nudged in the direction of using “evidence-based” strategies and interventions in support of various education policies. ESSA identifies four tiers of evidence: “strong,” “moderate,” “promising” and activities that have a research-based rationale but lack existing empirical support. The fourth tier may appear to be a loophole, but the law further requires “ongoing efforts to examine the effects” of any intervention falling into this category on student or other relevant outcomes. In other words, put up or shut up.
As Martin West from Harvard recently observed, the emphasis on evidence and outcomes in ESSA “hold[s] the potential to create and provide resources to sustain a new model for decision-making within state education agencies and school districts—a model that benefits students and taxpayers and, over time, enhances our knowledge of what works in education.” I agree, but making that potential a reality will require a much closer nexus between institutions of higher education (which possess research expertise) and practicing teachers and school leaders. I have heard many faculty in colleges of education decry the largely evidence-free professional development that’s offered to teachers, but complaining about the competition is not a strategy.
We can fix the sorry state of professional development today. As we move into a new era of education policy, academic researchers should work closely with practicing teachers to develop meaningful professional development centered around questions educators struggle with in the classroom. Let’s build a useful evidence base — and drive out the snake oil.