In 2010, Jonah Lehrer wrote a widely read New Yorker piece called "The Truth Wears Off." It began with a provocative question: "Is there something wrong with the scientific method?"
However, it's not just science that's in trouble. In the wake of Lehrer's recent travails, something seems wrong with science writing, too—big, bold claims seem unable to weather scrutiny. In what follows, I'll treat the problems facing science and science writing as parallel stories.
So, what is the "decline effect"?
According to Lehrer, the phrase was coined in the 1930s when a Duke psychologist thought he had discovered extrasensory perception (E.S.P.). However, his proof—a student able to predict the identity of hidden cards far better than chance—began to "decline" back to the level statistics would predict.
At the time, this was interpreted as a drop in the student's actual extrasensory powers; we now tend to see the "decline effect" operating on the data rather than on the phenomena themselves. That is, E.S.P. didn't decline over time; it never existed in the first place. The early evidence for it was simply smoothed out by regression to the mean.
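Regression to the mean is easy to see in a toy simulation. The sketch below is my own illustration, not anything from Lehrer's piece or the original Duke study: it invents a card-guessing task where every "hit" is pure chance, screens a pool of students, and retests the standout performer. The retest scores cluster back around the chance expectation, which is exactly the "decline" the early researchers thought they were seeing in the student's powers.

```python
import random

random.seed(7)

def guess_run(n_trials=100, p_chance=0.2):
    """Score of a guesser with no real ability: each guess hits by chance alone."""
    return sum(random.random() < p_chance for _ in range(n_trials))

# Screen 200 students and keep the standout, as an early E.S.P. study might.
scores = [guess_run() for _ in range(200)]
best = max(scores)

# Retest the "gifted" student. With no real ability, follow-up scores
# cluster around the chance expectation (20 hits), not the standout score.
retests = [guess_run() for _ in range(50)]
mean_retest = sum(retests) / len(retests)

print(f"chance expectation: 20, best initial score: {best}, mean retest: {mean_retest:.1f}")
```

The selection step is doing all the work: picking the best of 200 chance performers guarantees an impressive-looking initial score, so the only direction the retests can go is down.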
This may seem obvious, but it's actually where things get interesting.
Jonathan Schooler—a psychologist at UC-Santa Barbara—believes we can (and must) use science itself to figure out why well-established results are failing the test of replicability. He's proposed an open-access database for all scientific results, not just the flashy, positive results that end up in journals.
To be published in a journal like Nature, a result has to be "positive" rather than "negative." Schooler is a bit hazy on the distinction, but Lehrer clarifies it: journals don't want "null results," especially ones that disconfirm "exciting" ideas. To get published, you either need a sexy idea of your own or at least some "confirming data" for someone else's.
Though this makes a certain amount of sense—why not reward ingenuity?—both Lehrer and Schooler think it blocks the road to inquiry. By encouraging overblown hypotheses and silencing subsequent evidence against them, we ignore how messy and uncertain "the truth" really is.
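The mechanism Lehrer and Schooler describe can also be sketched in code. This is my own toy model with invented numbers, not theirs: many studies of a genuinely small effect are run, a journal "publishes" only the flashy results above a threshold, and unfiltered replications then make the published literature look like it is declining toward the (much smaller) truth.

```python
import random

random.seed(0)

TRUE_EFFECT = 0.1   # the small real effect, in arbitrary units
NOISE_SD = 0.5      # sampling noise per study

def run_study():
    """Observed effect = true effect + sampling noise."""
    return random.gauss(TRUE_EFFECT, NOISE_SD)

# A journal that only accepts "positive" results above a flashy threshold.
THRESHOLD = 0.5
published = [e for e in (run_study() for _ in range(2000)) if e > THRESHOLD]

# Replications of published findings are fresh studies, with no filter applied.
replications = [run_study() for _ in published]

mean_pub = sum(published) / len(published)
mean_rep = sum(replications) / len(replications)
print(f"published mean: {mean_pub:.2f}, replication mean: {mean_rep:.2f}, true effect: {TRUE_EFFECT}")
```

Nothing about the effect itself changes between publication and replication; the "decline" is entirely an artifact of which studies the filter let through the first time.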
Lehrer concludes on a note that's only gotten more interesting since scandal erupted around his own fudged data:
The decline effect is troubling because it reminds us how difficult it is to prove anything. We like to pretend that our experiments define the truth for us. But that’s often not the case. [...] When the experiments are done, we still have to choose what to believe.
You might say the same thing about science writing. A field built on suggestion, hypothesis, and (often, it seems) data-fudging—witness recent events—is not so far removed from the explanations Lehrer and Schooler provide for scientific "decline."
This parallel (almost) surfaced in a follow-up to Lehrer's piece on Radiolab and On the Media. Toward the end of the latter, Jad Abumrad (one of the hosts of the former) confesses to his own (i.e. Radiolab's) possible role in such matters:
The media is biased, and I mean not in the way that people think it is, but it's certainly biased towards tension, it’s biased towards surprise. And so, there might be some kind of bias that leads us all towards a result that is counterintuitive and exciting.
What's going on here? One of the world's premier science journalists is recognizing the sort of pressures he's under to report results in a certain way—which is precisely the sort of thing for which Lehrer faults science publishing.
On the one hand, this is obvious. If Lehrer's right (and he might be) that, in the end, scientists "still have to choose what to believe," then no one will be surprised that journalists (and their readers) do, too.
On the other hand, this is an opportunity for reflection. Taking these similarities seriously might let us see The Strong Programme (which Michael Barany mentioned in a recent comment) in a new way.
[Image: David Bloor (http://easts.dukejournals.org/content/4/3/419/F2.large.jpg)]
In David Bloor's canonical formulation (1976), we should explain knowledge claims causally, impartially, symmetrically, and reflexively. Here, the last two—symmetry and reflexivity—are the most interesting, and we might combine them in the case of Lehrer. If we owe the "decline effect" to publishing patterns, we might explain our own work in the same way.
And this seems to hold. Journalists like Lehrer or Malcolm Gladwell who pitch counterintuitive claims about things like creativity are as much a product of the marketing for trade books (or TED talks, or the New Yorker) as the "decline effect" is a product of journal bias.
In turn, the same must be true of academic (or blog) attention to Lehrer. On this note, a provocative chapter by Winfried Fluck is instructive. Fluck argues that publishing pressures in the professionalized humanities have produced a different sort of decline: up with originality, down with synthetic vision.
While others have seen our capacity to grasp what's going on here as a chance to change the course of our work (or at least our methods), I'm less certain that there's a way out of the loop. Some relish Lehrer's point about choosing our beliefs, but the problem, as I see it, is that such choices produce both regulation and backlash of the sort I'll take up in next week's post.