Note: The version below is altered from the original, which was near-gibberish in a few spots. Why? Because I mistakenly posted a pre-edit version that contained the raw ‘transcription’ from voice-recognition software I’ve been trying out. (I suppose it could have been a lot worse.)
Here it is, more or less as I meant it to appear:
Kevin Dunbar is a researcher who studies how scientists study things — how they fail and succeed. In the early 1990s, he began an unprecedented research project: observing four biochemistry labs at Stanford University. Philosophers have long theorized about how science happens, but Dunbar wanted to get beyond theory. He wasn’t satisfied with abstract models of the scientific method — that seven-step process we teach schoolkids before the science fair — or the dogmatic faith scientists place in logic and objectivity. Dunbar knew that scientists often don’t think the way the textbooks say they are supposed to. He suspected that all those philosophers of science — from Aristotle to Karl Popper — had missed something important about what goes on in the lab. (As Richard Feynman famously quipped, “Philosophy of science is about as useful to scientists as ornithology is to birds.”) So Dunbar decided to launch an “in vivo” investigation, attempting to learn from the messiness of real experiments.
He ended up spending the next year staring at postdocs and test tubes: The researchers were his flock, and he was the ornithologist. Dunbar brought tape recorders into meeting rooms and loitered in the hallway; he read grant proposals and the rough drafts of papers; he peeked at notebooks, attended lab meetings, and videotaped interview after interview. He spent four years analyzing the data. “I’m not sure I appreciated what I was getting myself into,” Dunbar says. “I asked for complete access, and I got it. But there was just so much to keep track of.”
Dunbar came away from his in vivo studies with an unsettling insight: Science is a deeply frustrating pursuit. Although the researchers were mostly using established techniques, more than 50 percent of their data was unexpected. (In some labs, the figure exceeded 75 percent.) “The scientists had these elaborate theories about what was supposed to happen,” Dunbar says. “But the results kept contradicting their theories. It wasn’t uncommon for someone to spend a month on a project and then just discard all their data because the data didn’t make sense.” Perhaps they hoped to see a specific protein but it wasn’t there. Or maybe their DNA sample showed the presence of an aberrant gene. The details always changed, but the story remained the same: The scientists were looking for X, but they found Y.
This Wired story from Jonah Lehrer examines something that too often goes unexamined: The practice of science is often quite messy. That puts it on par with many other serious endeavors: You plan your work, then try to work your plan. But no matter how sound your plan, unexpected events will often force you off course — and sometimes to different destinations altogether.