The Journal of Negative Results
Lindsay Beyerstein, guest-blogging at Preposterous Universe, writes about the settlement between GlaxoSmithKline and Kate's boss in a suit that charged GSK with hiding reports of unfavorable drug trials. Derek Lowe may well have something to say on this at a later time.
Her comment that "All scientists have some incentive to publish favorable results over unfavorable ones" reminded me of a conversation I had with some colleagues at DAMOP. I want to state up front that I am most definitely not asserting that the misconduct of GSK (which seems to be rather serious, to my outsider's eye) is in any way equivalent to the trivialities I'll be talking about. The one just reminded me of the other.
A bunch of us were sitting around, talking about current progress on various experiments, and somebody noted that one of his students had just spent a couple of months on something that didn't pan out. He went on to say that the whole thing had taught them a lot about the system in question, so it wasn't really wasted time. Everybody else chimed in with similar stories.
One of the central ideas to emerge from the conversation was that failed experiments often turn out to be some of the most useful things you do as a scientist. You think you have a good idea, and then end up spending several weeks figuring out exactly why it didn't work. In the course of working out that explanation, you usually end up with a vastly improved understanding of the system you're trying to work with. On rare occasions, those experiments will point toward new physics, but more often, they just lead to better techniques.
In a way, it's a shame that people can't get credit for these noble failures. We jokingly suggested that there ought to be a journal where you could write this stuff up-- the Journal of Failed Experiments, which would contain write-ups of experiments that seemed like a good idea, along with detailed explanations of why they didn't work. This would provide an outlet for those researchers who keep running into weird technical roadblocks (the EDM search community would probably provide a good third of the articles), as well as an invaluable resource for people who want to start up new experiments.
There's a huge body of technical knowledge out there about how to build things, including innumerable ideas that won't work for one reason or another, but most of it is locked away in the heads of individual researchers. If you've trained in a particular sub-field, you pick a lot of this up, but if you're moving into a new area, you wind up spending a lot of time re-inventing the wheel. Some archive not only of what works (that would be Review of Scientific Instruments), but of what doesn't, would be useful for a lot of young researchers. Not to mention providing a lot of extra publications for those trying to make a tenure case...
The best example of this sort of thing in action is probably the indispensable book The Art of Electronics, by Horowitz and Hill. Not only does it provide a comprehensive overview of pretty much any electronic technique you might care to use, but each chapter also includes a "Bad Ideas" section, with diagrams of things that you might think would work, but don't. When I have to design a circuit for some purpose, the first thing I do is check the relevant chapter for a refresher on how the components in question work, and once I think I know what to do, I check my proposed circuit against the "Bad Ideas" schematics. It's one of the most useful parts of the book.
Let me say again that the results GSK buried are not the same sort of thing I'm talking about here. I'm talking about results that aren't particularly interesting, save for what they tell you about the technical details of your experiment. Negative results in pharmaceutical trials are a different beast-- those are interesting results in their own right, and not something that should be hidden away.