This morning Jason Mitchell self-published an interesting essay espousing his views on why replication attempts are essentially worthless. At first I was merely interested by the fact that what would obviously become a topic of heated debate was self-published, rather than going through the long slog of a traditional academic medium. Score one for self-publication, I suppose. Jason’s argument is essentially that null results don’t yield anything of value and that we should be improving the way science is conducted and reported rather than publicising our nulls. I found particularly interesting his short example list of things that he sees as critical to experimental results which nevertheless go unreported:
These experimental events, and countless more like them, go unreported in our method section for the simple fact that they are part of the shared, tacit know-how of competent researchers in my field; we also fail to report that the experimenters wore clothes and refrained from smoking throughout the session. Someone without full possession of such know-how – perhaps because he is globally incompetent, or new to science, or even just new to neuroimaging specifically – could well be expected to bungle one or more of these important, yet unstated, experimental details.
While I don’t agree with the overall logic or conclusion of Jason’s argument (I particularly like Chris Said’s Bayesian response), I do think it raises some important, or at least interesting, points for discussion. For example, I agree that there is loads of potentially important stuff that goes on in the lab, particularly with human subjects and large scanners, that isn’t reported. I’m not sure to what extent that stuff can or should be reported, and I think that’s one of the interesting and under-examined topics in the larger debate. I tend to lean towards the stance that we should report just about anything we can – but of course publication pressures and tacit norms mean most of it won’t be published. And probably at least some of it doesn’t need to be? But which things exactly? And how do we go about reporting stuff like how we respond to random participant questions regarding our hypothesis?
To find out, I’d love to see a list of things you can’t or don’t regularly report using the #methodswedontreport hashtag. Quite a few are starting to show up – most are funny or outright snarky (as seems to be the general mood of the response to Jason’s post), but I think a few are pretty common lab occurrences and are even thought-provoking in terms of their potentially serious experimental side-effects. Surely we don’t want to report all of these ‘tacit’ skills in our burgeoning method sections; the question is which ones need to be reported, and why they are important in the first place.