Birth of a New School: How Self-Publication can Improve Research

Edit: click here for a PDF version and citable figshare link!

Preface: What follows is my attempt to imagine a radically different future for research publishing. Apologies for any overlooked references – the following is meant to be speculative and purposely walks the line between paper and blog post. Here is to a productive discussion regarding the future of research.

Our current systems of producing, disseminating, and evaluating research could be substantially improved. For-profit publishers enjoy extremely high profit margins on largely taxpayer-funded work. Traditional closed-door peer review is creaking under the weight of an exponentially growing knowledge base, delaying important communications and often resulting in seemingly arbitrary publication decisions [1–4]. Today’s young researchers are frequently dismayed to find that their painstaking work producing quality reviews is overlooked or discouraged by journalistic editorial practices. In response, the research community has risen to the challenge of reform, giving birth to an ever-expanding multitude of publishing tools: statistical methods to detect p-hacking [5], numerous open-source publication models [6–8], and innovative platforms for data and knowledge sharing [9,10].

While I applaud the arrival and intent of these tools, I suspect that ultimately publication reform must begin with publication culture – with the very way we think of what a publication is and can be. After all, how can we effectively create infrastructure for practices that do not yet exist? Last summer, shortly after igniting #pdftribute, I began to think more and more about the problems confronting the publication of results. After months of conversations with colleagues I am now convinced that real reform will come not in the shape of new tools or infrastructures, but rather in the culture surrounding academic publishing itself. In many ways our current publishing infrastructure is the product of a paper-based society keen to produce lasting artifacts of scholarly research. In parallel, the rapid rise of the networked society has led to an open-source software community in which knowledge is not a static artifact but rather an ever-expanding living document of intelligent productivity. We must move towards “research 2.0” and beyond [11].

From Wikipedia to GitHub, open-source communities are changing the way knowledge is produced and disseminated. This movement has already begun to reach academia, with researchers across disciplines flocking to social media, blogs, and novel communication infrastructures to create a new movement of post-publication peer review [4,12,13]. In math and physics, researchers have already embraced self-publication, uploading preprints to the online repository arXiv, and more and more disciplines are using the site to archive their research. I believe that the inevitable future of research communication lies in this open-source metaphor, in the form of pervasive self-publication of scholarly knowledge. The question is thus not where we are going, but how we prepare for this radical change in publication culture. In asking these questions I would like to imagine what research will look like 10, 15, or even 20 years from today. This post is intended as a first step towards bringing to light specific ideas for how this transition might be facilitated. Rather than being prescriptive, I am merely attempting to imagine what that future may look like. I invite you to treat what follows as an ‘open beta’ for these ideas.

Part 1: Why self-publication?

I believe the essential metaphor comes from the open-source software community. To this end, over the past few months I have feverishly discussed the merits and risks of self-publishing scholarly knowledge with my colleagues and peers. While at first I worried many would find the notion of self-publication utterly absurd, I have been astonished at the responses – many have been excitedly optimistic! I was surprised to find that some of my most critical and stoic colleagues have lost so much faith in traditional publication and peer review that they are ready to consider more radical options.

The basic motivation for research self-publication is pretty simple: research papers cannot be properly evaluated without first being read. Now, by evaluation, I don’t mean for the purposes of hiring or grant-giving committees. These are essentially financial decisions, e.g. “how do I effectively spend my money without reading the papers of the 200+ applicants for this position?” Such decisions will always rely on heuristics and metrics that necessarily sacrifice accuracy for efficiency. However, I believe that self-publication culture will provide finer-grained metrics than ever dreamed of under our current system. By documenting each step of the research process, self-publication and open science can yield rich information that can be mined for increasingly useful impact measures – but more on that later.

When it comes to evaluating research, many admit that there is no substitute for opening up an article and reading its content – regardless of the journal. My prediction is that, as post-publication peer review gains acceptance, some tenured researcher or brave young scholar will eventually decide to simply self-publish her research directly onto the internet, and when that research goes viral, a deluge of self-publications will follow. Of course, busy lives require heuristic decisions, and it is arguable that publishers provide this editorial service. While I will address this issue specifically in Part 3, for now I want to point out that growing empirical evidence suggests that our current publisher/impact-based system provides an unreliable heuristic at best [14–16]. Thus, my essential reason for supporting self-publication is that in the worst-case scenario, self-publications must be accompanied by the disclaimer: “read the contents and decide for yourself.” As self-publishing practices are established, it is easy to imagine that these difficulties will be largely mitigated by self-published peer reviews and novel infrastructures supporting these interactions.

Indeed, with a little imagination we can picture plenty of potential benefits of self-publication to offset the risk that we might read poor papers. Researchers spend exorbitant amounts of their time reviewing, commenting on, and discussing articles – most of that rich content and meta-data is lost under the current system. By documenting the research practice more thoroughly, the ensuing flood of self-published data can support new quantitative metrics of reviewer trust, and be further utilized to develop rich information about new ideas and data in near real-time. To give just one example, we might calculate how many subsequent citations or retractions a particular reviewer generates, yielding a reviewer impact factor and reliability index. The more aspects of research we publish, the greater the data-mining potential. Incentivizing in-depth reviews that add clarity and conceptual content to research, rather than merely knocking down or propping up equally imperfect artifacts, will ultimately improve research quality. By self-publishing well-documented, open-sourced pilot data and accompanying digital reagents (e.g. scripts, stimulus materials, protocols, etc.), researchers can get instant feedback from peers, preventing uncounted research dollars from being wasted. Previously closed-door conferences can become live records of new ideas and conceptual developments as they unfold. The metaphor here is research as open-source – an ever-evolving, living record of knowledge as it is created.
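
To make the reviewer-metrics idea concrete, here is a minimal sketch of how such a reliability index might be computed. Everything in it – the data structure, the field names, and the weighting scheme – is a hypothetical illustration of the kind of metric described above, not a description of any existing platform or API.

```python
# A toy "reviewer reliability index", sketched under the assumption that
# self-published reviews are indexed together with their citation counts and
# the later fate of the papers they reviewed. All fields and weights are
# hypothetical illustrations only.
from dataclasses import dataclass

@dataclass
class Review:
    target_paper: str         # identifier of the reviewed publication
    citations: int            # citations the review itself has accrued
    target_retracted: bool    # was the reviewed paper later retracted?
    flagged_fatal_flaw: bool  # did the review flag a fatal flaw beforehand?

def reliability_index(reviews: list) -> float:
    """Credit reviews that attract citations and that caught problems in
    papers later retracted; penalize reviews that missed them."""
    score = 0.0
    for r in reviews:
        score += r.citations  # community uptake of the review itself
        if r.target_retracted:
            score += 10.0 if r.flagged_fatal_flaw else -5.0
    return score / len(reviews)

history = [
    Review("doi:10.0000/aaa", citations=4, target_retracted=True, flagged_fatal_flaw=True),
    Review("doi:10.0000/bbb", citations=1, target_retracted=False, flagged_fatal_flaw=False),
]
print(f"reviewer reliability index: {reliability_index(history):.1f}")
```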

Now, let’s contrast this model with the current publishing system. Every publisher (including open-access publishers) obliges researchers to adhere to its own arbitrarily varied formatting constraints, presentation rules, submission and acceptance fees, and review cultures. Researchers perform reviews for free on often publicly subsidized work, so that publishers can then turn around and sell the finished product back to those same researchers (and the public) at an exorbitant mark-up. These constraints introduce lengthy delays – ranging from six or more months in the sciences all the way up to two years in some humanities disciplines. By contrast, how you self-publish your research is entirely up to you – where, when, how, the formatting, and the openness. Put simply, if you could publish your research how and when you wanted, and have it generate the same “impact” as traditional venues, why would you use a publisher at all?

One obvious reason to use publishers is copy-editing, i.e. the creation of pretty manuscripts. Another is the guarantee of high-profile distribution. Under the current system these are legitimate worries. While it is already possible to produce reasonably formatted papers on one’s own, an open-source, easy-to-use copy-editing tool is ideally needed to facilitate mainstream self-publication. Innovators like figshare are already leading the way in this area. In the next section, I will theorize some ways in which self-publication can overcome these and other potential limitations, in terms of specific applications and guidelines for maximizing the utility of self-published research. To do so, I will outline a few specific cases with the most potential for self-publication to make a positive impact on research right away, and hopefully illuminate the ‘why’ question a bit further with some concrete examples.

Part 2: Where to begin self-publishing

What follows is the “how-to” part of this document. I must preface it by saying that although I have written so far with researchers across the sciences and humanities in mind, I will now focus primarily on the scientific examples with which I am more experienced. The transition to self-publication is already happening in the form of academic tweets, self-archives, and blogs, at a seemingly exponential growth rate. To be clear, I do not believe that the new publication culture will be utopian. As in many human endeavors, the usual brandism [3], politics, and corruption can be expected to appear in this new culture. Accordingly, the transition is likely to be a bit wild and woolly around the edges. Like any generational culture shift, new practices must first emerge before infrastructures can be put in place to support them. My hope is to contribute to that cultural shift from artifact-based to process-based research by outlining particularly promising early venues for self-publication. Once these practices become more common, there will be huge opportunities for those ready and willing to step in and provide rich informational architectures to support and enhance self-publication – but for now we can only step into that wild frontier.

In my discussions with others I have identified three particularly promising areas where self-publication is either already contributing or can begin contributing to research: the publication of exploratory pilot data, post-publication peer reviews, and trial pre-registration. I will cover each in turn, attempting to provide examples and templates where possible. Finally, Part 3 will examine some common concerns with self-publication. In general, I think that successful reforms should resemble existing research practices as much as possible: publication solutions are most effective when they resemble daily practices that are already in place, rather than forcing individuals into novel practices or infrastructures with an unclear time-commitment. A frequent criticism of current solutions such as the comment sections on Frontiers and PLOS ONE, or the newly developed PubPeer, is that they are rarely used by the general academic population. It is reasonable to conclude that this is because already over-worked academics see little plausible benefit in contributing to these discussions given the current publishing culture (worse still, they may fear other negative repercussions, discussed in Part 3). Thus a central theme of the following examples is that they attempt to mirror practices in which many academics are already engaged, with complementary incentive structures (e.g. citations).

Example 1: Exploratory Pilot Data 

This past summer witnessed a fascinating clash of research cultures, with the eruption of intense debate between pre-registration advocates and pre-registration skeptics. I derived some useful insights from both sides of that discussion. Many were concerned about what would happen to exploratory data under these new publication regimes. Indeed, a general worry with existing reform movements is that they appear to emphasize a highly conservative and somewhat cynical “perfect papers” culture. I do not believe in perfect papers – the scientific model is driven by replication and discovery. No paper can ever be 100% flawless – otherwise there would be no reason for further research! Inevitably, some will find ways to cheat the system. Accordingly, reform must incentivize better reporting practices over stricter control, or at least strike a balance between the two extremes.

Exploratory pilot data are an excellent avenue for this. By their very nature such data are not confirmatory – they are exciting precisely because they do not conform well to prior predictions. Such data benefit from rapid communication and feedback. Imagine an intuition-based project – for example, a side or pet project conducted on the fly. The researcher might feel that the project has potential, but also knows that there could be serious flaws. Most journals won’t publish these kinds of data. Under the current system these data are lost, hidden, obscured, or otherwise forgotten.

Compare this to a self-publication world: the researcher can upload the data, document all the protocols, make the presentation and analysis scripts open-source, and provide some well-written documentation explaining why she thinks the data are of interest. Some intrepid graduate student might find it and follow up with a valuable control analysis, pointing out an excellent feature or fatal flaw, which he can then upload as a direct citation to the original data. Both publications are citable, giving credit to originator and reviewer alike. Armed with this new knowledge, the original researcher could now pre-register an altered protocol and conduct a full study on the subject (or alternatively, abandon the project entirely). In this exchange, hundreds of hours and research dollars are likely to have been saved. Additionally, the entire process will have been documented, making it both citable and minable for impact metrics. Tools already exist for each of these steps – but cultural fears largely prevent this from happening. How would it be perceived? Would anyone read it? Will someone steal my idea? To better frame these issues, I will now examine a self-publication practice that has already emerged in force.

Example 2: Post-publication peer review

This is a particularly easy case, precisely because high-profile scholars are already regularly engaged in the practice. As I’ve frequently joked on Twitter, we’re rapidly entering an era where publishing in a glam-mag carries no impact guarantee if the paper itself isn’t worthwhile – you may as well paint a target on your back for post-publication peer reviewers. However, I want to emphasize the positive benefits and not just the conservative controls. Post-publication peer review (PPPR) has already begun to change the way we view research, with reviewers adding lasting content to papers, enriching the conclusions one can draw, and pointing out novel connections that the authors themselves did not explore. Here I like to draw an analogy to the open-source movement, where code (and its documentation) is forkable, versioned, and open to constant revision – never static but always evolving.

Indeed, just last week PubMed launched its new “PubMed Commons” system, an innovative PPPR comment system whereby any registered person (with at least one paper on PubMed) can leave scientific comments on articles. Inevitably, the reception on Twitter and Facebook mirrored previous attempts to introduce infrastructure-based solutions – mixed excitement followed by a lot of bemused cynicism. “Bring out the trolls,” many joked. To wit, a brief scan of the average comment on another platform, PubPeer, revealed a generally (but not entirely) poor level of comment quality. While many comments seemed to be on topic, most had little to no formatting and were given with little context. At times comments can seem trollish, pointing out minor flaws as if they render the paper worthless. In many disciplines like my own, few comments could be found at all. This compounds the central problem with PPPR: why would anyone acknowledge such a system if the primary result is poorly formed nitpicking of your research? The essential problem here is again incentive – for reviews to be of high quality, there must be an incentive to produce them. We need a culture of PPPR that values positive and negative comments equally. This is common to both traditional and self-publication practices.

To facilitate easy, incentivized self-publication of comments and PPPRs, my colleague Hauke Hillebrandt and I have attempted to create a simple template that researchers can use to quickly and easily publish these materials. The idea is that by using these templates and uploading them to figshare or similar services, Google Scholar will automatically index them as citations, provide citation alerts to the original authors, and even include the comments in its h-index calculation. This way researchers can begin to get credit for what they are already doing, in an easy-to-use and familiar format. While the template isn’t quite working yet (oddly enough, Scholar is counting citations from my blog, but not the template), you can take a look at it here and maybe help us figure out why it isn’t working! In the near future we plan to get this working, and will follow up this post with the full template, ready for you to use.

Example 3: Pre-registration of experimental trials

As my final example, I suggest that for many researchers, self-publication of trial pre-registrations (PRs) may be an excellent way to test the waters of pre-registration in a format with a low barrier to entry. Replication attempts are a particularly promising venue for pre-registration, and self-publication of such registrations is a way to quickly move from idea to registration to collection (as in the pilot-data example above), while ensuring that credit for the original idea is embedded in the infamously hard-to-erase memory of the internet.

One benefit of self-publishing PRs, rather than relying on for-profit publishers, is that the PR templates themselves can easily be open-sourced, allowing each research field to rapidly generate community-based templates tailored to the needs of that discipline. Self-published PRs, as well as high-quality templates, can be cited – incentivizing the creation and dissemination of both.
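
As a concrete illustration, here is a minimal sketch of what a community-maintained, machine-checkable template might look like, expressed in Python. Every field name below is hypothetical; the point is only that such a template can be open-sourced, versioned, and specialized per discipline by editing a short list.

```python
# A minimal sketch of an open-source pre-registration template, expressed as
# a required-field list plus a validator. All field names are hypothetical;
# a discipline could specialize the template simply by publishing (and
# citing) its own edited version of REQUIRED_FIELDS.
REQUIRED_FIELDS = [
    "title", "authors", "hypotheses", "sample_size_justification",
    "exclusion_criteria", "planned_analyses", "materials_url",
]

def missing_fields(prereg: dict) -> list:
    """Return the required fields absent or empty in a pre-registration."""
    return [field for field in REQUIRED_FIELDS if not prereg.get(field)]

draft = {
    "title": "Replication of effect X under condition Y",
    "authors": ["A. Researcher"],
    "hypotheses": "Effect X will replicate with d > 0.3.",
}
gaps = missing_fields(draft)
print("ready to self-publish" if not gaps else f"still missing: {gaps}")
```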

Part 3: Criticism and limitations

Here I will close by considering some common concerns with self-publication:

Quality of data

A natural worry at this point is quality control. How can we be sure that what is published without the seal of peer review isn’t complete hooey? The primary response is that we cannot – just as we cannot be sure that peer-reviewed materials are of high quality without first reading them ourselves. Still, it is for this reason that I tried to suggest a few particularly ripe venues for self-publication of research. The cultural zeitgeist supporting full-blown scholarly self-publication has not yet arrived, but we can already begin to prepare for it. With regard to filtering noise, I argue that by coupling post-publication peer review and social media, quality self-publications will rise to the top. Importantly, this issue points towards flaws in our current publication culture. In many research areas there are effects that are repeatedly published but that few believe, largely due to biases against null findings. Self-publication aims to make as much of the research process publicly available as possible, preventing this kind of knowledge from slipping through the editorial cracks and improving our ability to evaluate the veracity of published effects. If such data are reported cleanly and completely, existing quantitative tools can further incorporate them to better estimate the likelihood of p-hacking within a literature. That leads to the next concern – quality of presentation.
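
To make this concrete, here is a minimal sketch in the spirit of the p-curve [5] mentioned above: if reported effects are real, significant p-values bunch up near zero (right skew); if a literature is null or p-hacked, they sit closer to uniform. The simplification to a single binomial test, and the example p-values, are mine, purely for illustration.

```python
# A simplified p-curve-style check (after Simonsohn et al. [5]): of the
# significant p-values in a literature, how many fall below alpha/2? Under
# a uniform (null) distribution each significant p-value lands below
# alpha/2 with probability 0.5; a strong right skew suggests evidential
# value. The example p-values below are made up.
from scipy.stats import binomtest

def p_curve_binomial(p_values, alpha=0.05):
    significant = [p for p in p_values if p < alpha]
    low = sum(1 for p in significant if p < alpha / 2)
    # One-sided test: are low p-values over-represented among the
    # significant results?
    return binomtest(low, n=len(significant), p=0.5, alternative="greater")

reported = [0.003, 0.008, 0.011, 0.021, 0.038, 0.049]  # hypothetical literature
result = p_curve_binomial(reported)
print(f"{result.k} of {result.n} significant p-values fall below .025; "
      f"one-sided p = {result.pvalue:.3f}")
```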

[Image: Hemingway’s thoughts on data.]

Quality of presentation

Many ask: how in this brave new world will we separate signal from noise? I am sure that every published researcher already receives at least a few garbage citations a year from obscure places in obscure journals with little relevance to the actual article contents. But, so the worry goes, what if we are deluged with a vast array of poorly written, poorly documented, self-published crud? How would we separate the signal from the noise?

The answer is Content, Presentation, and Clarity. These must be treated as central guidelines for self-publication to be worth anyone’s time. The internet memesphere has already generated one rule for ranking interest: content rules. Content floats and is upvoted; blogspam sinks and is downvoted. This is already true for published articles – Twitter, Reddit, Facebook, and email circles help us separate the wheat from the chaff at least as much as impact factor, if not more. But presentation and clarity are equally important. Poorly conducted research is not shared, or is shared only with vehement criticism. Similarly, poorly written self-publications, or poorly documented data and reagents, are unlikely to generate positive feedback, much less impact-generating eyeballs. I like to imagine a distant future in which self-publication has given rise to a new generation of well-regarded specialists: reviewers who are prized for their content, presentation, and clarity; coders who produce cleanly documented pipelines; behaviorists producing powerful and easily customized paradigm scripts; and data-collection experts who produce the smoothest, cleanest data around. All of these future specialists will be able to garner impact for the things they already do, incentivizing each step of the research process rather than only the end product.

Being scooped, intellectual credit

Another common concern is “what if my idea/data/pilot is scooped?” I acknowledge that, particularly in these early days, the decision to self-publish must be weighed against this possibility. However, I must also point out that under the current system authors must likewise weigh the decision to develop an idea in isolation against the benefits of communicating with peers and colleagues. Both have risks and benefits – a researcher working in isolation can easily overestimate the quality or impact of an idea or project. The decision to self-publish must similarly be weighed against the need for feedback. Furthermore, a self-publication culture would allow researchers to move more quickly from project to publication, ensuring that they are readily credited for their work. And again, as research culture continues to evolve, I believe this concern will increasingly fade. It is notoriously difficult to erase information from the internet (see the “Streisand effect”) – there is no reason why self-published ideas and data cannot generate direct credit for their authors. Indeed, I envision a world in which these contributions can themselves be independently weighted and credited.

Prevention of cheating, corruption, self-citations

To some, this will be an inevitable point of departure. Without our time-tested guardian of peer review, what is to prevent a flood of outright fabricated data? My response is: what prevents outright fabrication under the current system? To misquote Jeff Goldblum in Jurassic Park, cheaters will always find a way. No matter how much we tighten our grip, there will be those who respond to the pressures of publication with deliberate misconduct. I believe that the current publication system directly incentivizes such behavior by valuing end product over process. By creating incentives for low-barrier post-publication peer review, pre-registration, and rich pilot-data publication, researchers are given the opportunity to generate impact at each step of the research process. When faced with the choice between risking the vast penalties of cheating to cover up a null finding and doing one’s best to turn those data into something useful for someone, I suspect most people will choose the honest and less risky option.

Corruption and self-citations are perhaps a subtler, more sinister factor. In my discussions with colleagues, a frequent concern is that there is nothing to prevent high-impact “rich club” institutions from banding together to provide glossy post-publication reviews, citation farming, or promoting one another’s research to the top of the pile regardless of content. I again answer: how is this any different from our current system? Papers are submitted to an editor who makes a subjective evaluation of the paper’s quality and impact, before sending it to four out of a thousand possible reviewers who will make an opaque decision about the content of the paper. Sometimes this system works well, but increasingly it does not [2]. Many have witnessed great papers rejected for political reasons, or poor ones accepted for the same. Lowering the barrier to post-publication peer review means that even when these factors drive a paper to the top, it will be far easier to contextualize that research with a heavy dose of reality. Over time, I believe self-publication will incentivize good research. Cheating will always be a factor – and this new frontier is unlikely to be a utopia. Rather, I hope to contribute to the development of a bridge between our traditional publishing models and a radically advanced, not-too-distant future.

Conclusion

Our current systems of producing, disseminating, and evaluating research increasingly seem to be out of step with cultural and technological realities. To take back the research process and bolster the ailing standard of peer review, I believe research will ultimately adopt an open and largely publisher-free model. In my view, these new practices will be entirely complementary to existing solutions such as the p-curve [5], open-source publication models [6–8], and innovative platforms for data and knowledge sharing such as PubPeer, PubMed Commons, and figshare [9,10]. The next step from here will be to produce usable templates for self-publication. You can expect to see a PDF version of this post in the coming weeks as a further example of self-publishing practices. In attempting to build a bridge to the coming technological and social revolution, I hope to inspire others to join the conversation so that we can improve all aspects of research.

Acknowledgments

Thanks to Hauke Hillebrandt, Kate Mills, and Francesca Fardo for invaluable discussion, comments, and edits of this work. Many of the ideas developed here were originally inspired by this post envisioning a self-publication future. Thanks also to PubPeer, PeerJ, figshare, and others in this area for their pioneering work in providing some valuable tools and spaces to begin engaging with self-publication practices.

Addendum

Excellent resources already exist for many of the ideas presented here. I want to give special notice to researchers who have already begun self-publishing their work, whether as preprints, archives, or direct blog posts. Parallel publishing is an attractive transitional option, whereby researchers can pre-publish their work for immediate feedback before submitting it to a traditional publisher. Special mention goes to Zen Faulkes, whose excellent pioneering blog posts demonstrated that it is reasonably easy to self-produce well-formatted publications. Here are a few pioneering self-published papers you can use as examples – feel free to add your own in the comments:

The distal leg motor neurons of slipper lobsters, Ibacus spp. (Decapoda, Scyllaridae), Zen Faulkes

http://neurodojo.blogspot.dk/2012/09/Ibacus.html

Eklund, Anders (2013): Multivariate fMRI Analysis using Canonical Correlation Analysis instead of Classifiers, Comment on Todd et al. figshare.

http://dx.doi.org/10.6084/m9.figshare.787696

Automated removal of independent components to reduce trial-by-trial variation in event-related potentials, Dorothy Bishop

http://bishoptechbits.blogspot.dk/2011_05_01_archive.html

Deep Impact: Unintended consequences of journal rank

Björn Brembs, Marcus Munafò

http://arxiv.org/abs/1301.3748

A novel platform for open peer-to-peer review and publication:

http://thewinnower.com/

A platform for open PPPRs:

https://pubpeer.com/

Another PPPR platform:

http://f1000.com/

References

1. Henderson, M. Problems with peer review. BMJ 340, c1409 (2010).

2. Ioannidis, J. P. A. Why Most Published Research Findings Are False. PLoS Med 2, e124 (2005).

3. Peters, D. P. & Ceci, S. J. Peer-review practices of psychological journals: The fate of published articles, submitted again. Behav. Brain Sci. 5, 187 (1982).

4. Hunter, J. Post-publication peer review: opening up scientific conversation. Front. Comput. Neurosci. 6, 63 (2012).

5. Simonsohn, U., Nelson, L. D. & Simmons, J. P. P-Curve: A Key to the File Drawer. (2013). at <http://papers.ssrn.com/abstract=2256237>

6. MacCallum, C. J. ONE for All: The Next Step for PLoS. PLoS Biol. 4, e401 (2006).

7. Smith, K. A. The frontiers publishing paradigm. Front. Immunol. 3, 1 (2012).

8. Wets, K., Weedon, D. & Velterop, J. Post-publication filtering and evaluation: Faculty of 1000. Learn. Publ. 16, 249–258 (2003).

9. Allen, M. PubPeer – A universal comment and review layer for scholarly papers? | Neuroconscience on WordPress.com. Website/Blog (2013). at <https://neuroconscience.com/2013/01/25/pubpeer-a-universal-comment-and-review-layer-for-scholarly-papers/>

10. Hahnel, M. Exclusive: figshare a new open data project that wants to change the future of scholarly publishing. Impact Soc. Sci. blog (2012). at <http://eprints.lse.ac.uk/51893/1/blogs.lse.ac.uk-Exclusive_figshare_a_new_open_data_project_that_wants_to_change_the_future_of_scholarly_publishing.pdf>

11. Yarkoni, T., Poldrack, R. A., Van Essen, D. C. & Wager, T. D. Cognitive neuroscience 2.0: building a cumulative science of human brain function. Trends Cogn. Sci. 14, 489–496 (2010).

12. Bishop, D. BishopBlog: A gentle introduction to Twitter for the apprehensive academic. Blog/website (2013). at <http://deevybee.blogspot.dk/2011/06/gentle-introduction-to-twitter-for.html>

13. Hadibeenareviewer. Had I Been A Reviewer on WordPress.com. Blog/website (2013). at <http://hadibeenareviewer.wordpress.com/>

14. Tressoldi, P. E., Giofré, D., Sella, F. & Cumming, G. High Impact = High Statistical Standards? Not Necessarily So. PLoS One 8, e56180 (2013).

15. Brembs, B. & Munafò, M. Deep Impact: Unintended consequences of journal rank. (2013). at <http://arxiv.org/abs/1301.3748>

16. Eisen, J. A., MacCallum, C. J. & Neylon, C. Expert Failure: Re-evaluating Research Assessment. PLoS Biol. 11, e1001677 (2013).


29 thoughts on “Birth of a New School: How Self-Publication can Improve Research”

  1. Thanks, I’ve been waiting for this post eagerly.
    The way I see it, your assessment of what it would be like after the transition isn’t disagreeable, but I fear you are underestimating the difficulties, both in terms of transition and of ongoing problems (or degeneration potential).
    But I should start with a clear COI declaration. I am not an active scholar, I work in academia but not in a research position and don’t have plans to change. However, I have an active interest in neuroscience and neuro-philosophy (rooted in my past), and have thought long and hard about whether I should self-publish or not.
    At first sight, I should be the ideal candidate: I don’t need any official recognition, my salary comes from elsewhere and I am only interested in getting my ideas out there. So why not self-publish? The quick answer: because I’ll drown in noise.
    Back to the main subject, I’ll spell out some of the problems I think you are underestimating. They aren’t meant to dispute the main point (that self-publishing could be good, and better than what we have now), and I will end up with a suggestion (nothing more than my hunch on how to overcome the problems).
    1) Dispersion. Dispersion is big already, especially now that Open Access (OA) is gaining government support; new OA journals are popping up everywhere, and we know that researchers should be careful with their choices. However, at least in the life sciences, there is a massive unifying force: PubMed. If a journal is not indexed in PubMed, it doesn’t exist; if it is there, the articles it publishes can be discovered, and good quality material published in marginal journals still has a chance to emerge, regardless of the author affiliation and of the journal average quality. With Self-Publication (SP) dispersion will explode (PubMed can’t be expected to keep up), and without a publicly funded infrastructure it will generate a highly profitable market of Research Visibility Enhancers (similar to Search Engine Optimisation) that are guaranteed to use opaque strategies, at best. This isn’t good.
    2) Massive entry barriers. As hinted at for my own case, a predominantly self-publishing landscape will have massive entry barriers for newcomers. Plus, it will grant powerful advantages to newcomers that can sport powerful affiliations. Yes, this is already true: if you work at MIT, you have much better chances of publishing in high-glam journals, but I fear that dispersion, plus high noise (low-quality self-published stuff will clog the pipelines), will make it impossible for newcomers to emerge. I have some first-hand experience of this. My main work is still not published, but I’ve started blogging, and have been busy exploring the method and epistemology of what I want to do. As far as I can tell, it’s serious philosophical stuff, but how can I tell? The answer is that I can’t. If I want to find out what the research community may do with my “work”, the only thing I can do is submit it for peer review. Otherwise, everyone is busy with their own business, and since my stuff isn’t a walk in the park, nobody will feel the need to engage (and the fact that it’s there, available to everyone, doesn’t make any difference). That’s fine in my case, I don’t need official recognition (goodness knows if I would like an editor, though), but it gives you an indication of how high the barriers would be. The model you are proposing will certainly require newcomers to invest either a lot of time or (worse) a lot of money just to make themselves known, and this isn’t good.
    3) Quality deterioration, paired with the unfair advantage of wealth. Editors and reviewers are useful! Their comments are supposed to raise the standard, and crucially, to raise it above the best effort of the original authors. There is no escape from this, so you can predict that those who can afford it will pay external reviewers/editors to ensure that their standard remains high, while those with little resources won’t be able to follow. So, as frequently happens, an open unregulated system will help those that are already big in the landscape and forget/destroy the small fish.
    4) The emergence of shady (for-profit) players. All of the points above will create new, more or less visible, profitable markets. In the absence of regulations, and of a level playing field (i.e. a market that is regulated to minimize unfair advantages), obfuscation and subterfuge will be favoured. There will be companies that promise to make your work highly visible, and just pocket the money, and others that would use “google-bombing-like” tricks to provide unfair advantages. True, some of the practices of big publishers are already questionable, but we all know how they are supposed to work, so there is a limit to the misdeeds they can afford.

    I’m sure there are other problems, but this is enough to make my point. Considering the problems that a “free-for-all”, completely open environment may generate (they can be summarized as: big players will gain disproportionate advantages from their pre-established market position), one needs safeguards and maybe even a system for active positive selection of small players. Without this, moving towards a mostly SP environment is guaranteed to harm meritocracy. Fine, but how do we get there in the proper way?
    Well, the traditional way to implement these changes is as follows: some forerunners create both the culture and the practice, at the same time… Then they pitch it to a publicly funded, neutral and massive organisation (NCBI and the like). When the massive org accepts the system as the new standard (or prototype), it will have the power to enforce the rules and to protect the playing field. We know that this is happening already: PubMed Commons (PMC) is a step in the right direction, therefore one way forward is to start using the system and produce high-quality post-pub reviews. This will show that the concept works. It is selfless behaviour, to some extent, as the well-known neuro-bloggers could start using PMC and divert traffic there that would otherwise go to their own blogs.
    The other direction that needs to be explored is this: one could try to organise an open peer review & SP circle, and use it for a few tests. This is in line with what Micah is suggesting, but the aim is different. What you’ll want to achieve is a successful proof of concept, maybe even a working OS software platform that can support the process. When this is done, the circle will need to do two things, disseminate the news, and pitch for wider usage, with the aim of convincing NCBI or similar to adopt the practice. In other words, I don’t believe that an unregulated “organic adoption” will generate a healthy market.
    But of course, how can I know? Never mind, I guess I’ll retire in my corner now. Thanks to Micah for initiating this conversation, I really do hope (despite all my skepticism) it will generate a fruitful discussion.

    • Sergio, it seems to me that most of these corruption and barrier to entry issues you raise, while important, are really endemic to the current publishing culture and infrastructure rather than self-publication specifically.

      You do point out some of the key issues with dispersion and accessibility. Certainly right now I would not recommend that the average entry-level scientist or student self-publish their primary projects, as these are likely to go unnoticed. That being said, I am not quite as sure as you that they will necessarily go unnoticed. I think if those works followed the Content, Clarity, and Presentation guidelines I set out above, they would likely be read by someone, if only for the novelty of the self-publishing. I know that if I came across any well-done self-published work that was relevant to my interests, I would tweet it out without hesitation, and other leaders in the community would pick it up straight away.

      I’m not denying that without a cogent archiving/searching/tagging system in place there is a danger of being lost in the noise. That is precisely why we designed our PPPR template to generate classical Google Scholar citation alerts – most academics check those pretty obsessively. I really want to repeat myself here: I think the number one case for self-publication right now is PPPR. I think anyone is likely to read a PPPR that follows CCP and directly discusses their work.

      Best,
      Micah

  2. Hi Sergio, you say:

    “If a journal is not indexed in PubMed, it doesn’t exist”

    I think it’s more accurate to say, in my experience at least, “If a journal is not indexed by google scholar, it doesn’t exist.” I can’t remember the last time I actually used a database rather than just using google scholar.

    • 🙂 which is my point. My day job is about Systematic Reviews, and the deliberate obscurity of Google Scholar isn’t helping at all. They are in it for profit, and it shows. You don’t want research to be hostage to shady (unaccountable) profit-making machines – isn’t that part of the whole idea?

      • I don’t follow what you mean by “deliberate obscurity of google scholar”. GS is hands down the best search tool for finding papers and also provides nice citation metrics. It’s a free application offered by Google without ads. Of course I agree that it would be nice to have an open-source alternative to it, but the solution I propose here is not dependent on GS. It’s just showing how GS can be utilised to start supporting self-publication straight away (since the majority of academics are already using it). You are setting up a catch-22 – we can’t start self-publishing because we don’t have the tools, but we can’t get those tools because no one is self-publishing.

        • Sorry for the late reply, busy week!
          On GS: do we know how it ranks results? Not really, we know the general ideas, but not the details, and more importantly, we don’t know if some results receive special treatment and we can be sure we’ll never know.
          Do we know what is indexed? What criteria are followed to decide what is retained and what’s ignored? How about counting citations?
          You wrote:

          oddly enough, Scholar is counting citations from my blog, but not the template

          Odd, isn’t it? Would it be better if you could know why?
          I could continue with the questions, but the whole point is: if you want to have a trustworthy system, you need to know how the system works. GS is great and fantastically handy, but it has to hide its inner workings and that means it pulls science in the opposite direction of where I’d like it to go: I’d like things to be clear, the rules of the game to be known and open to discussion. If you rely on private platforms, it is guaranteed that some things will remain secret. And I’m not even mentioning the enormous power that Google already has to nudge opinion and research by simply tweaking its ranking a little.
          More generally: yes, I am highlighting the Catch-22 (as others did), and no, I don’t have additional answers, sorry! The ideas I have are written in the two main comments I’ve made. All in all, it is a tough nut to crack, and it will require plenty of people to be courageous enough to start self-publishing their main output, in one way or the other.

  3. Micah

    Please take a look at The Winnower if you have not done so already. They seem to have a workable open-publication, open post-pub review model with incentives.

    • Just discovered it yesterday – will update the addendum to link to them! I still think that for real reform to happen we need to focus on practices that don’t require much, or any, new infrastructure.

      • If the scholarly work is to remain accessible then there will eventually have to be a minimum quasi-permanent infrastructure to store the work and perform the post-pub review functions. I think that eventually – after we get a feel for how to do this well – government will have to step in as a source of stable infrastructure.

        • Yes eventually – my key point is that the sooner we start altering the culture surrounding publication, the sooner we can start putting effective tools in place. Funding, usability, etc are all real problems when the practice is still a niche without widespread acceptance or even awareness. I absolutely believe that once self publication begins to go mainstream, there will be an equal response in terms of emerging norms and infrastructures for archiving and finding articles. In the meantime we need to stop waiting for some tech start up to save us – plenty already exist but it’s up to us to change the way academia publishes.

  4. The main problem with PubPeer is not a lack of incentive to write a proper review, but a lack of responsibility. Because the reviews are anonymous, people focus only on the negative sides. In a normal peer-review process, reviewers who misbehave will be judged by the editor.

    I’m not saying that credit is not important. Quite the opposite: I’m really surprised that scientists review papers for free so that publishers can sell those same papers back to them a few months later. Luckily things are changing. In addition to PubPeer and PubMed Commons, there is a new platform called publons.com which not only encourages people to share their reviews under their own name, but also makes the reviews citable by assigning a DOI. Go and have a look: publons.com

    • Yes, incentive is something I have really become convinced is missing from most existing practices. I think we need to start simple, by doing things that are as close as possible to what the mainstream is already doing, and figure out how to incentivize those practices before we create whole new structures. Academics are like cats – very difficult to herd. They simply won’t use anything that smells like more work with no reward, because they are already spending all of their time doing just that!

  5. Hi Sergio,

    good points, however:

    1) I think the dispersion issue is limited, or at least won’t get much worse than it is already. I often need philosophy papers and neuroscience papers. The latter are easily found in PubMed, the former not. Already the research field is scattered beyond belief and it’s virtually impossible to track down all papers relevant to what you’re doing. So I don’t think it can get much worse. If anything, self-publication might reduce the number of bogus or slight publications, might reduce the number of “useless” reviews we all have to do, so in the end may produce (a) less to read, and (b) more time to read it.
    I agree with your reservations about Google Scholar: you don’t want to end up finding papers that only correspond to your past search behaviour. Don’t know if Scholar also works this way, but getting hits that are based on your past searches seems to go precisely against what a researcher would want. As it is, Google Scholar and Google Citations are at an almost laughably crude stage in their development I find. But that doesn’t mean they can’t improve, or something better can replace them.

    2) “low-quality self-published stuff will clog the pipelines” – no. At the outset, yes, but in the end a central database like arXiv will contain a massive amount of comments, metrics and whatnot, that in fact will make it MUCH EASIER to see whether a paper is worthwhile than in the current situation. This in turn will dis-incentivize publishing low-quality stuff, because what will it bring you?
    I see what you mean with the entry level. Indeed, you’re new, how do you get noticed? Well, (a) a central arXiv-type database would solve this; (b) even in the current system, whereas your paper is guaranteed to be read by 3 people at least, that is no guarantee whatsoever that it will not sink without a trace afterwards, or that it will be cited at all. So the entry level is very “high” now. You get entry by going to conferences, presenting your stuff, discussing with people etc. Self-publication does not do much for the person who writes up his stuff and puts it on the web. But neither does the current system. (see also 3)

    3) I follow you in the sense that you suggest smaller groups, or individuals, are now using the peer-review process as a sort of discussion and feedback forum, to improve their work to an “MIT level”. But the reality is that you often can’t do research alone in a vacuum, as I wrote earlier. Discussion is needed, so perhaps there is a certain size below which the output of a research group suffers from lack of discussion unless they communicate outside the group. This is a natural process. Ok, it might make life easier on the Big Guys, but again, isn’t it already the case now? I think one of Micah’s central points is that now you get the illusion that you’re past the post when a paper is published, but in fact you’re still nowhere.

    4) “There will be companies that promise to make your work highly visible, and just pocket the money, and others that would use “google-bombing-like” tricks to provide unfair advantages” – as Micah writes, you will have that in any system. In fact, the current system is comprised *exclusively* of those: they’re known as publishers🙂 indeed, they promise to get you some visibility, and all you have to do is convince 3 or 4 people, sometimes not of your field. Seems like a bigger scam to me. Also, the number of bogus Open Access journals just in it for the money is also huge in the current system. Apart from that, again I side with you in that Google is not to be trusted at this point, and we need a central PubMed or arXiv like system, to which people have to upload their paper if they want it to be noticed at all.

    Actually, I would like to be even more radical than Micah: I think that in the far future, the notion of “papers” as discrete snapshots of research will go altogether. A “paper” is like a 3-minute song: the consequence of the format in which it originally came more than a century ago. Songs are 3 minutes long because that’s all that could fit on a 78rpm, or 45rpm in the early 20th century. Papers are discrete precisely because they were printed on paper, which was for centuries our only means of communication. So in the end, that will go. Admit it: if someone points to a flaw in your paper, why wouldn’t you edit/rewrite it or do an additional experiment and add it, making the flaw-comment superfluous? This creates a number of additional problems, like the fact that the commentator sees his/her discrete comment contribution nullified, and how do you check for someone’s capacity to generate original research? And there I think the hiring/grants question does stand in the way. But from a science point of view, it’s, as I say, the only way to go.

    • Thanks Bert,
      I don’t think we’re distant in practice, but I’m here for a very specific reason, that has to do with the likely-to-be-different points of view.
      I’ll explain: if you are posting here, it’s almost certain that you have been actively engaged with a wider group of people who share your hopes/expectations. This introduces the risk of an enormous insular bias: from within a community that shares the same hopes/expectations, the brave new world will look both possible and unavoidable. If you are not interested in SP, if you have strong reasons to believe it won’t work, or if you are already seeing how to trick the system, you won’t post here, or participate in the discussions, because you don’t care and/or it will hurt your plans. Insular bias is huge and ubiquitous across the Internet.
      Hence, I’m actively trying to find flaws in the general idea and highlight them.
      Looks like it worked, you say:
      “we need a central PubMed or arXiv like system, to which people have to upload their paper if they want it to be noticed at all”
      which is probably the stronger point I was trying to make, and is in direct disagreement with Micah’s central point that “we don’t need a platform (not yet)”.
      Sure, we need a culture (one could argue that the current system still kind-of-works because there is a strong positive culture that protects it from bottomless degeneration), but the culture needs to reflect itself in a well-thought-through platform. If we leave the platform to the market, you can bet your whole wage that it will be an opaque system designed to look clean and hide its opaqueness (or make it look benign). This brings me to another strong objection that has to do with the fact that the current system has the wrong kind of incentives, but that would need to be discussed separately (tomorrow, I hope). For now it’s enough to say that if you’re pairing wrong incentives with opaque/closed/proprietary platforms, you won’t gain anything. It may look better, but it will be worse.
      Now your points:
      1) I work on research synthesis, and from this point of view it is imperative to improve what we have. More research is created every day, and this isn’t paired with equal improvements in:
      -Indexing/Finding Research.
      -Reading, Understanding and Using it.
      This leads to a system that becomes less and less efficient as time passes, with more and more waste. Not good.
      SP could help, but only if paired with strong and fair central platform(s) to support it. It could help immensely, but it has to be done right.
      Plus: dispersion leads to a known unknown: if you rely on private indexing services (Google and the like), you know that you are missing something, but you don’t know what, how much of it, or, crucially, why.
      Point 2) is agreed: if you have a good, strong, transparent, central platform, the problem is solved. Otherwise noise will increase.
      3) No, I don’t think you follow me. There is a difference between the feedback you can get from your local circles and what you receive from complete strangers. By definition, in the second case you maximise the chances of receiving unexpected criticism before publication. This is supposed to help overall efficiency by raising the standard before publication. My point remains: if you could afford to pay for this, you would. And since this is not about change for change’s sake, we should think about how to get something better than before, not something “just as bad”. Sure, those who can will always reap unfair advantages, but that’s why we should try to stop them, not a reason not to care. OTOH, one can say that PPPR will compensate, and that does resonate with your final point (about living documents).
      4) I fear you are still missing the main concern. Current publishers play it dirty, but there is an inherent limit to how opaque they can be. In this sense, the current drive towards “gold” open access is a big problem, because it has let publishers get away with more shadiness, not less. My point is that we need to design a change that will increase transparency, not create new opportunities for (profitable) opaqueness. We agree on the details (distrust of Google, support for publicly funded alternatives, dislike of OA journals of dubious sorts), but I’m not sure we agree for the same reasons.

      Your final point: I can see the same distant future. We agree in our hopes (apart from my worry that ever-moving targets will make research synthesis even more difficult).
      All in all, I hope you get my message: change is dangerous, and it’s a good idea to plan ahead.

      I hope I’ll be able to write my other strong objection tomorrow. Micah, you’re the landlord here: I trust you’ll let me know if I’m beginning to be too negative and/or unhelpful.

  6. My other point of view.
    For a little while, I thought about writing a similar post, though similar only in its intentions.
    My starting point is different. I think that the research environment is profoundly ill (and personally, that’s why I’m happy to be sitting on its edge), and that it needs reform. From this position, new trends like Post-Publication Peer Review (PPPR) and Self-Publishing (SP) may come to the rescue: they are to some extent inevitable (technology enables them, hence they will happen, though we don’t know in what form), and therefore they offer a chance to direct the upcoming change in a direction that could help cure the profound illness. The people who gravitate around Micah’s circle and will read this mostly share two features: they are not part of the problem, and they are likely to welcome both PPPR and SP. This is why I’m writing here (replicated on my blog – , for maximum exposure, but closed to comments, so as not to fragment the discussion): I would like to help raise awareness of the challenges that lie ahead.
    The main difficulty I face is that many of you will probably disagree with my initial diagnosis: you will probably agree that there are plenty of problems in the scientific community, but stop short of declaring that research is profoundly broken. Let’s see if I can change your mind (assuming I’m not preaching to the converted).
    Most of you will know this, but the Economist recently published two articles, “How science goes wrong” (http://t.co/CKPmho4mNn) and “Trouble at the lab” (http://t.co/ZemuDRcDEI). I’ll cite the first, because it’s short and to the point:
    “A rule of thumb among biotechnology venture-capitalists is that half of published research cannot be replicated. Even that may be optimistic.” (hint: it IS optimistic)
    “‘Negative results’ now account for only 14% of published papers, down from 30% in 1990.” (I think the source of this claim is here: http://t.co/Odz2lW1ewL – pay-wall alert!)
    This isn’t “far from ideal”; it’s disastrous. Consider this: how does research get funded? Through research grants, which need to be backed by published evidence (likely to be flawed) and by PIs’ reputations, again based on peer-reviewed publications that are equally likely to be bogus.
    P-hacking (I like to think also of “P-fishing”) is a problem too, as is a general misapplication of statistics. If you are not convinced, read this: “Most researchers don’t understand error bars” (http://t.co/euOfs3ENfh). This could be enough, but the ironic reality is that even the explanation given there is wrong (and not marginally!), and the mistake was not uncovered in the comments (I’ve checked). Ten brownie points to the first one to spot it.
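    To make the “P-fishing” point concrete, here is a minimal simulation sketch (my own illustration, not taken from any of the linked articles; all the numbers are arbitrary): a study with no true effect anywhere can still report a “significant” result simply by measuring enough outcomes.
    ```python
    # "P-fishing" on pure noise: measure many outcomes, report whatever
    # clears p < 0.05. Every hit here is, by construction, a false positive.
    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(seed=1)
    n_studies, n_outcomes, n_subjects = 2000, 20, 30

    fished = 0
    for _ in range(n_studies):
        for _ in range(n_outcomes):
            # Both groups are drawn from the same distribution: no real effect.
            control = rng.normal(0.0, 1.0, n_subjects)
            treatment = rng.normal(0.0, 1.0, n_subjects)
            if ttest_ind(control, treatment).pvalue < 0.05:
                fished += 1
                break

    # With 20 shots at p < 0.05, roughly 1 - 0.95**20 ≈ 64% of null studies
    # can still report a "finding".
    print(f"Null studies reporting a significant result: {fished / n_studies:.0%}")
    ```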
    Why does this happen? Mostly because the current system provides the wrong incentives: one needs to publish or perish, no one gains anything from “pointless” replications, PPPR doesn’t produce any advantage, and statistics is hard.
    So, let’s summarise the symptoms (based on the Economist):
    1) 50% or more of what is published is wrong.
    2) A vast majority of negative or null results are never published.
    3) Researchers won’t spot clear statistical mistakes, and a vast majority of the life sciences rely heavily on stats.
    4) No one knows about negative results, so people may try to test hypotheses that should already be recognisable as wrong.
    5) Grants are assigned by people who don’t spot evident errors, based on ‘evidence’ that is mostly wrong.
    Because of error propagation, one could estimate that 80–90% of research funds are assigned for the wrong reasons: if both the published evidence and the reputations built on it are only ~50% sound, a mere ~25% of funding decisions rest on two sound inputs, and each additional flawed layer compounds the damage. And that’s being generous.
    If this is not a terminally ill patient, I’ve never seen one.
    Convinced? If not, do your homework and make your (evidence-backed) case. Otherwise, follow me.
    Now, the Economist suggests the following solutions:
    a) Getting to grips with statistics
    b) Research protocols should be registered in advance and monitored in virtual notebooks
    c) Work out how best to encourage replication
    d) Journals should allocate space for “uninteresting” work
    e) Grant-givers should set aside money to pay for it
    f) Peer review should be tightened – or perhaps dispensed with altogether, in favour of post-publication evaluation in the form of appended comments
    g) Policymakers should ensure that institutions using public money also respect the rules.
    And I’ll add:
    h) For goodness’ sake, make sure that negative results are published, one way or another.
    These are all good suggestions; there may be more, but I guess we can work with these. The game is: can we design new procedures and foster a cultural change that will facilitate the solutions above?
    PPPR can help with a) and f); it’s already happening, so that’s good. What about self-publishing? Well, one could lead by example and SP one’s own unsuccessful experiments.
    I would also argue that one should always SP an outline of ongoing experiments, containing the predicted outcome measure. You won’t need to explain any detail, just the working hypothesis, to state what you expect to find. When the results are published, you can link to the SP declaration and show that you haven’t fished for a significant P (or publish the unsuccessful report).
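    As a purely hypothetical sketch (every field name below is my own invention, not an existing standard), such a declaration could be as simple as a small machine-readable record whose published hash makes it tamper-evident:
    ```python
    # Hypothetical pre-registration record for a self-published outline.
    import datetime
    import hashlib
    import json

    prereg = {
        "title": "Effect of treatment X on outcome Y (working title)",
        "hypothesis": "X increases Y relative to control",
        "primary_outcome": "mean Y at 4 weeks",
        "planned_n_per_group": 30,
        "planned_analysis": "two-sample t-test, two-tailed, alpha = 0.05",
        "registered_on": datetime.date.today().isoformat(),
    }

    # Self-publish the record and post its digest somewhere public (a blog,
    # Figshare, etc.). Re-hashing later proves the plan wasn't quietly
    # edited after the results came in.
    record = json.dumps(prereg, sort_keys=True, indent=2)
    print(record)
    print("sha256:", hashlib.sha256(record.encode("utf-8")).hexdigest())
    ```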
    Furthermore, to facilitate all of the points, even those that seem hopelessly out of reach (such as g), everyone can promote the necessary cultural changes, in a gazillion different ways. A few examples follow:
    I. Give a seminar about the problem and the possible solutions in your institution. If possible, make sure these things are taught at BA and postgrad levels.
    II. Talk about it. Link to this discussion, at regular intervals(!), ask your peers and PIs for their opinion, and foster the right kind of peer pressure.
    III. Reward the right attitude whenever you see it. If you’re evaluating an application (it will happen!), keep this discussion in mind and share your concerns with the panel.
    IV. If you make an application, make sure you show off the things you’re doing towards this aim (allow other people to reward the right attitude).
    V. Once in your life, choose a topic and do a Systematic Review on it (even a narrow/small one). You’ll learn a lot and will maximise the impact of existing research; encourage your junior partners/PhD students to do the same (note: I work in this field, so it’s a clear COI for me). You can always publish a systematic review; it won’t be wasted time.
    VI. Allocate one hour per week to some form of PPPR, make sure your colleagues know you are doing so.
    VII. You fill this one in: there must be other things that can be done.

    As you can see, this post is not about SP in itself; I’m trying to look at the bigger picture and talk about the culture shift that needs to precede and guide the more technical side of SP. My previous point on the need for a unified, not-for-profit platform remains valid, but is secondary to what I’ve written here. In this sense, I’m with Micah: cultural change takes precedence. And that is why I’m making the cultural problem explicit.

  7. Sergio

    I appreciate your description of the large issues plaguing sciences in general and life sciences in particular. Hopefully more openness in science will fix these sorts of problems.

    I would like to add to your comments that poor understanding of instrument bias and systematic error is often as big an issue as, or bigger than, statistical issues in the life sciences. For an example of where these types of issues overwhelm statistical ones, have a look at the MEG source localization literature and the literature it spawns. Yet researchers proceed happily along, making great claims and doing intricate analyses on data where these errors are large relative to the intricacies of their results.

    • Hi Zen, thanks for your comment!

      While I didn’t explicitly discuss archiving, the central thesis of this post is that we need to address the publishing culture before moving on to infrastructural issues like archiving. So I tried to focus on those issues rather than specific implementations, which I feel will follow more readily once the practices are in place. Personally I’m not too worried, as we already see a multitude of archiving services springing up. It’s difficult to convince a library or similar to put up the finances for an archiving system if self-publications only make up a tiny fraction of the overall publishing ecosystem. In the meantime the solution is simply to archive anywhere you can, with multiple redundancy. Server and hard-disk space is cheaper than ever, and with tools like Figshare it should be easy to find a cloud-based place to store your manuscript. Most academics also have plenty of leftover server space on their university or personal websites for extra redundancy. These make for nice interim solutions for those of us who want to start publishing PPPRs right away.

      As a side note, I think it would be counter to the spirit of this post to support any single centralized archiving solution. It would be very old-fashioned to support something that could so easily be taken down or later monetized, when a cloud- or user-based archiving solution would serve far better. Once the practices are in place, I am sure something like torrent technology could be adapted to create a pervasive distributed archive system.
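      To illustrate the torrent-like idea with a toy sketch (my own, not a proposal for any specific tool): content addressing gives a manuscript a permanent identifier derived from its bytes, so any number of independent mirrors can serve it while readers verify that no copy has been altered.
      ```python
      # Toy content addressing: the hash of the file is its permanent ID,
      # which is the core idea behind torrent-style distributed archives.
      import hashlib

      def content_address(path: str) -> str:
          """Return a stable identifier derived only from the file's bytes."""
          with open(path, "rb") as f:
              return hashlib.sha256(f.read()).hexdigest()

      # Hypothetical usage: cite the address alongside the manuscript. A copy
      # on Figshare, a university server, or a personal site is authentic if
      # and only if it hashes to the same address.
      # print(content_address("my_self_published_manuscript.pdf"))
      ```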

  8. You write: “After months of conversations with colleagues I am now convinced that real reform will come not in the shape of new tools or infrastructures, but rather in the culture surrounding academic publishing itself.”

    Isn’t this a little bit of a chicken-and-egg problem? There will be no preprint deposition without arXiv et al., and no change in culture without the supporting infrastructure. And building new infrastructure without demand is tricky, of course. There can be no Wikipedia without wikis.

    “I believe that self-publication culture will provide a finer grain of metrics than ever dreamed of under our current system.” That is pretty much obvious, IMHO.

    In general, I agree with pretty much everything you write, but I’d go a step further: institutions need to encourage and facilitate this form of scholarly communication by making it so easy that scientists will refuse to do it any other way. We need infrastructure such that anybody who can hold a pipette can share their data, software, and text descriptions thereof.

    With this infrastructure, we will have the myriad fine-grained metrics you mention. We will also have numerous layers of quality control and peer evaluation that not only feed into a scientific reputation system, but also provide tools for effective sorting, filtering, and discovery.

    • Thanks Björn, I totally agree that at some point we come to a chicken-or-egg kind of impasse. But I feel like in the more general community, outside of those of us who already think this way, there is a feeling of waiting for some crafty engineer to solve all the problems with a technological infrastructure. I am in no way trying to denigrate the solutions that are already emerging; I’m more just saying, “if we don’t change the way the general scientific community thinks about publications, it’s pretty hard to create (and fund) these solutions”.

  9. […] First, research output is measured mainly by published academic papers, and academic journals are the main venue for publishing them. But why must papers appear in journals? There are surely historical reasons: in the past there was no Internet, and papers could only spread through journals, the media that had the reach to disseminate them. With the development of the Internet, however, “self-publishing” articles has become possible; arXiv and similar services are innovations in how papers get published (for a discussion of self-publishing, see Birth of a New School: How Self-Publication can Improve Research). Even so, academic journals remain the mainstream venue for publishing papers today. But founding and running a journal requires a great deal of money and manpower. Where do the funds come from? Don’t tell me from selling the journal itself: academic journals are extremely specialized with tiny readerships, and their circulation is nowhere near the millions of copies that popular magazines sell. What’s that you say? From publication fees paid by authors? Many open-access journals do exactly that, for example Plos One: once a paper is accepted, the author must pay a hefty fee, while for readers Plos One is completely free. What is a penniless PhD student who needs a publication to graduate but can’t afford the fee to do? Easy: sell yourself! Authors can sign their copyright over to a publisher, and the publisher then gets to decide how much others must pay to read those papers. A large share of the money earned goes to support the publisher’s stable of journals. Most journals operate this way; the big publishers include Elsevier, Springer, and JSTOR. […]

  10. Imagine if peers could pre-review your writing and commit the changes to GitHub. (BTW: “including such as the p-curve” 😉)
