Depressing Quotes on Science Overflow – Reputation is the Gateway to Scientific Success

If you haven’t done so yet, go read this new eLife paper on scientific overflow, now. The authors interviewed “20 prominent principal investigators in the US, each with between 20 and 60 years of experience of basic biomedical research”, asking questions about how they view and deal with the exponential increase in scientific publications:

Our questions were grouped into four sections: (1) Have the scientists interviewed observed a decrease in the trustworthiness of science in their professional community and, if so, what are the main factors contributing to these perceptions? (2) How do the increasing concerns about the lack of robustness of scientific research affect trust in research? (3) What concerns do scientists have about science as a system? (4) What steps can be taken to ensure the trustworthiness of scientific research?

Some of the answers offer a strikingly sad view of the current state of the union:

On new open access journals, databases, etc:

There’s this proliferation of journals, a huge number of journals… and I tend not even to pay much attention to the work in some of these journals. (…) And you’re always asked to be an editor of some new journal. (…) I don’t pay much attention to them.

On the role of reputation in assessing scientific rigor and quality:

There are some people that I know to be really rigorous scientists whose work is consistently well done (…). If a paper came from a certain lab then I’m more likely to believe it than another paper that might have come from a different lab whose (…) head might be somebody that I know tends to cut corners, over-blows their conclusions, doesn’t do rigorous experiments, doesn’t appreciate the value of proper controls.

If I know that there’s a very well established laboratory with a great body of substantiated work behind it I think there is a human part of me that is inclined to expect that past quality will always be predicting future quality I think it’s a normal human thing. I try not to let that knee-jerk reaction be too strong though.

If I don’t know the authors then I will have to look more carefully at the data and (…) evaluate whether (…) I feel that the experiments were done the way I would have done them and whether there were some, if there are glaring omissions that then cast out the results (…) I mean [if] I don’t know anything I’ve never met the person or I don’t know their background, I don’t know where they trained (…) I’ve never had a discussion with them about science so I’ve never had an opportunity to gauge their level of rigour…

Another interviewee expressed scepticism about the rapid proliferation of new journals:

The journal that [a paper] is published in does make a difference to me, … I’m talking about (…) an open access journal that was started one year ago… along with five hundred other journals, (…) literally five hundred other journals, and that’s where it’s published, I have doubts about the quality of the peer review.

The cancer eating away at science is plain to see. If you don’t know the right people, your science is going to be viewed less favorably. If you don’t publish in the right journals, I’m not going to trust your science. It’s a massive self-feeding circle of power. The big rich labs will continue to get bigger and richer as their papers and grant applications are treated preferentially. This mess of heuristic biases is turning academia into a straight-up pyramid scheme. Of course, this is but a small sub-sample of the scientific community, but I can’t help but feel that these views represent a widespread opinion among the ‘old guard’ of science. Anecdotally, these comments certainly mirror some of my own experiences. I’m curious to hear what others think.

6 thoughts on “Depressing Quotes on Science Overflow – Reputation is the Gateway to Scientific Success”

  1. I sympathise with your concerns, but isn’t there a degree to which we always have to put some trust in other people’s work (in extremis, even completely open science won’t be immune to outright faking)? While trust remains important, so must reputation.

    I wrote something about the historical perspective on this in science here http://idiolect.org.uk/notes/?p=1543

    • It is true that any collective endeavor will necessarily involve some degree of trust and reputation management. However, what makes a good metric for reputation in science? Should we trust a gut feeling, based on the extremely limited sampling of face-to-face meetings? Or should we trust objective measures? If so, which ones? JIF? What about retraction index, or a reliability index based on the p-curve? Grant funding? Each of these has trade-offs, but I think familiarity and heuristic-based subjective feelings are the most likely to result in corrupt boys’ clubs.

  2. While I understand your point … it would be exceedingly naive not to be skeptical of the quality of peer review at many of those “500 new journals” until there’s some track record to evaluate.

    I do at least some background research on the regular invitations I get to contribute or review for previously-unheard of journals, and the result is often a giant red flag!

    • For sure, predatory journals are a massive problem. And ‘data repository’ style OA journals like PLOS ONE and Frontiers require a different reviewing approach altogether. I think the flipside is that all too often journal Impact Factor is taken as a proxy for prestige. I agree that that specific comment was probably more indicating the endless variety of hyper-specialist and predatory journals. Still, I’m leery of the idea that, beyond predatory journals, we should be skeptical of a paper based on its place of publication. Should we discount new journals just because they are new? In the end it is the same theme again and again: what heuristics can we rely on to shortcut the overflow and make snap quality decisions? I think reputation is probably the worst of a lot of bad options.

      If it were up to me, I’d get rid of journals altogether in favor of data repositories. But that is the topic of another post:

      https://neuroconscience.com/2015/03/20/short-post-my-science-fiction-vision-of-how-science-could-work-in-the-future/

  3. The big problem – currently unaddressed, in my personal view – is “how do you make it all work better”? There is simply too much material coming out every day for a normal scientist to make objective value-judgements. You’re painting it as wickedness – as a self-serving cabal running science – and specific to the old guard. But I suspect it’s an inevitable economic consequence of the amount of science and the assessors’ (governments, granting agencies, employers) insistence that you can only get funded if you stand out.

    I used to agree with “neuroconscience” above that data repositories were the way forward. But having seen the way the Twitter hive mind prefers a flashy, blatant lie to a nuanced truth, I don’t any more. Unless we also say “when we make hiring/firing decisions, we’re not going to look at where you publish, or how many people cited it, or how many times it was favourited or reported in the press – we will carefully consider the value of each discovery in a vacuum”. Which is not going to happen.

    But enough from me – what would you like to see done differently? Strictly practically speaking?

  4. A marginally OT remark.
    Have you stumbled on Meta-research: Evaluation and Improvement of Research Methods and Practices (Ioannidis et al.)?

    One of the take-home messages for me was that yes, lots of people are thinking, planning and actively trying to improve the status quo, but most efforts are siloed by discipline and/or other cultural barriers. The article may be a little self serving, in that it can be seen as promoting the Meta-Research Innovation Center at Stanford, but the general idea is valid: efforts should be coordinated and properly assessed as done for other policy/social science interventions.
    The fear of creating another old boys’ network of meta-researchers lurks in the background, while Ioannidis’ considerable reputation becomes a somewhat paradoxical early warning – you can’t eliminate always-on heuristic biases…
