hamster-wheel

A day in the life of micah

I’ve got a few minutes before I head to this event at the Wellcome Trust, and I can’t be bothered to stare at the paper I’ve been working on all day any longer. On this front things have been great lately, but also kind of strange. I’m writing this post mostly out of boredom, but also because I think it is fun sometimes to share the lifestyle that is familiar to most academics. Since starting my post-doc at UCL some ~2 years ago, I’ve been working in a massively parallel mode. This means I spent a long time coding experiments, then a long time running them, then a long time analyzing, and now finally a very long time writing them all up. The upside is that this approach generates a lot of synergy; write a function for one experiment and it mostly works for all the rest. Get a nice bit of storytelling going in one paper, and it mostly generalizes. And so on. The downside is that you spend a hell of a lot of time doing the same thing every day, and this can get a bit repetitive. Currently I’ve got 5 papers submitted or in revision, and another 2 due to be submitted this summer or fall. When expanding to include my collaborations – something UCL is truly awesome for generating – I’m sitting on about 10-12 papers all nearing or in submission. What this means is that right now I am effectively a writing machine, powered by self-recrimination, caffeine, and chocolate. When I’m not writing, I’m thinking or talking about it. Pretty much everything I do in some way serves the purpose of either getting words on paper, editing them, or getting new words in my mind. Sometimes I start getting a little loopy; the clusterfuck of insanity that is the 2016 political arena certainly doesn’t help.

What this means is that basically, I don’t have very much time to do things I love like blogging and generating engaging content on social media. Many of you are probably pretty sick of my political posts by now, but the truth is I don’t have time for much else. Most of my digital ‘free time’ occurs during the commute, bathroom breaks, meals, or other random downtime. I thought it would be funny to try and actually give my readers a feeling for how a typical day goes for me, so here goes:

~8am – wake up – I’m a slave to wake/sleep routines, so I wake up at 8 without an alarm regardless of what I did the day before. Yay, suprachiasmatic nucleus! Usually I wake up thinking about whatever I was writing the day before.

9am – breakfast (cottage cheese or a low-carb avocado smoothie) and espresso. Despair over the latest political news. Surf my twitter neuroscience list. Catch up on emails and think about what I need to do for the day.

10:00 am – leave house, tweet from the tube, read RSS, start booting up the old brain and thinking about what I should write today.

10:30am – arrive at work. Greet colleagues, skim twitter and facebook, post a link or two. Manage WTCN twitter account if any new events/pubs. Hit first pomodoro.

10:30 – 12:30 – least productive part of the day. Mess around with a random analysis, answer emails, procrastinate writing. Look at the current paper and quickly look away. Drink tea and fiddle with my phone. Try to stay off social media, sometimes fail. Skim a paper or three.

12:30 – 1:30 Lunch with my wife and colleague Francesca. Usually she tells me all the awesome stuff she’s done so far, and I complain about not working productively yet. We discuss papers or I surf twitter.

1:30 – 3:00 on a good day, now some real writing begins. On a bad day, stare at the page, read more papers, tweet too much, curse my brain. Pace around the WTCN thinking about the paper. Sometimes live tweet a paper if desperate.

3:00 dark chocolate break and walk around the park with Francesca. This is essential. We talk about whatever has been plaguing us, walk around Russell Square, and have some really excellent theoretical discussions while I scarf down too much dark chocolate. Leave me alone, we’re all addicted to something.

4:30 – 8:00 – golden time. Everyone leaves the office, so Francesca and I take over a meeting room and I drink a double espresso. We storyboard/mind map whatever we’ve been struggling with. Serious writing happens. I get into flow mode and jam out a few paragraphs, edit them manically, struggle through flights of fancy and dismay. Usually this is where the good work really happens.

8:00 dinner, review writing, start tweeting. Head home and read more political news. Read the day’s writing on my phone from the tube. Surf twitter until 12. Try not to think about the paper. Fall asleep reading the news/reddit.

Wake up thinking about the paper.

On Tuesdays and Thursdays I go climbing from 8-10. On Saturdays I go to the Jolly Butchers to drink awesome craft ales and do some shopping in Stoke Newington. These are essential for sanity!

 

 


fMRI study of Shamans tripping out to phat drumbeats

Every now and then, I’m browsing RSS on the tube commute and come across a study that makes me laugh out loud. This of course results in me receiving lots of ‘tuts’ from my co-commuters. Anyhow, the latest such entry to the world of cognitive neuroscience is a study examining the brain response to drum beats in shamanic practitioners. Michael Hove and colleagues of the Max Planck Institute in Leipzig set out to study “Perceptual Decoupling During an Absorptive State of Consciousness” using functional magnetic resonance imaging (fMRI). What exactly does that mean? Apparently: looking at how brain connectivity in ‘experienced shamanic practitioners’ changes when they listen to rhythmic drumming. Hove and colleagues explain that across a variety of cultures, ‘quasi-isochronous drumming’ is used to induce ‘trance states’. If you’ve ever been dancing around a drum circle in the full moonlight, or tranced out to Shpongle in your living room, I guess you get the feeling, right?

Anyway, Hove et al recruited 15 participants who were trained in “core shamanism,” described as:

“a system of techniques developed and codified by Michael Harner (1990) based on cross-cultural commonalities among shamanic traditions. Participants were recruited through the German-language newsletter of the Foundation of Shamanic Studies and by word of mouth.”

They then played these participants rhythmic isochronous drumming (trance condition) versus drumming with a less regular timing. In what might be the greatest use of a Likert scale of all time, participants rated whether they “would describe [their] experience as a deep shamanic journey” (1 = not at all; 7 = very much so), and indeed rated the trance condition as, well, more trancey. Hove and colleagues then used a fairly standard connectivity analysis, examining eigenvector centrality differences between the two drumming conditions, as well as seed-based functional connectivity:

[Figure: eigenvector centrality differences, trance vs. non-trance drumming]

[Figure: seed-based functional connectivity results]
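To make the centrality measure concrete: eigenvector centrality scores each node by how strongly it connects to other well-connected nodes. Here’s a toy sketch in Python with simulated time series – purely an illustration of the measure, not the authors’ whole-brain voxelwise pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate BOLD time series for a handful of regions (toy data).
n_regions, n_timepoints = 6, 200
ts = rng.standard_normal((n_timepoints, n_regions))

# Functional connectivity: absolute Pearson correlation between regions.
fc = np.abs(np.corrcoef(ts, rowvar=False))
np.fill_diagonal(fc, 0)

# Eigenvector centrality: the leading eigenvector of the connectivity
# matrix. A region is 'hublike' when it connects strongly to other
# well-connected regions.
eigvals, eigvecs = np.linalg.eigh(fc)
centrality = np.abs(eigvecs[:, -1])  # eigenvector of the largest eigenvalue
centrality /= centrality.sum()       # normalize for comparison
```

Comparing such centrality maps between the trance and non-trance conditions is essentially what “becoming more hublike” means in this kind of analysis.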

Hove et al report that compared to the non-trance condition, the posterior/dorsal cingulate, insula, and auditory brainstem regions become more ‘hublike’, as indicated by a higher overall degree centrality of these regions. Further, these regions showed stronger functional connectivity with the posterior cingulate cortex. I’ll let Hove and colleagues explain what to make of this:

“In sum, shamanic trance involved cooperation of brain networks associated with internal thought and cognitive control, as well as a dampening of sensory processing. This network configuration could enable an extended internal train of thought wherein integration and moments of insight can occur. Previous neuroscience work on trance is scant, but these results indicate that successful induction of a shamanic trance involves a reconfiguration of connectivity between brain regions that is consistent across individuals and thus cannot be dismissed as an empty ritual.”

Ultimately the authors’ conclusion seems to be that these brain connectivity differences show that, if nothing else, something must be ‘really going on’ in shamanic states. To be honest, I’m not really sure anyone disagreed with that to begin with. I can’t critique this study without thinking of early (and ongoing) meditation research, where esoteric monks are placed in scanners to show that ‘something really is going on’ in meditation. This argument seems to me to rely on a folk-psychological misunderstanding of how the brain works. Even in placebo conditioning, a typical example of a ‘mental effect’, we know of course that changes in the brain are responsible. Every experience (regardless of how complex) has some neural correlate. The trick is to relate these neural factors to behavioral ones in a way that actually advances our understanding of the mechanisms and experiences that generate them. The difficulty with these kinds of studies is that all we can do is perform reverse inference to try and interpret what is going on; the authors’ conclusion about changes in sensory processing is a clear example of this. What do changes in brain activity actually tell us about trance (and other esoteric) states? Certainly they don’t reveal any particular mechanism or phenomenological quality without being coupled to some meaningful understanding of the states themselves. As a clear example, aren’t we pushing reductionism to its limit by asking participants to rate a self-described transcendent state on a unidirectional Likert scale? The authors do cite Francisco Varela (a pioneer of neurophenomenological methods), but don’t seem to further consider these limitations or possible future directions.

Overall, I don’t want to seem overly critical of this amusing study. Certainly shamanic traditions are a deeply important part of human cultural history, and understanding how they impact us emotionally, cognitively, and neurologically is a valuable goal. For what amounts to a small pilot study, the protocols seem fairly standard from a neuroscience standpoint. I’m less certain about who these ‘shamans’ actually are, in terms of what their practice actually constitutes, or how to think about the supposed ‘trance states’, but I suppose ‘something interesting’ was definitely going on. The trick is knowing exactly what that ‘something’ is.

Future studies might thus benefit from a more direct characterization of esoteric states and the cultural practices that generate them, perhaps through collaboration with an anthropologist and/or the application of phenomenological and psychophysical methods. For now, however, I’ll just have to head to my local drum circle and vibe out the answers to these questions.

Hove MJ, Stelzer J, Nierhaus T, Thiel SD, Gundlach C, Margulies DS, Van Dijk KRA, Turner R, Keller PE, Merker B (2016) Brain Network Reconfiguration and Perceptual Decoupling During an Absorptive State of Consciousness. Cerebral Cortex 26:3116–3124.

 


Mapping the effects of age on brain iron, myelination, and macromolecules – with data!

The structure, function, and connectivity of the brain change considerably as we age1–4. Recent advances in MRI physics and neuroimaging have led to the development of new techniques which allow researchers to map quantitative parameters sensitive to key histological brain factors such as iron and myelination5–7. These quantitative techniques reveal the microstructure of the brain by leveraging our knowledge about how different tissue types respond to specialized MRI-sequences, in a fashion similar to diffusion-tensor imaging, combined with biophysical modelling. Here at the Wellcome Trust Centre for Neuroimaging, our physicists and methods specialists have teamed up to push these methods to their limit, delivering sub-millimetre, whole-brain acquisition techniques that can be completed in less than 30 minutes. By combining advanced biophysical modelling with specialized image co-registration, segmentation, and normalization routines in a process known as ‘voxel-based quantification’ (VBQ), these methods allow us to image key markers of histological brain factors. Here is a quick description of the method from a primer at our centre’s website:

Anatomical MR imaging has not only become a cornerstone in clinical diagnosis but also in neuroscience research. The great majority of anatomical studies rely on T1-weighted images for morphometric analysis of local gray matter volume using voxel-based morphometry (VBM). VBM provides insight into macroscopic volume changes that may highlight differences between groups; be associated with pathology or be indicative of plasticity. A complementary approach that has sensitivity to tissue microstructure is high resolution quantitative imaging. Whereas in T1-weighted images the signal intensity is in arbitrary units and cannot be compared across sites or even scanning sessions, quantitative imaging can provide neuroimaging biomarkers for myelination, water and iron levels that are absolute measures comparable across imaging sites and time points.

These biomarkers are particularly important for understanding aging, development, and neurodegeneration throughout the lifespan. Iron in particular is critical for the healthy development and maintenance of neurons, where it is used to drive ATP production in the glial support cells that create and maintain the myelin sheaths critical for neural function. Nutritional iron deficiency during foetal, childhood, or even adolescent development is linked to impaired memory and learning, and altered hippocampal function and structure8,9. Although iron homeostasis in the brain is hugely complex and poorly understood, we know that runaway iron in the brain is a key factor in degenerative diseases like Alzheimer’s and Parkinson’s10–16. Data from both neuroimaging and post-mortem studies indicate that brain iron increases throughout the lifespan, particularly in structures rich in neuromelanin such as the basal ganglia, caudate, and hippocampus. In Alzheimer’s and Parkinson’s, for example, it is thought that runaway iron in these structures eventually overwhelms the glial systems responsible for chelating (processing) iron; because iron becomes neurotoxic at excessive levels, this leads to a runaway chain of neural atrophy throughout the brain. Although we don’t know how this process begins (scientists believe factors including stress- and disease-related neuroinflammation, normal aging processes, and genetics all probably contribute), understanding how iron and myelination change over the lifespan is a crucial step towards understanding these diseases. Furthermore, because VBQ provides quantitative markers, data can be pooled and compared across research centres.

Recently I’ve been doing a lot of work with VBQ, examining for example how individual differences in metacognition and empathy relate to brain microstructure. One thing we were interested in doing with our data was examining whether we could follow up on previous work from our centre showing widespread age-related changes in iron and myelination. This was a pretty easy analysis to do using our 59 subjects, so I quickly put together a standard multiple regression model including age, gender, and total intracranial volume. Below are the maps for magnetization transfer (MT), longitudinal relaxation rate (R1), and effective transverse relaxation rate (R2*), which measure brain macromolecules/water, myelination, and iron respectively (click each image to explore the map in NeuroVault!). All maps are FWE-cluster corrected, adjusting for non-sphericity, at a p < 0.001 inclusion threshold.
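For the curious, the model here is just an ordinary voxelwise GLM. A minimal numpy sketch on simulated data gives the idea – the real analysis was run with SPM’s non-sphericity adjustment and FWE cluster correction, which this toy omits, and all names and values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 59 subjects, a handful of 'voxels' from a quantitative map
# (e.g. R2*), with a small positive age effect baked in.
n_subjects, n_voxels = 59, 1000
age = rng.uniform(18, 39, n_subjects)
gender = rng.integers(0, 2, n_subjects)       # 0/1 coded
tiv = rng.normal(1500.0, 120.0, n_subjects)   # total intracranial volume
maps = rng.standard_normal((n_subjects, n_voxels)) + 0.02 * age[:, None]

# Design matrix: intercept + age + gender + TIV.
X = np.column_stack([np.ones(n_subjects), age, gender, tiv])

# Fit the GLM at every voxel at once with ordinary least squares.
beta, _, _, _ = np.linalg.lstsq(X, maps, rcond=None)
resid = maps - X @ beta
dof = n_subjects - X.shape[1]
sigma2 = (resid ** 2).sum(axis=0) / dof

# t-statistic for the age regressor at each voxel.
c = np.array([0.0, 1.0, 0.0, 0.0])            # contrast: effect of age
var_c = c @ np.linalg.inv(X.T @ X) @ c
t_age = (c @ beta) / np.sqrt(sigma2 * var_c)
```

Thresholding the resulting t-map (with a multiple-comparisons correction such as FWE cluster correction) is what produces the maps shown below.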

 

Effect of aging on MT

You can see that there is increased MT throughout the brain, particularly in the amygdala, postcentral gyrus, thalamus, and other midbrain and prefrontal areas. MT (roughly) measures water content in the brain, and is mostly sensitive to myelination and macromolecules found in cells such as microglia and astrocytes. Interestingly, our findings here contrast with Callaghan et al (2014), who found decreases in myelination where we find increases. This is probably explained by differences in our samples.

 

Effect of aging on R1

R1 shows much more restricted effects, with increased R1 only in the left postcentral gyrus, at least in this sample. This is in contrast to Callaghan et al2, who found extensive negative MT & R1 effects, but that was in a much larger sample with a much wider age range (19-75, mean = 45). Interestingly, Martina and colleagues actually reported widespread decreases in R1, whereas we find no decreases and instead slight increases in both MT and R1. This may imply a U-shaped response of myelin to aging, which would fit with previous structural studies.

Our iron-sensitive map (R2*) somewhat reproduces their effects however, with significant increases in the hippocampus, posterior cingulate, caudate, and other dopamine-rich midbrain structures:

 

Effect of aging on R2*

Wow! What really strikes me about this is that we can find age-related increases in a very young sample of mostly UCL students. Iron is already accumulating in the 18-39 age range. For comparison, here are the key findings from Martina’s paper:

 

From Callaghan et al, 2014. Increasing iron in green, decreasing myelin in red.

 

The age effects in the left hippocampus are particularly interesting, as we found that iron and myelination in this area related to these participants’ metacognitive ability, while controlling for age. Could this early-life iron accumulation be a predictive biomarker for developing neurodegenerative disease later in life? I think so. Large-sample prospective imaging could really open up this question; does anyone know if UK Biobank will collect this kind of data? UK Biobank will eventually contain ~200k scans with full medical workups and follow-ups. In a discussion with Karla Miller on Facebook she mentioned there may be some low-resolution R2* images in that data. It could really be a big step forward to ask whether the first time-point predicts clinical outcome; ultimately early-life iron accumulation could be a key biomarker for neurodegeneration.

 


References

  1. Gogtay, N. & Thompson, P. M. Mapping gray matter development: implications for typical development and vulnerability to psychopathology. Brain Cogn. 72, 6–15 (2010).
  2. Callaghan, M. F. et al. Widespread age-related differences in the human brain microstructure revealed by quantitative magnetic resonance imaging. Neurobiol. Aging 35, 1862–1872 (2014).
  3. Sala-Llonch, R., Bartrés-Faz, D. & Junqué, C. Reorganization of brain networks in aging: a review of functional connectivity studies. Front. Psychol. 6, 663 (2015).
  4. Sugiura, M. Functional neuroimaging of normal aging: Declining brain, adapting brain. Ageing Res. Rev. (2016). doi:10.1016/j.arr.2016.02.006
  5. Weiskopf, N., Mohammadi, S., Lutti, A. & Callaghan, M. F. Advances in MRI-based computational neuroanatomy: from morphometry to in-vivo histology. Curr. Opin. Neurol. 28, 313–322 (2015).
  6. Callaghan, M. F., Helms, G., Lutti, A., Mohammadi, S. & Weiskopf, N. A general linear relaxometry model of R1 using imaging data. Magn. Reson. Med. 73, 1309–1314 (2015).
  7. Mohammadi, S. et al. Whole-Brain In-vivo Measurements of the Axonal G-Ratio in a Group of 37 Healthy Volunteers. Front. Neurosci. 9, (2015).
  8. Carlson, E. S. et al. Iron Is Essential for Neuron Development and Memory Function in Mouse Hippocampus. J. Nutr. 139, 672–679 (2009).
  9. Georgieff, M. K. The role of iron in neurodevelopment: fetal iron deficiency and the developing hippocampus. Biochem. Soc. Trans. 36, 1267–1271 (2008).
  10. Castellani, R. J. et al. Iron: The Redox-active Center of Oxidative Stress in Alzheimer Disease. Neurochem. Res. 32, 1640–1645 (2007).
  11. Bartzokis, G. Alzheimer’s disease as homeostatic responses to age-related myelin breakdown. Neurobiol. Aging 32, 1341–1371 (2011).
  12. Gouw, A. A. et al. Heterogeneity of white matter hyperintensities in Alzheimer’s disease: post-mortem quantitative MRI and neuropathology. Brain 131, 3286–3298 (2008).
  13. Bartzokis, G. et al. MRI evaluation of brain iron in earlier- and later-onset Parkinson’s disease and normal subjects. Magn. Reson. Imaging 17, 213–222 (1999).
  14. Berg, D. et al. Brain iron pathways and their relevance to Parkinson’s disease. J. Neurochem. 79, 225–236 (2001).
  15. Dexter, D. T. et al. Increased Nigral Iron Content and Alterations in Other Metal Ions Occurring in Brain in Parkinson’s Disease. J. Neurochem. 52, 1830–1836 (1989).
  16. Jellinger, P. D. K., Paulus, W., Grundke-Iqbal, I., Riederer, P. & Youdim, M. B. H. Brain iron and ferritin in Parkinson’s and Alzheimer’s diseases. J. Neural Transm. – Park. Dis. Dement. Sect. 2, 327–340 (1990).

 


In defence of preregistration

Psychbrief has a great rebuttal to a recent paper arguing against pre-registration. Go read it!

PsychBrief

This post is a response to “Pre-Registration of Analysis of Experiments is Dangerous for Science” by Mel Slater (2016). Preregistration is stating what you’re going to do and how you’re going to do it before you collect data (for more detail, read this). Slater gives a few examples of hypothetical (but highly plausible) experiments and explains why preregistering the analyses of the studies (not preregistration of the studies themselves) would not have worked. I will reply to his comments and attempt to show why he is wrong.

Slater describes an experiment where they are conducting a between-groups experimental design, with 2 conditions (experimental & control), 1 response variable, and no covariates. You find the expected result but it’s not exactly as you predicted. It turns out the result is totally explained by the gender of the participants (a variable you weren’t initially analysing but which was balanced by chance). So…


OKCupid Data Leak – Framing the Debate

You’ve probably heard by now that a ‘researcher’ by the name of Emil Kirkegaard released the sensitive data of 70,000 individuals from OKCupid on the Open Science Framework. This is an egregious violation of research ethics and we’re already beginning to see mainstream media coverage of this unfolding story. I’ve been following this pretty closely as it involves my PhD alma mater, Aarhus University. All I want to do here is collect relevant links and facts for those who may not be aware of the story. This debacle is likely going to become a key discussion piece in future debates over how to conduct open science. Jump to the bottom of this post for a live-updated collection of news coverage, blogs, and tweets as this issue unfolds.

Emil himself continues to fan the flames by being totally unapologetic:

An open letter has been formed here, currently with the signatures of over 150 individuals (myself included) petitioning Aarhus University for a full statement and investigation of the issue:

https://docs.google.com/document/d/1xjSi8gFT8B2jw-O8jhXykfSusggheBl-s3ud2YBca3E/edit

Meanwhile Aarhus University has stated that Emil acted without oversight or any affiliation with AU, and that if he has claimed otherwise they intend to take (presumably legal) action:

 

I’m sure a lot more is going to be written as this story unfolds; the implications for open science are potentially huge. Already we’re seeing scientists wonder if this portends previously unappreciated risks of sharing data:

I just want to try and frame a few things. In the initial dust-up of this story there was a lot of confusion. I saw multiple accounts describing Emil as a “PI” (principal investigator), asking for his funding to be withdrawn, etc. At the time the details surrounding this were rather unclear. Now, as more and more emerge, they seem to paint a rather different picture, which is not being accurately portrayed so far in the media coverage:

Emil is not a ‘researcher’. He acted without any supervision or direct affiliation with AU. He is a masters student who claims on his website that he is ‘only enrolled at AU to collect SU [government funds]’. I’m seeing that most of the outlets describe this as ‘researchers release OKCupid data’. When considering the implications of this for open science and data sharing, we need to frame this as what it is: a group of hacktivists exploiting a security vulnerability under the guise of open science, NOT a university-backed research program.

What implications does this have for open science? From my perspective it looks like we need to discuss the role of oversight and data protection. Ongoing twitter discussion suggests Emil violated EU data protection laws and the OKCupid terms of service. But other sources argue that this kind of scraping ‘attack’ is basically data-gathering 101 and that nearly any undergraduate with the right education could have done this. It seems like we need to have a conversation about our digital rights to data privacy, and whether current protections are doing enough to safeguard us. Doesn’t OKCupid itself hold some responsibility for allowing this data to be accessed so easily? And what is the responsibility of the Open Science Framework? Do we need to put stronger safeguards in place? Could an organization like Anonymous, or even ISIS, ‘dox’ thousands of people and host the data there? These are extreme situations, but I think we need to frame them now before people walk away with the idea that this is an indictment of data sharing in general.

Below is a collection of tweets, blogs, and news coverage of the incident:


Tweets:

Brian Nosek on the Open Science Framework’s response:

More tweets on larger issues:

 

Emil has stated he is not acting on behalf of AU:


 

News coverage:

Vox:

http://www.vox.com/2016/5/12/11666116/70000-okcupid-users-data-release?utm_campaign=vox&utm_content=chorus&utm_medium=social&utm_source=twitter

Motherboard:

http://motherboard.vice.com/read/70000-okcupid-users-just-had-their-data-published

ZDNet:

http://www.zdnet.com/article/okcupid-user-accounts-released-for-the-titillation-of-the-internet/

Forbes:

http://www.forbes.com/sites/emmawoollacott/2016/05/13/intimate-data-of-70000-okcupid-users-released/#2533c34c19bd

http://www.themarysue.com/okcupid-profile-leak/

Here is a great example of how bad this is; Wired runs a story with the headline ‘OkCupid study reveals perils of big data science’:

OkCupid Study Reveals the Perils of Big-Data Science

This is not a study! It is not ‘science’! At least not by any principled definition!


Blogs:

https://ironholds.org/blog/when-science-goes-bad-consent-data-and-doubling-down-on-the-internet/

https://sakaluk.wordpress.com/2016/05/12/10-on-the-osfokcupid-data-dump-a-batman-analogy/

http://emilygorcenski.com/blog/when-open-science-isn-t-the-okcupid-data-breach

Here is a defense of Emil’s actions:
https://artir.wordpress.com/2016/05/13/in-defense-of-emil-kirkegaard/

 


Is Frontiers in Trouble?

Lately it seems like the rising tide is going against Frontiers. Originally hailed as a revolutionary open-access publishing model, the publishing group has been subject to intense criticism in recent years. Recent issues include being placed on Beall’s controversial ‘predatory publisher list’, multiple high profile disputes at the editorial level, and controversy over HIV and vaccine denialist articles published in the journal seemingly without peer review. As a proud author of two Frontiers articles and former frequent reviewer, these issues, compounded with a generally poor perception of the journal, recently led me to stop all publication activities at Frontiers outlets. Although the official response from Frontiers to these issues has been mixed, yesterday a mass-email from a section editor caught my eye:

Dear Review Editors, Dear friends and colleagues,

As some of you may know, Prof. Philippe Schyns recently stepped down from his role as Specialty Chief Editor in Frontiers in Perception Science, and I have been given the honor and responsibility of succeeding him into this function. I wish to extend to him my thanks and appreciation for the hard work he has put in building this journal from the ground up. I will strive to continue his work and maintain Frontiers in Perception Science as one of the primary journals of the field. This task cannot be achieved without the support of a dynamic team of Associate Editors, Review Editors and Reviewers, and I am grateful for all your past, and hopefully future efforts in promoting the journal.

I am aware that many scientists in our community have grown disappointed or even defiant of the Frontiers publishing model in general, and Frontiers in Perception Science is no exception here. Among the foremost concerns are the initial annoyance and ensuing disinterest produced by the automated editor/reviewer invitation system and its spam-like messages, the apparent difficulty in rejecting inappropriate manuscripts, and (perhaps as a corollary), the poor reputation of the journal, a journal to which many authors still hesitate before submitting their work. I have experienced these troubles myself, and it was only after being thoroughly reassured by the Editorial office on most of these counts that I accepted to get involved as Specialty Chief Editor. Frontiers is revising their system, which will now leave more time for Associate Editors to mandate Review Editors before sending out automated invitations. When they occur, automated RE invitations will be targeted to the most relevant people (based on keyword descriptors), rather than broadcast to the entire board. This implies that it is very important for each of you to spend a few minutes editing the Expertise keywords on your Loop profile page. Most of these keywords were automatically collected within your publications, and they may not reflect your true area of expertise. Inappropriate expertise keywords are one of the main reasons why you receive inappropriate reviewing invitations! In the new Frontiers system, article rejection options will be made more visible to the handling Associate Editor. Although my explicit approval is still required for any manuscript rejection, I personally vow to stand behind all Associate Editors who will be compelled to reject poor-quality submissions. (While perceived impact cannot be used as a rejection criterion, poor research or writing quality and objective errors in design, analysis or interpretation can and should be used as valid causes for rejection).
I hope that these measures will help limit the demands on the reviewers’ time, and contribute to advancing the standards and reputation of Frontiers in Perception Science. Each of you can also play a part in this effort by continuing to review articles that fall into your area of expertise, and by submitting your own work to the journal.

I look forward to working with all of you towards establishing Frontiers in Perception Science as a high-standard journal for our community.

It seems Frontiers is indeed aware of the problems and is hoping to bring back wary reviewers and authors. But is it too little, too late? Discussing the problems at Frontiers is often met with severe criticism or outright dismissal by proponents of the OA publishing system, but I felt these responses neglected a wider negative perception of the publisher that has steadily grown over the past 5 years. To get a better handle on this I asked my twitter followers what they thought. 152 people responded as follows:

As some of you requested control questions, here are a few for comparison:

 

That is a stark difference between the two top open access journals – whereas only 19% said there was no problem at Frontiers, a full 50% said there was no problem at PLOS ONE. I think we can see that even accounting for general science skepticism, opinions of Frontiers are particularly negative.

Sam Schwarzkopf also lent some additional data, comparing the whole field of major open access outlets – Frontiers again comes out poorly, although strangely so does F1000:

These data confirm what I had already feared: public perception among scientists (insofar as we can infer anything from such a poll) is lukewarm at best. Frontiers has a serious perception problem. Only 19% of 121 respondents were willing to say outright that there was no problem at the journal. A full 45% said there was a serious problem, and 36% were unsure. Of course, to fully evaluate these numbers we’d like to know the base rate of similar responses for other journals, but I cannot imagine any Frontiers author, reviewer, or editor feeling joy at these numbers – I certainly do not. Furthermore, they reflect a widespread negativity I hear frequently from colleagues across the UK and Denmark.

What underlies this negative perception? As many proponents point out, Frontiers has actually been quite diligent at responding to user complaints. Controversial papers have been put immediately under review, overly spammy review invitations and special issue invites have largely ceased, and so on. I would argue the issue is not any one single mistake on the part of Frontiers leadership, but a growing history of errors contributing to a perception that the journal follows a profit-led ‘publish anything’ model. At times the journal feels totally automated, with little human care given to publishing and extremely high fees. What are some of the specific complaints I regularly hear from colleagues?

  • Spammy special issue invites. An older issue, but at Frontiers’ inception many authors were inundated with constant invites to special issues, many of which were only tangentially related to the authors’ specialties.
  • Spammy review invites. Colleagues who signed on to be ‘Review Editors’ (basically repeat reviewers) reported being hit with as many as 10 requests to review in a month, again many without relevance to their interests.
  • Related to both of the above, a perception that special issues and articles are frequently reviewed by close colleagues with little oversight. Similarly, many special issues were edited by junior researchers at the PhD level.
  • Endless review. I’ve heard numerous complaints that even fundamentally flawed or unpublishable papers are impossible or difficult to reject. Reviewers report going through multiple rounds of charitable review, finding the paper only gets worse and worse, only to be removed from the review by editors and the paper published without them.

Again, Frontiers has responded to each of these issues in various ways. For example, Frontiers originally defended the special issues, saying that they were intended to give junior researchers an outlet to publish their ideas. Fair enough, and the spam issues have largely ceased. Still, I would argue it is the build-up and repetition of these issues that has made authors and readers wary of the journal. This, coupled with the high fees and the feeling of automation, leads to a perception that the outlet is mostly junk. That is a shame, as there are certainly many high-value articles in Frontiers outlets. Nevertheless, academics are extremely skittish, and negative press creates a vicious feedback loop. If researchers feel Frontiers is a low-quality, spam-generating publisher that relies on overly automated processes, they are unlikely to submit their best work or review there. The quality of both drops, and the cycle intensifies.

For my part, I don’t intend to return to Frontiers unless they begin publishing reviews. I think this would go a long way to stemming many of these issues and encourage authors to judge individual articles on their own merits.

What do you think? What can be done to stem the tide? Please add your own thoughts, and stories of positive or negative experiences at Frontiers, in the comments.

____

Edit:

A final comparison question

 

 


The Wild West of Publication Reform Is Now

It’s been a while since I’ve tried out my publication reform revolutionary hat (it comes in red!), but tonight as I was winding down I came across a post I simply could not resist. Titled “Post-publication peer review and the problem of privilege” by evolutionary ecologist Stephen Heard, the post argues that we should be cautious of post-publication review schemes insofar as they may bring about a new era of privilege in research consumption. Stephen writes:

“The packaging of papers into conventional journals, following pre-publication peer review, provides an important but under-recognized service: a signalling system that conveys information about quality and breadth of relevance. I know, for instance, that I’ll be interested in almost any paper in The American Naturalist*. That the paper was judged (by peer reviewers and editors) suitable for that journal tells me two things: that it’s very good, and that it has broad implications beyond its particular topic (so I might want to read it even if it isn’t exactly in my own sub-sub-discipline). Take away that peer-review-provided signalling, and what’s left? A firehose of undifferentiated preprints, thousands of them, that are all equal candidates for my limited reading time (such that it exists). I can’t read them all (nobody can), so I have just two options: identify things to read by keyword alerts (which work only if very narrowly focused**), or identify them by author alerts. In other words, in the absence of other signals, I’ll read papers authored by people who I already know write interesting and important papers.”

In a nutshell, Stephen turns the entire argument for PPPR and publishing reform on its head. High-impact[1] journals don’t represent elitism; rather, they give the no-name rising young scientist a chance to have their work read and cited. This argument really made me pause for a second, as it represents the polar opposite of almost my entire worldview on the scientific game and academic publishing. In my view, top-tier journals represent an entrenched system of elitism masquerading as meritocracy. They make arbitrary, journalistic decisions that exert intense power over career advancement. If anything, the self-publication revolution represents the ability of a ‘nobody’ to shake the field with a powerful argument or study.

Needless to say I was at first shocked to see this argument supported by a number of other scientists on Twitter, who felt that it represented “everything wrong with the anti-journal rhetoric” spouted by loons such as myself. But then I remembered that this is in fact a version of an argument I hear almost weekly when similar discussions come up with colleagues. Ever since I wrote my pie-in-the-sky self-publishing manifesto (don’t call it a manifesto!), I’ve been subjected (and rightly so!) to a kind of trial-by-peers as a de facto representative of the ‘revolution’. Most recently I was even cornered at a holiday party by a large and intimidating physicist who yelled at me that I was naïve and that “my system” would never work, for almost the exact reasons raised in Stephen’s post. So let’s take a look at what these common worries are.

The Filter Problem

Bar none the first, most common complaint I hear when talking about various forms of publication reform is the “filter problem”. Stephen describes the fear quite succinctly; how will we ever find the stuff worth reading when the data deluge hits? How can we sort the wheat from the chaff, if journals don’t do it for us?

I used to take this problem seriously, and tried to dream up all kinds of neato reddit-like schemes to solve it. But the truth is, it just represents a way of thinking that is rapidly becoming irrelevant. Journal-based indexing isn’t a useful way to find papers. It is one signal in a sea of information, and it isn’t at all clear what it actually represents. I feel like the people who worry about the filter problem tend to be more senior scientists who already struggle to keep up with the literature. For one thing, science is marked by an incessant march towards specialization. The notion that hundreds of people must read and cite our work for it to be meaningful is largely poppycock. The average paper is mostly technical, incremental, and obvious in nature. This is absolutely fine and necessary – not everything can be groundbreaking, and even the breakthroughs must be vetted in projects that are by definition less so. For the average paper, then, being regularly cited by 20–50 people is damn good and likely represents the total target audience in that topic area. If you network with those people using social media and traditional conferences, it really isn’t hard to get your paper into their hands.

Moreover, the truly groundbreaking stuff will find its audience no matter where it is published. We solve the filter problem every single day, by publicly sharing and discussing papers that interest us. Arguing that we need journals to solve this problem ignores the fact that they obscure good papers behind meaningless brands and, more importantly, that scientists are perfectly capable of identifying excellent papers from content alone. You can smell a relevant paper from a mile away – regardless of where it is published! We don’t need to wait for some pie-in-the-sky centralised service to solve this ‘problem’ (although someday, once the dust settles, I’m sure such things will be useful). Just go out and read some papers that interest you! Follow some interesting people on twitter. Develop a professional network worth having! And don’t buy into the idea that the whole world must read your paper for it to be worth it.

The Privilege Problem 

Ok, so let’s say you agree with me to this point. Using some combination of email, social media, alerts, and RSS you feel fully capable of finding relevant stuff for your research (I do!). But you’re worried about this brave new world where people archive any old rubbish they like and embittered post-docs descend to sneer gleefully at it from the dark recesses of pubpeer. Won’t the new system be subject to favouritism, cults of personality, and the privilege of the elite? As Stephen says, isn’t it likely that popular persons will have their papers reviewed and promoted while all the rest fade into the background?

The answer is yes and no. As I’ve said many times, there is no utopia. We can and must fight for a better system, but cheaters will always find a way[2]. No matter how much transparency and rigor we implement, someone is going to find a loophole. And the oldest of all loopholes is good old human corruption and hero worship. I’ve personally advocated for a day when data, code, and interpretation are all separately publishable, citable items that each contribute to one’s CV. In this brave new world PPPRs would be performed by ‘review cliques’ who build up their reputation as reliable reviewers by consistently giving high marks to science objects that go on to garner acclaim, are rarely retracted, and perform well on various meta-analytic robustness indices (reproducibility, transparency, documentation, novelty, etc.). They won’t replace or supplant pre-publication peer review. Rather we can ‘let a million flowers bloom’. I am all for a continuum of rigor, ranging from preregistered, confirmatory research with pre- and post-publication peer review, to fully exploratory, data-driven science that is simply uploaded to a repository with a ‘use at your peril’ warning. We don’t need to pit one reform tool against another; the brave new world will be a hybrid mixture of every tool we have at our disposal. Such a system would be massively transparent, but of course not perfect. We’d gain a cornucopia of new metrics by which to weigh and reward scientists, but assuredly some clever folks would benefit more than others. We need to be ready when that day comes, aware of whatever pitfalls may belie our brave new science.

Welcome to the Wild West

Honestly though, all this kind of talk is just pointless. We all have our own opinions of what will be the best way to do science, or what will happen. For my own part I am sure some version of this sci-fi depiction is inevitable. But it doesn’t matter, because the revolution is here, it’s now, and it’s changing the way we consume and produce science right before our very eyes. Every day a new preprint lands on twitter with a massive splash. Just last week, in my own field of cognitive neuroscience, a preprint on problems in cluster inference for fMRI rocked the field, threatening to undermine thousands of existing papers while generating heated discussion in the majority of labs around the world. The week before that, #cingulategate erupted when PNAS published a paper which was met with instant outcry and roundly debunked by an incredible series of thorough post-publication reviews. A multitude of high-profile fraud cases have been exposed, and careers ended, via anonymous comments on pubpeer. People are out there right now finding and sharing papers, discussing the ones that matter, and arguing about the ones that don’t. The future is now, and we have almost no idea what shape it is taking, who the players are, or what it means for the future of funding and training. We need to stop acting like this is some fantasy future 10 years from now; we have entered the wild west and it is time to discuss what that means for science.

Author’s note: In case it isn’t clear, I’m quite glad that Stephen raised the important issue of privilege. I am sure that there are problems to be rooted out and discussed along these lines, particularly in terms of the way PPPR and filtering are accomplished now in our wild west. What I object to is the idea that the future will look like it does now; we must imagine a future where science is radically improved!

[1] I’m not sure if Stephen meant high impact as I don’t know the IF of American Naturalist, maybe he just meant ‘journals I like’.

[2] Honestly this is where we need to discuss changing the hyper-capitalist system of funding and incentives surrounding publication but that is another post entirely! Maybe people wouldn’t cheat so much if we didn’t pit them against a thousand other scientists in a no-holds-barred cage match to the death.


Predictive coding and how the dynamical Bayesian brain achieves specialization and integration

Author’s note: this marks the first in a new series of journal-entry style posts in which I write freely about things I like to think about. The style is meant to be informal and off the cuff, building towards a sort of Socratic dialogue. Please feel free to argue or debate any point you like. These are meant to serve as exercises in writing and thinking, to improve the quality of both and lay groundwork for future papers.

My wife Francesca and I are spending the winter holidays vacationing in the north Italian countryside with her family. Today in our free time our discussions turned to how predictive coding and generative models can accomplish the multimodal perception that characterizes the brain. To this end Francesca asked a question we found particularly thought provoking: if the brain at all levels is only communicating forward what is not predicted (prediction error), how can you explain the functional specialization that characterizes the different senses? For example, if each sensory hierarchy is only communicating prediction errors, what explains their unique specialization in terms of e.g. the frequency, intensity, or quality of sensory inputs? Put another way, how can the different sensations be represented, if the entire brain is only communicating in one format?

We found this quite interesting, as it seems straightforward and yet the answer lies at the very basis of predictive coding schemes. To arrive at an answer we first had to lay a little groundwork in terms of information theory and basic neurobiology. What follows is a grossly oversimplified account of the basic neurobiology of perception, which serves only as a kind of philosopher’s toy example to consider the question. Please feel free to correct any gross misunderstandings.

To begin, it is clear, at least according to Shannon’s theory of information, that any sensory property can be encoded in a simple system of ones and zeros (or nerve impulses). Frequency, time, intensity, and so on can all be re-described in terms of a simplistic encoding scheme. If this were not the case then modern television wouldn’t work. Second, each sensory hierarchy presumably begins with a sensory effector, which directly transduces physical fluctuations into a neuronal code. For example, in the auditory hierarchy the cochlea contains small hairs that vibrate only to a particular frequency of sound wave. This vibration, through a complex neuro-mechanical relay, results in a tonotopic depolarization of first-order neurons in the spiral ganglion.

The human cochlea, a fascinating neuro-mechanical apparatus that directly transduces air vibrations into neural representations.

It is here at the first-order neuron where the hierarchy presumably begins, and also where functional specialization becomes possible. It seems to us that predictive coding should say that the first neuron is simply predicting a particular pattern of inputs, which correspond directly to an expected external physical property. To give a toy example, say we present the brain with a series of tones, which reliably increase in frequency in 1 Hz steps. At the lowest level the neuron will fire at a constant rate if the frequency at interval n is 1 Hz greater than at the previous interval, and will fire more or less if the frequency is greater or less than this basic expectation, creating a positive or negative prediction error (remember that the neuron should only alter its firing pattern if something unexpected happens). Since frequency here is being signaled directly by the mechanical vibration of the cochlear hairs, the first-order neuron is simply predicting which frequency will be signaled. More realistically, each sensory neuron is probably only predicting whether or not a particular frequency will be signaled – we know from neurobiology that low-level neurons are basically tuned to a particular sensory feature, whereas higher-level neurons encode receptive fields across multiple neurons or features. All this is to say that the first-order neuron is specialized for frequency because all it can predict is frequency; the only afferent input is the direct result of sensory transduction. The point here is that specialization in each sensory system arises in virtue of the fact that the inputs correspond directly to a physical property.

Presumably, first-order neurons predict the presence or absence of a particular, specialized sensory feature owing to their input. Credit: wikipedia.
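The rising-tones toy example above can be sketched in a few lines of Python (my own illustration; the function name and numbers are invented for the example, and a real neuron would of course signal error through firing rates, not a list of floats):

```python
# Toy sketch of the first-order "neuron" above: it predicts that each
# tone will be 1 Hz above the last, and signals only the prediction
# error (deviation from that expectation).

def prediction_errors(tones, expected_step=1.0):
    """Return the error signal for each tone after the first.

    Positive error = input higher than predicted, negative = lower,
    zero = fully predicted (the neuron keeps its baseline firing).
    """
    errors = []
    for previous, current in zip(tones, tones[1:]):
        predicted = previous + expected_step
        errors.append(current - predicted)
    return errors

# A rising scale that matches the expectation is fully explained away:
print(prediction_errors([100, 101, 102, 103]))  # [0.0, 0.0, 0.0]

# A surprising jump produces a transient error signal:
print(prediction_errors([100, 101, 105, 106]))  # [0.0, 3.0, 0.0]
```

The key property the sketch captures is that a perfectly predicted input produces no signal at all: only deviations from the expected pattern propagate forward.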

Now, as one ascends higher in the hierarchy, each subsequent level is predicting the activity of the previous. The first-order neuron predicts whether a given frequency is presented, the second perhaps predicts if a receptive field is activated across several similarly tuned neurons, the third predicts a particular temporal pattern across multiple receptive fields, and so on. Each subsequent level is predicting a “hyperprior” encoding a higher-order feature of the previous level. Eventually we get to a level where the prediction is no longer bound to a single sensory domain, but instead has to do with complex, nonlinear interactions between multiple features. A parietal neuron thus might predict that an object in the world is a bird if it sings at a particular frequency and has a particular bodily shape.

The motif of hierarchical message passing which encompasses the nervous system, according to the Free Energy principle.

If this general scheme is correct, then according to hierarchical predictive coding, functional specialization primarily arises in virtue of the fact that at the lowest level each hierarchy is receiving inputs that strictly correspond to a particular feature. The cochlea is picking up fluctuations in air vibration (sound), the retina is picking up fluctuations in light frequency (light), and the skin is picking up changes in thermal amplitude and tactile frequency (touch). The specialization of each system is due to the fact that each is attempting to predict higher and higher order properties of those low-level inputs, which are by definition particular to a given sensory domain. Any further specialization in the hierarchy must then arise from the fact that higher levels of the brain predict inputs from multiple sensory systems – we might find multimodal object-related areas simply because the best hyperprior governing nonlinear relationships between frequency and shape is an amodal or cross-modal object. The actual etiology of higher-level modules is a bit more complicated than this, and requires an appeal to evolution to explain in detail, but we felt this was a generally sufficient explanation of specialization.

Nonlinearity of the world and perception: prediction as integration

At this point, we felt like we had some insight into how predictive coding can explain functional specialization without needing to appeal to special classes of cortical neurons for each sensation. Beyond the sensory effectors, the function of each system can be realized simply by means of a canonical, hierarchical prediction of each layered input, right down to the neurons that predict which frequency will be signaled. However, something was still missing, prompting Francesca to ask – how can this scheme explain the coherent, multi-modal, integrated perception that characterizes conscious experience?

Indeed, we certainly do not experience perception as a series of nested predictions. All of the aforementioned machinery functions seamlessly beyond the point of awareness. In phenomenology a way to describe such influences is as being prenoetic (before knowing; see also prereflective); i.e. things that influence conscious experience without themselves appearing in experience. How then can predictive coding explain the transition from segregated, feature specific predictions to the unified percept we experience?

When we arrange sensory hierarchies laterally, we see the “Markov blanket” structure of the brain emerge. Each level predicts the control parameters of subsequent levels. In this way integration arises naturally from the predictive brain.

As you might guess, we already hinted at part of the answer. Imagine if, instead of picturing each sensory hierarchy as an isolated pyramid, we arrange them such that each level is parallel to its equivalent in the ‘neighboring’ hierarchy. On this view, we can see that relatively early in each hierarchy you arrive at multi-sensory neurons that are predicting conjoint expectations over multiple sensory inputs. Conveniently, this observation matches what we actually know about the brain; audition, touch, and vision all converge in temporo-parietal association areas.

Perceptual integration is thus achieved as easily as specialization; it arises from the fact that each level predicts a hyperprior on the previous level. As one moves upwards through the hierarchy, this means that each level predicts more integrated, abstract, amodal entities. Association areas don’t predict just that a certain sight or sound will appear, but instead encode a joint expectation across both (or all) modalities. Just like the fusiform face area predicts complex, nonlinear conjunctions of lower-level visual features, multimodal areas predict nonlinear interactions between the senses.

A half-cat and half-post, or a cat behind a post? The deep convolutional nature of the brain helps us solve this and similar nonlinear problems.

It is this nonlinearity that makes predictive schemes so powerful and attractive. To understand why, consider the task the brain must solve to be useful. Sensory impressions are not generated by simple linear inputs; for perception to be useful to an organism it must process the world at a level that is relevant for that organism. This is the world of objects, persons, and things, not of disjointed, individual sensory properties. When I watch a cat walk behind a fence, I don’t perceive it as two halves of a cat and a fence post, but rather as a cat hidden behind a fence. These kinds of nonlinear interactions between objects and properties of the world are ubiquitous in perception; the brain must solve not for the immediately available sensory inputs but rather for the complex hidden causes underlying them. This is achieved in a similar manner to a deep convolutional network; each level performs the same canonical prediction, yet together the hierarchy will extract the best hidden features to explain the complex interactions that produce physical sensations. In this way the predictive brain somersaults over the binding problem of perception; perception is integrated precisely because conjoint hypotheses are better, more useful explanations than discrete ones. As long as the network has sufficient hierarchical depth, it will always arrive at these complex representations. It’s worth noting we can observe the flip side of this process in common visual illusions, where the higher-order percept or prior “fills in” our actual sensory experience (e.g. when we perceive a convex circle as being lit from above).

Our higher-level, integrative priors “fill in” our perception.
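The idea that a higher level infers a hidden cause by minimizing prediction error can be made concrete with a deliberately minimal sketch (my own toy illustration, not code from any published model; the function names, the quadratic mapping, and all the numbers are invented for the example): a hypothesis `mu` about a hidden cause generates a prediction of the sensory input through a nonlinear mapping, and `mu` is updated by gradient descent on the squared prediction error.

```python
# A minimal "hidden cause" inference loop: predict the input from the
# current hypothesis, compute the prediction error, and nudge the
# hypothesis to reduce that error.

def infer_cause(sensory_input, generate, d_generate, mu=1.0,
                lr=0.02, steps=300):
    """Relax the hypothesis mu until it explains the input."""
    for _ in range(steps):
        error = sensory_input - generate(mu)   # prediction error
        mu += lr * error * d_generate(mu)      # descend the error
    return mu

# Assume (purely for illustration) a nonlinear mapping from hidden
# cause to sensation: sensation = cause ** 2.
cause = infer_cause(sensory_input=9.0,
                    generate=lambda m: m * m,
                    d_generate=lambda m: 2 * m)
print(round(cause, 3))  # settles near 3.0, the hidden cause of "9"
```

The point of the sketch is the direction of explanation: the unit never "decodes" the input directly; it only adjusts its hypothesis until its own prediction cancels the input, which is the sense in which the hierarchy extracts hidden causes rather than raw sensory values.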

Beating the homunculus: the dynamic, enactive Bayesian brain

Feeling satisfied with this, Francesca and I concluded our fun holiday discussion by thinking about some common misunderstandings this scheme might lead one into. For example, the notion of hierarchical prediction explored above might lead one to expect that there has to be a “top” level, a kind of super-homunculus who sits in the prefrontal cortex, predicting the entire sensorium. This would be an impossible solution; how could any subsystem of the brain possibly predict the entire activity of the rest? And wouldn’t that level itself need to be predicted, to be realised in perception, leading to infinite regress? Luckily the intuition that these myriad hypotheses must “come together” fundamentally misunderstands the Bayesian brain.

Remember that each level is only predicting the activity of that before it. The integrative parietal neuron is not predicting the exact sensory input at the retina; rather it is only predicting what pattern of inputs it should receive if the sensory input is an apple, or a bat, or whatever. The entire scheme is linked up this way; the individual units are just stupid predictors of immediate input. It is only when you link them all up together in a deep network, that the brain can recapitulate the complex web of causal interactions that make up the world.

This point cannot be stressed enough: predictive coding is not a localizationist enterprise. Perception does not come about because a magical brain area inverts an entire world model. It comes about in virtue of the distributed, dynamic activity of the entire brain as it constantly attempts to minimize prediction error across all levels. Ultimately the “model” is not contained “anywhere” in the brain; the entire brain itself, and the full network of connection weights, is the model of the world. The power to predict complex nonlinear sensory causes arises because the best overall pattern of interactions will be that which most accurately (or usefully) explains sensory inputs and the complex web of interactions which causes them. You might rephrase the famous saying as “the brain is its own best model of the world”.

As a final consideration, it is worth noting that some misconceptions may arise from the way we ourselves perform Bayesian statistics. As an experimenter, I formalize a discrete hypothesis (or set of hypotheses) about something and then invert that model to explain data in a single step. In the brain, however, the “inversion” is just the constant interplay of input and feedback across the nervous system at all levels. In fact, under this distributed view (at least according to the Free Energy Principle), neural computation is deeply embodied, as actions themselves complete the inferential flow to minimize error. Thus, just like neural feedback, actions function as ‘predictions’, generated by the inferential mechanism to render the world more sensible to our predictions. This ultimately minimises prediction error just as internal model updates do, albeit in a different ‘direction of fit’ (world to model, instead of model to world). In this way the ‘model’ is distributed across the brain and body; actions themselves are as much a part of the computation as the brain itself and constitute a form of “active inference”. In fact, if one extends this view to evolution, the morphological shape of the organism is itself a kind of prior, predicting the kinds of sensations, environments, and actions the agent is likely to inhabit. This intriguing idea will be the subject of a future blog post.

Conclusion

We feel this is an extremely exciting view of the brain. The idea that an organism can achieve complex intelligence simply by embedding a simple repetitive motif within a dynamical body seems to us to be a fundamentally novel approach to the mind. In future posts and papers, we hope to further explore the notions introduced here, considering questions about “where” these embodied priors come from and what they mean for the brain, as well as the role of precision in integration.

Questions? Comments? Feel like I’m an idiot? Sound off in the comments!

Further Reading:

Brown, H., Adams, R. A., Parees, I., Edwards, M., & Friston, K. (2013). Active inference, sensory attenuation and illusions. Cognitive Processing, 14(4), 411–427. http://doi.org/10.1007/s10339-013-0571-3
Feldman, H., & Friston, K. J. (2010). Attention, Uncertainty, and Free-Energy. Frontiers in Human Neuroscience, 4. http://doi.org/10.3389/fnhum.2010.00215
Friston, K., Adams, R. A., Perrinet, L., & Breakspear, M. (2012). Perceptions as Hypotheses: Saccades as Experiments. Frontiers in Psychology, 3. http://doi.org/10.3389/fpsyg.2012.00151
Friston, K., & Kiebel, S. (2009). Predictive coding under the free-energy principle. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 364(1521), 1211–1221. http://doi.org/10.1098/rstb.2008.0300
Friston, K., Thornton, C., & Clark, A. (2012). Free-Energy Minimization and the Dark-Room Problem. Frontiers in Psychology, 3. http://doi.org/10.3389/fpsyg.2012.00130
Moran, R. J., Campo, P., Symmonds, M., Stephan, K. E., Dolan, R. J., & Friston, K. J. (2013). Free Energy, Precision and Learning: The Role of Cholinergic Neuromodulation. The Journal of Neuroscience, 33(19), 8227–8236. http://doi.org/10.1523/JNEUROSCI.4255-12.2013

 


How useful is twitter for academics, really?

Recently I was intrigued by a post on twitter conversion rates (e.g. the likelihood that a view of your tweet results in a click on the link) by journalist Derek Thompson at the Atlantic. Derek writes that although using twitter gives him great joy, he’s not sure it results in the kind of readership his employers would feel merits the time spent on the service. Derek found that even his most viral tweets only resulted in a conversion rate of about 3% – on par with the click-through rate of East Asian display ads (i.e. quite poor by media-world standards). Using the recently released twitter metrics, Derek found an average conversion of around 1.5%, with the best posts hitting the 3% ceiling. Ultimately he concludes that twitter seems to be great at generating buzz within the twitter-sphere but performs poorly at translating that buzz into external influence.

This piqued my curiosity, as it definitely reflected my own experience tweeting out papers and tracking the resultant clicks on the actual paper itself. However, the demands of academia are quite different from those of corporate media. In my experience ‘good’ posts result in exactly that 2–3% conversion rate, or about 30 clicks on the DOI link for every 1000 views. A typical post I consider ‘successful’ will net about 5–8k views and thus 150–200 clicks. Below are some samples of my most ‘successful’ paper tweets this month, with screen grabs of the twitter analytics for each:
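The arithmetic behind those figures is simple enough to sanity-check in a couple of lines (the helper name is my own; the impression counts and rates are the ones quoted above):

```python
# Quick check of the conversion-rate arithmetic: clicks are just
# impressions times the conversion rate, rounded to whole clicks.

def link_clicks(impressions, conversion_rate):
    """Expected link clicks for a tweet, rounded to whole clicks."""
    return round(impressions * conversion_rate)

# "about 30 clicks on the DOI link for every 1000 views":
print(link_clicks(1000, 0.03))   # 30

# A 'successful' post of 5-8k views at a 2-3% conversion rate
# brackets the 150-200 click range quoted above:
print(link_clicks(5000, 0.03))   # 150
print(link_clicks(8000, 0.025))  # 200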

[Screenshots of the twitter analytics for each of the four paper tweets]

Sharing each of these papers resulted in a conversion rate of about 2%, roughly in line with Derek’s experience. These are all what I would consider ‘successful’ shares, at least for me, with > 100 engagements each. You can also see that, in total, external engagement (i.e., clicking the link to the paper) falls below ‘internal’ engagement (likes, RTs, expands, etc.). So it does appear that on the whole twitter shares may generate a lot of internal ‘buzz’ but not necessarily reach very far beyond twitter.
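As a back-of-the-envelope check, the conversion arithmetic above is simple enough to script. The numbers below are illustrative placeholders, not the actual figures from the analytics screenshots:

```python
# Conversion rate = external link clicks / total views.
# Illustrative numbers only; the real per-tweet figures live in
# the twitter analytics screenshots above.
tweets = [
    {"views": 6500, "link_clicks": 140, "internal_engagements": 190},
    {"views": 5200, "link_clicks": 100, "internal_engagements": 150},
]

for t in tweets:
    conversion = 100 * t["link_clicks"] / t["views"]
    print(f"views={t['views']}, conversion={conversion:.1f}%, "
          f"internal engagements={t['internal_engagements']}")
```

With these placeholder numbers, both tweets land around the 2% mark – a ‘successful’ share by the standard described above.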

For a corporate sponsor, these conversion rates are unacceptable. But for academics, I would argue the ceiling of the genuinely interested audience is somewhere around 200 people, which corresponds pretty well with the paper clicks generated by successful posts. Academics are so highly specialized that I’d wager citation success is really about making sure your paper falls into the right hands, rather than reaching people working in totally different areas. I’d suggest that even for landmark ‘high impact’ papers, eventual success will still be predicted more by the adoption of your select core peer group (i.e. the other scientists who study vegetarian dinosaur poop in the Himalayan range). Anecdotally, I find papers that grab my interest on twitter more regularly than through any other channel. Moreover, unlike ‘self-found’ papers, those found on twitter seem more likely to be outside my immediate wheelhouse – statistics publications, genetics, etc. This is an important, if difficult-to-quantify, type of impact.

In general, we have to ask what exactly a 2-3% conversion rate is worth. If 200 people click my paper link, are any of them actually reading it? To probe this a bit further, I used twitter’s new poll tool, which recently added support for multiple options, to ask my followers how often they read papers found on twitter:

[Screenshot of the twitter poll results]

As you can see, out of 145 responses, more than 80% in total said they read papers found on twitter “Occasionally” (52%) or “Frequently” (30%). If these numbers are at all representative, they are pretty reassuring for the academic twitter user. Roughly 45 of the ~150 respondents say they read papers found on twitter “Frequently”, suggesting the service has become a major source of interesting papers, at least among its users. Altogether, my takeaway is that while you shouldn’t expect to beat the general 3% curve, the ability to get your published work onto the desks of as many as 50-100 members of your core audience is pretty powerful. That is a more tangible result than ‘engagements’ or conversion rates.
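Turning the poll percentages back into rough respondent counts is straightforward (a quick sketch using the 145 responses and the percentages reported above):

```python
# Poll: 145 responses; "Frequently" 30%, "Occasionally" 52%.
responses = 145
shares = {"Frequently": 30, "Occasionally": 52}  # percent

# Approximate respondent counts: ~44 "Frequently", ~75 "Occasionally".
counts = {label: round(responses * pct / 100)
          for label, pct in shares.items()}
print(counts)

at_least_occasionally = sum(counts.values())
print(f"{at_least_occasionally} of {responses} read papers found "
      f"on twitter at least occasionally")
```

So on the order of 120 of the 145 respondents report reading papers found on twitter at least occasionally.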

Finally, it’s worth noting that the picture of how all this relates to citation behavior is murky at best. A quick surf of the scientific literature correlating citation rate with social media exposure is inconclusive. Two papers found by Neurocritic are exemplars of my impression of this literature, with one claiming a large effect size and the other claiming none at all. In the end, I suspect how useful twitter is for sharing research depends on several factors, including your field (e.g. probably more useful for machine learning than organic chemistry) and something I’d vaguely define as ‘network quality’. Ultimately I suspect the rule is quality of followers over quantity; if your end goal is to get your papers into the hands of around 200 engaged readers (which twitter can do for you), then having a network that actually includes those people is probably worth more than being a ‘Kardashian’ of social media.

 

Integration dynamics and choice probabilities

Very informative post – “Integration dynamics and choice probabilities”

Pillow Lab Blog

Recently in lab meeting, I presented

Sensory integration dynamics in a hierarchical network explains choice probabilities in cortical area MT

Klaus Wimmer, Albert Compte, Alex Roxin, Diogo Peixoto, Alfonso Renart & Jaime de la Rocha. Nature Communications, 2015

Wimmer et al. reanalyze and reinterpret a classic dataset of neural recordings from MT while monkeys perform a motion discrimination task. The classic result shows that the firing rates of neurons in MT are correlated with the monkey’s choice, even when the stimulus is the same. This covariation of neural activity and choice, termed choice probability, could indicate sensory variability causing behavioral variability or it could result from top-down signals that reflect the monkey’s choice. To investigate the source of choice probabilities, the authors use a two-stage, hierarchical network model of integrate and fire neurons tuned to mimic the dynamics of MT and LIP neurons and compare the model to what they find…
