How to Make Twitter Sciency Again

If you are like most scientists on Twitter, you are probably hankering for the pre-Trump era, when the platform was actually a fun tool for taking in the latest science news rather than a front-row seat to the apocalypse. In this post I will point out a simple tool you can use to get that experience back, with the firm caveat that you should remain politically engaged and active. Twitter once fulfilled a useful and much-needed community service, at which it is currently failing. But effective resistance also requires you to carry on with your day job and not go totally mad. So use this to exert conscious control over when and how you take in news, and to get back that refreshing 'mostly science' Twitter experience. Don't worry: the mobile version of Twitter will still be an unrelenting hellstorm of bad news.

TweetDeck Political Blacklist

The trick is simple: install the TweetDeck extension or web app, or just navigate to it in your browser. If you've never used TweetDeck, it was (in my opinion) the single best Twitter app before Twitter purchased it. Not much new development has happened since, but the existing features are still excellent. Go ahead and set up some columns however you see fit. The feature you are most interested in is the 'Mute' blacklist, which lets you specify lists of terms that TweetDeck will filter from your feeds:

Your TweetDeck settings are found in the bottom left corner of the application.


Enter a list of words; any tweets containing those words or phrases will not show up in your feeds.

That's it! You may need to play around with the filter phrases; I used a combination of politicians' names ('trump', 'bannon', 'clinton', 'corbyn', etc.), big scandals, and current events (e.g., #brexit). You will probably also want to turn off the auditory notifications, which TweetDeck turns on by default and which can be rather annoying. Now you can surf the latest science news without the second-by-second updates that make you want to move to New Zealand.
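If you're curious what the mute list is doing under the hood, the logic is just keyword filtering. Here's a minimal Python sketch of the idea (the terms and tweets are made up, and TweetDeck's actual matching rules may well differ):

```python
# Toy mute filter: drop any tweet whose text contains a blacklisted term.
# Case-insensitive substring matching is assumed here for illustration.
BLACKLIST = ["trump", "bannon", "clinton", "corbyn", "#brexit"]

def is_muted(tweet_text, blacklist=BLACKLIST):
    """Return True if the tweet contains any blacklisted term."""
    text = tweet_text.lower()
    return any(term in text for term in blacklist)

def filter_feed(tweets, blacklist=BLACKLIST):
    """Keep only the tweets that survive the mute filter."""
    return [t for t in tweets if not is_muted(t, blacklist)]

feed = [
    "New fMRI study on shamanic trance states!",
    "BREAKING: Trump signs new executive order",
    "Preprint: arousal shapes decision confidence",
]
print(filter_feed(feed))  # the political tweet is dropped
```

The takeaway: a mute list is a blunt instrument, so expect to iterate on your phrase list until the signal-to-noise ratio feels right.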

Once again folks, use this responsibly! I use it for my Twitter breaks when I'm at work (for 'productive procrastination'), and since it doesn't work on mobile anyway, I still stay up to date, at a pace of my choosing. Exerting deliberate control over your news intake is a key element of #persistent #resistance.

Edit: A reader on Twitter points out that Plume offers the same ability on a mobile platform!



A Researcher’s Guide to the #Resistance

Note: the bottom of this post will be continuously updated with resources and action links. Please add any useful resources in the comments, and they will be added to the list.


This topic needs no introduction; if you are not already aware of the crisis and political turmoil, I'm not sure this document could reach you anyway. This is for the woke scientist, scholar, and academic ready to fight fear with resistance. I'm not exactly sure how best to arrange this document, but it must be written. My goal is less to review the state of affairs, of which I'm sure you are aware, than to provide concrete tips and guidelines so that you can break free from 'oh-dearism' and leap into action.

With that in mind, let's break this down into a few sections:

  1. No action is too small.

On the progressive left, particularly among intellectuals, we have a history of infighting over which action is the best action. While there is merit to our culture of critical thought and inquiry in a democratic society, we are now past the time when such a response is enough. Fascism is at our doorstep, and we must organize a coalition of all those willing to rise up. This means that no act of resistance is too small. I know from my interactions that the majority of academics are seriously worried about what they see in the daily news. We know that democratic principles are under serious threat, but we're unsure how to respond. Academic life is a rat race, and few feel ready to dedicate hours of their day to a cause of whose efficacy they are unsure.

But this is exactly the attitude that our opponents are counting on. They are happy for us to share the latest outrages within our filter bubble, knowing we are too caught up in our daily lives to translate that outrage into action. With this in mind, it is imperative that we embrace any action. Intellectuals and academics have important skills to contribute to resistance; as the tenders and growers of knowledge we have a social duty to speak out against fascism. What we face now is nothing less than an existential threat to our craft and culture.

It is with this in mind that I earnestly beseech my colleagues and collaborators to cast aside doubt and one-upmanship. In times like these, you can’t condition your voice and action on the probability of success. Instead we need to organize together and present a unified resistance to Trumpism.

Of course, we all have busy lives to attend to. Our professional and personal commitments do not pause while we attend to democracy. So I urge everyone to seek out the causes and actions which lie closest to their home and heart. Don't waste time asking whether your action is likely to succeed. If you feel strongly about the gagging of scientists, join a pro-science march. Write to your MPs and professional societies asking them to publicly denounce such activities. Whatever the cause – women's rights, social justice, the abuse and discrimination of people of color – leap into action with whatever help you can give. These movements need your talents: your thought, your code, your data, your critical thinking and skill at debate, your ability to communicate complex ideas in useful packages. They need these things as much as, if not more than, your time and money – although you should also not hesitate to join the common foot soldier in standing up. Everyone's rights are under threat, which brings me to my next point: the resistance must blossom everywhere.

  2. Neo-fascism is a global movement; the resistance must be global too.

Make no mistake: the current wave of authoritarian politics is not constrained to any single nation. It neither begins nor ends with Trump. In Europe, far-right movements are springing up like weeds in every corner of the continent. As such, the movement to resist must also be global. We must show our politicians that an injustice anywhere is an injustice everywhere, and that we will not stand for appeasement.

In eras gone by, the academic 'intelligentsia' played a critical role in shaping democratic reform movements. Such movements require leadership, critical thought, writing, and other skills that are the tradecraft of the academic. Today, globalization has spread our kind far and wide, and most of us have too few local 'resistance' contacts. Do you know who your local union leaders are? Or where to go to work with women's rights groups? Many of us are expatriated from our home countries and may feel unsure whether it is our place to fight in local political movements. As an American, should I join anti-Brexit movements? Should I vote in local politics? Or is this an intrusion?

The global nature of today's oppression means that our resistance must also be global. This is tricky, because ultimately we each have the most influence in our local communities. If we are to unite and defend democracy, we must overcome our isolation and build a global movement with one cause célèbre: the defense of freedom. We must connect our global ties with our local leaders to rouse the slumbering giant of the concerned majority. To do so, it is vitally important that academics, scientists, and researchers reach out to their local communities and work to build politically active networks.

  3. We must organize and unite.

To overcome this isolation, which breeds inaction, we must organize. Today's academic is woefully isolated. Many of us have moved continually from place to place, so our local networks are typically impoverished while our global networks are quite rich. We must shore up this weakness while capitalizing on our strengths. That means we need to start talking amongst ourselves, in the workplace and outside it. Don't just stand at the water cooler saying 'oh dear, it's really all quite terrible, isn't it?' This only contributes to the feeling of paralysis. Instead, take literally any action that builds a community around you. Organize a local action group at your university. Build a Facebook group around your international network. Find a cause that excites you and dedicate an hour or two a week to working with it. Go to marches! Marches are an important way to build community. Once you find a group of people dedicated to the resistance, offer your services to them. They are likely to be desperate for people with professional skills.

  4. Any resistance is effective resistance

By now you have probably asked: but what can I really do? Will it really matter? I know how easy it is to fall into despair. After Brexit and the Trump election, I felt a deep darkness as never before. The triumph of fascism and radical capitalism seemed inevitable. I wallowed in self-pity, watching the unending tide of bad news, shouting 'I told you so!'. This is totally ineffective and will make you and your colleagues feel horrible.

Here is the thing: resistance is not about winning and losing. It's about standing up for a moral cause; about drawing a line in the sand and saying, "here I will go no further". To paraphrase MLK Jr: "if you do nothing out of fear, then you are already dead inside". History will judge us by the action we take in the coming weeks and months. Are you going to wait until someone drags a friend or coworker out of bed? Would you be more likely to stand up and fight knowing that the boots of fascism were already at your door?

Now is the time to stand. As an academic, you have many ways to fight. Chances are you've been through at least a decade of advanced training in skills that are vital to any democratic movement. So shake off the chains of defeatism and DO SOMETHING! Your resistance, no matter how small or focused, sends a message to your friends and colleagues. It tells the oppressed that no, you will not stand idly by as they are persecuted. Trust me: you will sleep better and breathe more easily with each and every action you take.

  5. Self-Care

As a researcher/scientist/academic, you were likely already on the edge of burnout before our world imploded. For your resistance to be sustained, it must be self-nurturing. While marching and acting can be an effective way to retain a feeling of control, they must be moderated by self-care and practical constraints. This means regulating your information intake and being disciplined about how and when you resist.

A few practical tips: reward yourself for effective action. If you go to a march, or write an essay on the evils of fascism, also take time out to relax. Read a good book, play some videogames, go for a walk. Take time to remember what it is you are fighting for. This goes double for social media. By this point it is probably clear that Twitter, Facebook, and similar outlets are going to be a never-ending stream of bad news, as well as an organizing hub for the resistance. While it's vital that you participate in these forums and remain well informed, you can also easily burn yourself out. Set specific times of day when you take in the latest news and social media, and other times when you turn off these inputs and work on your own things. Again, if you are engaged in concrete action, there is no reason to feel guilty about taking time for yourself and your work.

  6. Use your time wisely – do not feed the trolls

While it might be an effective way to blow off steam, I recommend avoiding the pro-Trump/Brexit/Le Pen trolls entirely. I know this isn't easy for most of us, as we want to believe that free debate and the exchange of ideas can solve most of the world's problems. The issue is that we are also fighting an unprecedented information war. Remember that 2.5 million more people voted for Hillary than for Trump. We have the moral high ground here: we're fighting against fascism, and they are fighting for it. Not only is it unnecessary to convince these people, it is almost certainly impossible. What we need now is to build an effective resistance; the authoritarians will either realize the error of their ways and join us, or be judged by history accordingly. The sad truth is that many of these accounts are likely fake, 'astroturfed' trolls paid to support the radical-right agenda. It just isn't worth your time and energy; research suggests that arguing with these people may actually strengthen their resolve. This also applies to the far left – those who voted for Jill Stein because Hillary was 'the same as Trump'. We need to focus on turning out the moderate, silent majority, who are appalled at what they see in the news but have no idea how to stop it.

  7. Concrete action

Hopefully by now you are on your feet, ready to act. So what CAN you do? First, you need to choose a domain of resistance. The best thing you can do is find forms of sustainable resistance. You can't afford to go out and lose your job; that only reduces the longevity and depth of your possible action. The first step to effective action is therefore to select a cause which is geographically and morally closest to you and your heart. This will help you build local roots and a community from which to grow your action. It will keep you motivated and prevent the tendency towards defeatism. We are all only human; for a resistance to be sustained it must come from a wellspring of the heart. It should enrich and grow your wellbeing, not sacrifice it[1].

With that in mind, here are some concrete ideas for how you can best resist:

  • Write. As an academic, you likely have a talent for thoughtful and persuasive writing. Write letters to your local newspaper, to your political representative, on your blog, on Facebook. Don't just spread alarmism; state with force your opposition to concrete policies. Advocate for clear and decisive action. Lobby your representatives frequently and let them know that you and your colleagues will be voting and donating in kind. If you are an academic of prestige or status, don't leave that part out. Use your voice to provide the movement with clear and concrete leadership.
  • Call. Right now, go find the phone number for your local representative. If you are in the US, it's important that you call your specific reps even if they are not from your party. It is important not just to call once, but to call repeatedly. Set a schedule to make a phone call once a week, to give your representative an earful. If they are Democrats, insist that they refuse to give a single inch to the GOP. If they are GOP, let them know loudly that you disagree with their actions. Before calling, consider reading this excellent guide, which can help you understand how to make your voice heard most effectively. Calling and/or writing to your representative is an easy way to make a difference, and it doesn't need to take more than 15 minutes a week.
  • Organize. Reach out to your local movement of choice, and get out there and help them. It isn't enough to just tweet and share. These groups are desperate for help in a variety of ways, at least some of which you are probably skilled in. They need slogans, leadership, debate, and good copy. You could, for example, dedicate one week a month to lending some expertise to these groups. At the very least: march. It shows solidarity and helps us all feel less alone. Don't forget to also write to and lobby your existing scientific organizations; if enough of us pressure our professional societies to take a stand, it can have a massive effect.
  • Code. Are you a data scientist? A web developer? A social media socialite? This is the 21st century: our resistance doesn't have to take the form of just calls and letters. Chances are your technical skills are in high demand. Web apps to organize; OPSEC documents to protect activist privacy; data science to analyze and optimize resistance. Our opponents are winning in part because they are using data science and social media to overwhelm traditional outlets. Your ability to process, analyze, interpret, or communicate data can be invaluable.
  • Donate. Choose at least one professional organization and consider making a monthly, recurring donation. My choice is the ACLU, as they have already shown an ability to fight Trumpism in the judiciary. But there is no shortage of causes needing your help. In addition to the above, consider getting in contact with local organizations to see if you can offer concrete help.
  • Teach. As an academic, you have a lot of experience teaching to an audience. Within the boundaries of your university's ethics, use that podium for good. Get your students to consider ways they can become politically active. Chances are your university has local political clubs (e.g., anti-war, pro-privacy, environmental) in need of your sponsorship or leadership. If not, consider starting one. Your critical thought and rhetorical skills can help motivate the youth to the streets and the polls.
  • Vote. There is still a chance to stop this at the ballot box, but only if we get out there and help opposition parties. We must stop infighting and start supporting politicians who resist. The Tea Party effectively stymied Obama, one of the most popular politicians in recent history, by implementing a simple, unified vision of resistance: any politician who worked with Obama, they primaried; any GOP member who voiced strong opposition, they supported. We must adopt these techniques. Any politician who shows any hint of appeasement must be opposed. Consider getting involved in your local political groups to maximize your impact on your local MP/representative/etc. Start a Facebook group of people you know, to make sure you are all voting in local and midterm elections. We must fight fire with fire, and the best way to do this is to democratically stymie pro-fascist movements from the ground up.
  • Science. As a scientist, just sharing your data with the public is an act of resistance. The authoritarians seek to control the very flow of information itself. Reaching out to share your data and scientific knowledge is thus a powerful form of resistance. Use whatever data you have to back up the movement. Remember that we must always keep the high ground of truth on our side.
  • Create. Use your creativity to write poetry and speeches, and to paint pictures of resistance. Craft t-shirts and colourful poster art. A resistance is sustained by its art; in this time of need we must unleash our most creative instincts in the fight for democracy.


  8. The future, and closing thoughts.

Many of us likely feel some degree of guilt: how could we have let this happen? There is no doubt that we became so caught up in the daily machinery of productivity that we grew complacent. But it is never too late to act, so long as one remains free to do so. There will be a time for careful debate and self-recrimination. It speaks volumes that many of us are only now awakening to the dire situation of civilization. We owe people of color, gay and trans people, immigrants, and all the oppressed an apology for our complacency. But let us make it standing side by side with them in the #resistance. Now we must act together, in hopes of retaining our freedom to dissent.

Ultimately, we must look with hope towards the future. No matter how dark the headlines become, we must know that we will stand together in solidarity. Each day, look deep inside and stoke that fire of resistance; know that our goal must be not only to push back the darkness of fascism, but to stem the wound from which it arose. We must build a better society, together.

Resources for the Resistant Academic (continuously updated):

Organizations Worth Donating To – consider a recurring $5 donation! That cup of coffee can help sustain the fight.


Staying safe when resisting:

Surveillance Self-Defense:


Practical guides to resistance:

Weekly action Checklist:

Excellent tool to organize daily calls to your representative:

How to be your own light in an authoritarian crisis:

Rules for a Constitutional Crisis:


A Trump Resistance Guide

Impeaching the president – a primer:

An activist's guide to exploiting the media

Useful media and tweets:


Data for Democracy – a great portal for data scientists who want to resist

Upcoming Marches

The march for science:

Huge anti-Brexit march:

Meet the scientists affected by the Muslim Travel Ban:

Resistance Leaders – must-follow voices:

Operational Security (OPSEC):


A resistance playlist – for when you need some morale-building resistance songs!

[1] Note that this only applies in some cases. If the brown shirts are coming for your neighbor, you may have to choose between submission and self-sacrifice. It is important to decide now how you will act in this seemingly absurd, but not improbable, scenario. But for now I urge you to take effective and sustainable action, for all is not yet lost.


Unexpected arousal shapes confidence – blog and news coverage

For those looking for a good summary of our recent publication, several outlets gave us solid coverage for expert and non-expert alike. Here is a short summary of the most useful write-ups:

The eLife digest itself was excellent – make sure to fill out the survey at the end to let eLife know what you think of the digests (I love them).

via Arousing confidence – Brains and Behaviour – Medium

As you read the words on this page, you might also notice a growing feeling of confidence that you understand their meaning. Every day we make decisions based on ambiguous information and in response to factors over which we have little or no control. Yet rather than being constantly paralysed by doubt, we generally feel reasonably confident about our choices. So where does this feeling of confidence come from?

Computational models of human decision-making assume that our confidence depends on the quality of the information available to us: the less ambiguous this information, the more confident we should feel. According to this idea, the information on which we base our decisions is also the information that determines how confident we are that those decisions are correct. However, recent experiments suggest that this is not the whole story. Instead, our internal states — specifically how our heart is beating and how alert we are — may influence our confidence in our decisions without affecting the decisions themselves.

To test this possibility, Micah Allen and co-workers asked volunteers to decide whether dots on a screen were moving to the left or to the right, and to indicate how confident they were in their choice. As the task became objectively more difficult, the volunteers became less confident about their decisions. However, increasing the volunteers’ alertness or “arousal” levels immediately before a trial countered this effect, showing that task difficulty is not the only factor that determines confidence. Measures of arousal — specifically heart rate and pupil dilation — were also related to how confident the volunteers felt on each trial. These results suggest that unconscious processes might exert a subtle influence on our conscious, reflective decisions, independently of the accuracy of the decisions themselves.

The next step will be to develop more refined mathematical models of perception and decision-making to quantify the exact impact of arousal and other bodily sensations on confidence. The results may also be relevant to understanding clinical disorders, such as anxiety and depression, where changes in arousal might lock sufferers into an unrealistically certain or uncertain world.

The PNAS journal club also published a useful summary, including some great quotes from Phil Corlett and Rebecca Todd:

via Journal Club: How your body feels influences your confidence levels | National Academy of Sciences

… Allen’s findings are “relevant to anyone whose job is to make difficult perceptual judgments trying to see signal in a lot of noise,” such as radiologists or baggage inspectors, says cognitive neuroscientist Rebecca Todd at the University of British Columbia in Vancouver, who did not take part in the research. Todd suggests that people who apply decision-making models to real world problems need to better account for the influence of internal or emotional states on confidence.

The fact that bodily states can influence confidence may even shed light on mental disorders, which often involve blunted or heightened signals from the body. Symptoms could result from how changes in sensory input affect perceptual decision-making, says cognitive neuroscientist and schizophrenia researcher Phil Corlett at Yale University, who did not participate in this study.

Corlett notes that some of the same ion channels involved in regulating heart rate are implicated in schizophrenia as well. “Maybe boosting heart rate might lead people with schizophrenia to see or hear things that aren’t present,” he speculates, adding that future work could analyze how people with mental disorders perform on these tasks…

I also wrote a blog post summarizing the article for The Conversation:

via How subtle changes in our bodies affect conscious awareness and decision confidence

How do we become aware of our own thoughts and feelings? And what enables us to know when we’ve made a good or bad decision? Every day we are confronted with ambiguous situations. If we want to learn from our mistakes, it is important that we sometimes reflect on our decisions. Did I make the right choice when I leveraged my house mortgage against the market? Was that stop light green or red? Did I really hear a footstep in the attic, or was it just the wind?

When events are more uncertain, for example if our windscreen fogs up while driving, we are typically less confident in what we’ve seen or decided. This ability to consciously examine our own experiences, sometimes called introspection, is thought to depend on the brain appraising how reliable or “noisy” the information driving those experiences is. Some scientists and philosophers believe that this capacity for introspection is a necessary feature of consciousness itself, forging the crucial link between sensation and awareness.

One important theory is that the brain acts as a kind of statistician, weighting options by their reliability, to produce a feeling of confidence more or less in line with what we’ve actually seen, felt or done. And although this theory does a reasonably good job of explaining our confidence in a variety of settings, it neglects an important fact about our brains – they are situated within our bodies. Even now, as you read the words on this page, you might have some passing awareness of how your socks sit on your feet, how fast your heart is beating or if the room is the right temperature.

Even if you were not fully aware of these things, the body is always shaping how we experience ourselves and the world around us. That is to say experience is always from somewhere, embodied within a particular perspective. Indeed, recent research suggests that our conscious awareness of the world is very much dependent on exactly these kinds of internal bodily states. But what about confidence? Is it possible that when I reflect on what I’ve just seen or felt, my body is acting behind the scenes? …

The New Scientist took an interesting angle not explored in the other write-ups, and also included a good response from Ariel Zylberberg:

via A bit of disgust can change how confident you feel | New Scientist

“We were tricking the brain and changing the body in a way that had nothing to do with the task,” Allen says. In doing so, they showed that a person’s sense of confidence relies on internal as well as external signals – and the balance can be shifted by increasing your alertness.

Allen thinks the reaction to disgust suppressed the “noise” created by the more varied movement of the dots during the more difficult versions of the task. “They’re taking their own confidence as a cue and ignoring the stimulus in the world.”

“It’s surprising that they show that confidence can be motivated by processes inside a person, instead of what we tend to believe, which is that confidence should be motivated by external things that affect a decision,” says Ariel Zylberberg at Columbia University in New York. “Disgust leads to aversion. If you try a food and it’s disgusting, you walk away from it,” says Zylberberg. “Here, if you induce disgust, high confidence becomes lower and low confidence becomes higher. It could be that disgust is generating this repulsion.”

It is not clear whether it is the feeling of disgust that changes a person’s confidence in this way, or whether inducing alertness with a different emotion, such as anger or fear, would have the same effect.

You can find all the coverage for our article using these excellent services, altmetric & ImpactStory.

Thanks to everyone who shared, enjoyed, and interacted with our research!


fMRI study of Shamans tripping out to phat drumbeats

Every now and then, I'm browsing RSS on the tube commute and come across a study that makes me laugh out loud. This of course results in me receiving lots of 'tuts' from my co-commuters. Anyhow, the latest such entry to the world of cognitive neuroscience is a study examining brain responses to drum beats in shamanic practitioners. Michael Hove and colleagues at the Max Planck Institute in Leipzig set out to study "Perceptual Decoupling During an Absorptive State of Consciousness" using functional magnetic resonance imaging (fMRI). What exactly does that mean? Apparently: looking at how brain connectivity in 'experienced shamanic practitioners' changes when they listen to rhythmic drumming. Hove and colleagues explain that across a variety of cultures, 'quasi-isochronous drumming' is used to induce 'trance states'. If you've ever danced around a drum circle in the full moonlight, or tranced out to Shpongle in your living room, I guess you get the feeling, right?

Anyway, Hove et al. recruited 15 participants trained in "core shamanism," described as:

“a system of techniques developed and codified by Michael Harner (1990) based on cross-cultural commonalities among shamanic traditions. Participants were recruited through the German-language newsletter of the Foundation of Shamanic Studies and by word of mouth.”

They then played these participants rhythmic isochronous drumming (the trance condition) versus drumming with more irregular timing. In what might be the greatest use of a Likert scale of all time, participants rated whether they "would describe [their] experience as a deep shamanic journey" (1 = not at all; 7 = very much so), and indeed described the trance condition as, well, more trancey. Hove and colleagues then used a fairly standard connectivity analysis, examining eigenvector centrality differences between the two drumming conditions, as well as seed-based functional connectivity:



Hove et al. report that, compared to the non-trance condition, the posterior/dorsal cingulate, insula, and auditory brainstem regions become more 'hublike', as indicated by a higher overall degree centrality of those regions. Further, these regions showed stronger functional connectivity with the posterior cingulate cortex. I'll let Hove and colleagues explain what to make of this:

“In sum, shamanic trance involved cooperation of brain networks associated with internal thought and cognitive control, as well as a dampening of sensory processing. This network configuration could enable an extended internal train of thought wherein integration and moments of insight can occur. Previous neuroscience work on trance is scant, but these results indicate that successful induction of a shamanic trance involves a reconfiguration of connectivity between brain regions that is consistent across individuals and thus cannot be dismissed as an empty ritual.”
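For readers unfamiliar with the method, the centrality idea behind this analysis is easy to sketch. The following toy Python example (random data and a simple correlation-based connectivity matrix, purely for illustration; this is not the study's actual pipeline) scores each 'region' by the leading eigenvector of its connectivity matrix, so that regions coupled to other well-connected regions come out as the most 'hublike':

```python
import numpy as np

# Toy eigenvector centrality on a functional connectivity matrix.
# Regions and time series here are random, for illustration only.
rng = np.random.default_rng(0)
ts = rng.standard_normal((200, 5))            # 200 timepoints x 5 regions
conn = np.abs(np.corrcoef(ts, rowvar=False))  # nonnegative 5x5 connectivity

# The leading eigenvector of a nonnegative symmetric matrix gives each
# node a score proportional to the scores of its neighbors ("hubness").
eigvals, eigvecs = np.linalg.eigh(conn)       # eigenvalues in ascending order
centrality = np.abs(eigvecs[:, -1])           # leading eigenvector
centrality /= centrality.sum()                # normalize to sum to 1
print(centrality)                             # higher = more "hublike"
```

Degree centrality, by contrast, simply sums each region's connections; the eigenvector version additionally weights each neighbor by its own centrality, which is why it picks out hubs of hubs.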

Ultimately the authors’ conclusion seems to be that these brain connectivity differences show that, if nothing else, something must be ‘really going on’ in shamanic states. To be honest, I’m not really sure anyone disagreed with that to begin with. I can’t consider this study without thinking of early (and ongoing) meditation research, in which esoteric monks are placed in scanners to show that ‘something really is going on’ in meditation. This argument seems to me to rely on a folk-psychological misunderstanding of how the brain works. Even in placebo conditioning, a typical example of a ‘mental effect’, we know of course that changes in the brain are responsible. Every experience (regardless of how complex) has some neural correlate. The trick is to relate these neural factors to behavioral ones in a way that actually advances our understanding of the mechanisms and experiences that generate them. The difficulty with these kinds of studies is that all we can do is perform reverse inference to try and interpret what is going on; the authors’ conclusion about changes in sensory processing is a clear example of this. What do changes in brain activity actually tell us about trance (and other esoteric) states? Certainly they don’t reveal any particular mechanism or phenomenological quality unless they are coupled to some meaningful understanding of the states themselves. As a clear example, we’re surely pushing reductionism to its limit by asking participants to rate a self-described transcendent state on a unidirectional Likert scale. The authors do cite Francisco Varela (a pioneer of neurophenomenological methods), but don’t seem to further consider these limitations or possible future directions.

Overall, I don’t want to seem overly critical of this amusing study. Shamanic traditions are certainly a deeply important part of human cultural history, and understanding how they impact us emotionally, cognitively, and neurologically is a valuable goal. For what amounts to a small pilot study, the protocols seem fairly standard from a neuroscience standpoint. I’m less certain about who these ‘shamans’ actually are, in terms of what their practice constitutes, or how to think about the supposed ‘trance states’, but I suppose ‘something interesting’ was definitely going on. The trick is knowing exactly what that ‘something’ is.

Future studies might thus benefit from a more direct characterization of esoteric states and the cultural practices that generate them, perhaps through collaboration with anthropologists and/or the application of phenomenological and psychophysical methods. For now, however, I’ll just have to head to my local drum circle and vibe out the answers to these questions.

Hove MJ, Stelzer J, Nierhaus T, Thiel SD, Gundlach C, Margulies DS, Van Dijk KRA, Turner R, Keller PE, Merker B (2016) Brain Network Reconfiguration and Perceptual Decoupling During an Absorptive State of Consciousness. Cerebral Cortex 26:3116–3124.



Mapping the effects of age on brain iron, myelination, and macromolecules – with data!

The structure, function, and connectivity of the brain changes considerably as we age1–4. Recent advances in MRI physics and neuroimaging have led to the development of new techniques which allow researchers to map quantitative parameters sensitive to key histological brain factors such as iron and myelination5–7. These quantitative techniques reveal the microstructure of the brain by leveraging our knowledge about how different tissue types respond to specialized MRI-sequences, in a fashion similar to diffusion-tensor imaging, combined with biophysical modelling. Here at the Wellcome Trust Centre for Neuroimaging, our physicists and methods specialists have teamed up to push these methods to their limit, delivering sub-millimetre, whole-brain acquisition techniques that can be completed in less than 30 minutes. By combining advanced biophysical modelling with specialized image co-registration, segmentation, and normalization routines in a process known as ‘voxel-based quantification’ (VBQ), these methods allow us to image key markers of histological brain factors. Here is a quick description of the method from a primer at our centre’s website:

Anatomical MR imaging has not only become a cornerstone in clinical diagnosis but also in neuroscience research. The great majority of anatomical studies rely on T1-weighted images for morphometric analysis of local gray matter volume using voxel-based morphometry (VBM). VBM provides insight into macroscopic volume changes that may highlight differences between groups; be associated with pathology or be indicative of plasticity. A complimentary approach that has sensitivity to tissue microstructure is high resolution quantitative imaging. Whereas in T1-weighted images the signal intensity is in arbitrary units and cannot be compared across sites or even scanning sessions, quantitative imaging can provide neuroimaging biomarkers for myelination, water and iron levels that are absolute measures comparable across imaging sites and time points.

These biomarkers are particularly important for understanding aging, development, and neurodegeneration across the lifespan. Iron in particular is critical for the healthy development and maintenance of neurons, where it is used to drive ATP production in the glial support cells that create and maintain the myelin sheaths critical for neural function. Nutritional iron deficiency during foetal, childhood, or even adolescent development is linked to impaired memory and learning and to altered hippocampal function and structure8,9. Although iron homeostasis in the brain is hugely complex and poorly understood, we know that runaway iron in the brain is a key factor in degenerative diseases like Alzheimer’s and Parkinson’s10–16. Data from both neuroimaging and post-mortem studies indicate that brain iron increases throughout the lifespan, particularly in structures rich in neuromelanin such as the basal ganglia, caudate, and hippocampus. In Alzheimer’s and Parkinson’s, for example, it is thought that runaway iron in these structures eventually overwhelms the glial systems responsible for chelating (processing) iron; as iron becomes neurotoxic at excessive levels, this leads to a cascade of neural atrophy throughout the brain. Although we don’t know how this process begins (scientists believe factors including stress- and disease-related neuroinflammation, normal aging processes, and genetics all probably contribute), understanding how iron and myelination change over the lifespan is a crucial step towards understanding these diseases. Furthermore, because VBQ provides quantitative markers, data can be pooled and compared across research centres.

Recently I’ve been doing a lot of work with VBQ, examining for example how individual differences in metacognition and empathy relate to brain microstructure. One thing we were interested in doing with our data was examining whether we could follow up on previous work from our centre showing widespread age-related changes in iron and myelination. This was a pretty easy analysis to do using our 59 subjects, so I quickly put together a standard multiple regression model including age, gender, and total intracranial volume. Below are the maps for magnetization transfer (MT), longitudinal relaxation rate (R1), and effective transverse relaxation rate (R2*), which measure brain macromolecules/water, myelination, and iron respectively (click each image to explore the map in NeuroVault!). All maps are FWE cluster-corrected, adjusting for non-sphericity, at a p < 0.001 inclusion threshold.
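The regression itself is nothing exotic: a voxelwise general linear model with age as the effect of interest and gender and total intracranial volume as nuisance covariates. A minimal NumPy sketch of that design, on simulated data rather than our actual maps (in practice this runs in SPM with proper correction for multiple comparisons; all numbers below are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n_subj, n_vox = 59, 1000

# Hypothetical per-subject covariates: age, gender, total intracranial volume.
age = rng.uniform(18, 39, n_subj)
gender = rng.integers(0, 2, n_subj).astype(float)
tiv = rng.normal(1500.0, 100.0, n_subj)

# Toy quantitative maps (e.g., R2*), flattened to subjects x voxels, with a
# small simulated age effect of 0.02 units/year baked in.
maps = rng.standard_normal((n_subj, n_vox)) + 0.02 * age[:, None]

# Design matrix: intercept plus mean-centred covariates.
X = np.column_stack([np.ones(n_subj),
                     age - age.mean(),
                     gender - gender.mean(),
                     tiv - tiv.mean()])

# Voxelwise ordinary least squares: beta = argmin ||Y - X beta||^2.
beta, *_ = np.linalg.lstsq(X, maps, rcond=None)

# beta[1] holds the estimated age effect at each voxel.
age_effect = beta[1]
```

A t-map for the age contrast would then be formed from `age_effect` and its standard error at each voxel before thresholding.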


Effect of aging on MT

You can see that there is increased MT throughout the brain, particularly in the amygdala, postcentral gyrus, thalamus, and other midbrain and prefrontal areas. MT (roughly) measures macromolecular/water content in the brain, and is mostly sensitive to myelination and macromolecules found in cells such as microglia and astrocytes. Interestingly, our findings here contrast with those of Callaghan et al (2014), who found age-related decreases in myelination where we find increases. This is probably explained by differences in our samples.


Effect of aging on R1

R1 shows much more restricted effects, with increased R1 only in the left postcentral gyrus, at least in this sample. This is in contrast to Callaghan et al2, who found extensive negative MT and R1 effects, but in a much larger sample with much wider age-related variation (19–75, mean = 45). Whereas they reported widespread decreases in R1, we find no decreases and instead slight increases in both MT and R1. This may imply a U-shaped response of myelin to aging, which would fit with previous structural studies.

Our iron-sensitive map (R2*) somewhat reproduces their effects however, with significant increases in the hippocampus, posterior cingulate, caudate, and other dopamine-rich midbrain structures:


Effect of aging on R2*

Wow! What really strikes me is that we can find age-related increases in a very young sample of mostly UCL students: iron is already accumulating in the 18–39 range. For comparison, here are the key findings from Martina Callaghan’s paper:


From Callaghan et al, 2014. Increasing iron in green, decreasing myelin in red.


The age effects in left hippocampus are particularly interesting, as we found that iron and myelination in this area related to these participants’ metacognitive ability while controlling for age. Could this early-life iron accumulation be a predictive biomarker for the development of neurodegenerative disease later in life? I think so. Large-sample prospective imaging could really open up this question; does anyone know if UK Biobank will collect this kind of data? UK Biobank will eventually contain ~200k scans with full medical workups and follow-ups, and in a discussion on Facebook Karla Miller mentioned there may be some low-resolution R2* images in that data. It could be a big step forward to ask whether the first time point predicts clinical outcome; ultimately, early-life iron accumulation could be a key biomarker for neurodegeneration.



  1. Gogtay, N. & Thompson, P. M. Mapping gray matter development: implications for typical development and vulnerability to psychopathology. Brain Cogn. 72, 6–15 (2010).
  2. Callaghan, M. F. et al. Widespread age-related differences in the human brain microstructure revealed by quantitative magnetic resonance imaging. Neurobiol. Aging 35, 1862–1872 (2014).
  3. Sala-Llonch, R., Bartrés-Faz, D. & Junqué, C. Reorganization of brain networks in aging: a review of functional connectivity studies. Front. Psychol. 6, 663 (2015).
  4. Sugiura, M. Functional neuroimaging of normal aging: Declining brain, adapting brain. Ageing Res. Rev. (2016). doi:10.1016/j.arr.2016.02.006
  5. Weiskopf, N., Mohammadi, S., Lutti, A. & Callaghan, M. F. Advances in MRI-based computational neuroanatomy: from morphometry to in-vivo histology. Curr. Opin. Neurol. 28, 313–322 (2015).
  6. Callaghan, M. F., Helms, G., Lutti, A., Mohammadi, S. & Weiskopf, N. A general linear relaxometry model of R1 using imaging data. Magn. Reson. Med. 73, 1309–1314 (2015).
  7. Mohammadi, S. et al. Whole-Brain In-vivo Measurements of the Axonal G-Ratio in a Group of 37 Healthy Volunteers. Front. Neurosci. 9, (2015).
  8. Carlson, E. S. et al. Iron Is Essential for Neuron Development and Memory Function in Mouse Hippocampus. J. Nutr. 139, 672–679 (2009).
  9. Georgieff, M. K. The role of iron in neurodevelopment: fetal iron deficiency and the developing hippocampus. Biochem. Soc. Trans. 36, 1267–1271 (2008).
  10. Castellani, R. J. et al. Iron: The Redox-active Center of Oxidative Stress in Alzheimer Disease. Neurochem. Res. 32, 1640–1645 (2007).
  11. Bartzokis, G. Alzheimer’s disease as homeostatic responses to age-related myelin breakdown. Neurobiol. Aging 32, 1341–1371 (2011).
  12. Gouw, A. A. et al. Heterogeneity of white matter hyperintensities in Alzheimer’s disease: post-mortem quantitative MRI and neuropathology. Brain 131, 3286–3298 (2008).
  13. Bartzokis, G. et al. MRI evaluation of brain iron in earlier- and later-onset Parkinson’s disease and normal subjects. Magn. Reson. Imaging 17, 213–222 (1999).
  14. Berg, D. et al. Brain iron pathways and their relevance to Parkinson’s disease. J. Neurochem. 79, 225–236 (2001).
  15. Dexter, D. T. et al. Increased Nigral Iron Content and Alterations in Other Metal Ions Occurring in Brain in Parkinson’s Disease. J. Neurochem. 52, 1830–1836 (1989).
  16. Jellinger, P. D. K., Paulus, W., Grundke-Iqbal, I., Riederer, P. & Youdim, M. B. H. Brain iron and ferritin in Parkinson’s and Alzheimer’s diseases. J. Neural Transm. – Park. Dis. Dement. Sect. 2, 327–340 (1990).



In defence of preregistration

Psychbrief has a great rebuttal to a recent paper arguing against pre-registration. Go read it!


This post is a response to “Pre-Registration of Analysis of Experiments is Dangerous for Science” by Mel Slater (2016). Preregistration is stating what you’re going to do and how you’re going to do it before you collect data (for more detail, read this). Slater gives a few examples of hypothetical (but highly plausible) experiments and explains why preregistering the analyses of the studies (not preregistration of the studies themselves) would not have worked. I will reply to his comments and attempt to show why he is wrong.

Slater describes an experiment with a between-groups design: two conditions (experimental and control), one response variable, and no covariates. You find the expected result, but it’s not exactly as you predicted; it turns out the result is totally explained by the gender of the participants (a variable you weren’t initially analysing, but which was balanced by chance). So…


OKCupid Data Leak – Framing the Debate

You’ve probably heard by now that a ‘researcher’ by the name of Emil Kirkegaard released the sensitive data of 70,000 individuals from OKCupid on the Open Science Framework. This is an egregious violation of research ethics, and we’re already beginning to see mainstream media coverage of this unfolding story. I’ve been following it pretty closely as it involves my PhD alma mater, Aarhus University. All I want to do here is collect relevant links and facts for those who may not be aware of the story. This debacle is likely going to become a key discussion piece in future debates over how to conduct open science. Jump to the bottom of this post for a continuously updated collection of news coverage, blogs, and tweets as the issue unfolds.

Emil himself continues to fan flames by being totally unapologetic:

An open letter has been drafted here, currently bearing the signatures of over 150 individuals (myself included), petitioning Aarhus University for a full statement and investigation of the issue:

Meanwhile Aarhus University has stated that Emil acted without oversight or any affiliation with AU, and that if he has claimed otherwise they intend to take (presumably legal) action:


I’m sure a lot more is going to be written as this story unfolds; the implications for open science are potentially huge. Already we’re seeing scientists wonder if this portends previously unappreciated risks of sharing data:

I just want to try and frame a few things. In the initial dust-up of this story there was a lot of confusion; I saw multiple accounts describing Emil as a “PI” (primary investigator), asking for his funding to be withdrawn, and so on. At the time the details surrounding the leak were rather unclear. Now, as more and more emerge, they paint a rather different picture, one which is not being accurately portrayed so far in the media coverage:

Emil is not a ‘researcher’. He acted without any supervision or direct affiliation with AU. He is a masters student who claims on his website that he is ‘only enrolled at AU to collect SU [government funds]’. Yet most outlets describe this as ‘researchers release OKCupid data’. When considering the implications for open science and data sharing, we need to frame this as what it is: a group of hacktivists exploiting a security vulnerability under the guise of open science, NOT a university-backed research program.

What implications does this have for open science? From my perspective, it looks like we need to discuss the role of oversight and data protection. Ongoing Twitter discussion suggests Emil violated EU data protection laws and the OKCupid terms of service. But other sources argue that this kind of scraping ‘attack’ is basically data-gathering 101, and that nearly any undergraduate with the right education could have done it. It seems we need to have a conversation about our digital rights to data privacy, and whether the current protections are doing enough. Doesn’t OKCupid itself hold some responsibility for allowing this data to be accessed so easily? And what is the responsibility of the Open Science Framework? Do we need to put stronger safeguards in place? Could an organization like Anonymous, or even ISIS, ‘dox’ thousands of people and host the data there? These are extreme scenarios, but I think we need to frame them now, before people walk away with the idea that this is an indictment of data sharing in general.

Below is a collection of tweets, blogs, and news coverage of the incident:


Brian Nosek on the Open Science Framework’s response:

More tweets on larger issues:


Emil has stated he is not acting on behalf of AU:


News coverage:





Here is a great example of how bad this is: Wired ran a story with the headline ‘OkCupid Study Reveals the Perils of Big-Data Science’:

OkCupid Study Reveals the Perils of Big-Data Science

This is not a study! It is not ‘science’! At least not by any principled definition!


Here is a defense of Emil’s actions:



Is Frontiers in Trouble?

Lately it seems like the tide is rising against Frontiers. Originally hailed as a revolutionary open-access publishing model, the publishing group has been subject to intense criticism in recent years. Recent issues include being placed on Beall’s controversial ‘predatory publisher’ list, multiple high-profile disputes at the editorial level, and controversy over HIV- and vaccine-denialist articles published in the journal seemingly without peer review. As a proud author of two Frontiers articles and a former frequent reviewer, these issues, compounded by a generally poor perception of the journal, recently led me to stop all publication activities at Frontiers outlets. Although the official response from Frontiers to these issues has been mixed, yesterday a mass email from a section editor caught my eye:

Dear Review Editors, Dear friends and colleagues,

As some of you may know, Prof. Philippe Schyns recently stepped down from his role as Specialty Chief Editor in Frontiers in Perception Science, and I have been given the honor and responsibility of succeeding him into this function. I wish to extend to him my thanks and appreciation for the hard work he has put in building this journal from the ground up. I will strive to continue his work and maintain Frontiers in Perception Science as one of the primary journals of the field. This task cannot be achieved without the support of a dynamic team of Associate Editors, Review Editors and Reviewers, and I am grateful for all your past, and hopefully future efforts in promoting the journal.

I am aware that many scientists in our community have grown disappointed or even defiant of the Frontiers publishing model in general, and Frontiers in Perception Science is no exception here. Among the foremost concerns are the initial annoyance and ensuing disinterest produced by the automated editor/reviewer invitation system and its spam-like messages, the apparent difficulty in rejecting inappropriate manuscripts, and (perhaps as a corollary), the poor reputation of the journal, a journal to which many authors still hesitate before submitting their work. I have experienced these troubles myself, and it was only after being thoroughly reassured by the Editorial office on most of these counts that I accepted to get involved as Specialty Chief Editor. Frontiers is revising their system, which will now leave more time for Associate Editors to mandate Review Editors before sending out automated invitations. When they occur, automated RE invitations will be targeted to the most relevant people (based on keyword descriptors), rather than broadcast to the entire board. This implies that it is very important for each of you to spend a few minutes editing the Expertise keywords on your Loop profile page. Most of these keywords were automatically collected within your publications, and they may not reflect your true area of expertise. Inappropriate expertise keywords are one of the main reasons why you receive inappropriate reviewing invitations! In the new Frontiers system, article rejection options will be made more visible to the handling Associate Editor. Although my explicit approval is still required for any manuscript rejection, I personally vow to stand behind all Associate Editors who will be compelled to reject poor-quality submissions. (While perceived impact cannot be used as a rejection criterion, poor research or writing quality and objective errors in design, analysis or interpretation can and should be used as valid causes for rejection).
I hope that these measures will help limit the demands on the reviewers’ time, and contribute to advancing the standards and reputation of Frontiers in Perception Science. Each of you can also play a part in this effort by continuing to review articles that fall into your area of expertise, and by submitting your own work to the journal.

I look forward to working with all of you towards establishing Frontiers in Perception Science as a high-standard journal for our community.

It seems Frontiers is indeed aware of the problems and is hoping to win back wary reviewers and authors. But is it too little, too late? Discussing the problems at Frontiers is often met with severe criticism or outright dismissal by proponents of the OA publishing system, but I felt these dismissals neglected a wider negative perception of the publisher that has steadily grown over the past five years. To get a better handle on this, I asked my Twitter followers what they thought. 152 people responded as follows:

As some of you requested control questions, here are a few for comparison:


That is a stark difference between the two top open-access journals: whereas only 19% said there was no problem at Frontiers, a full 50% said there is no problem at PLOS ONE. I think we can see that, even accounting for general science skepticism, opinions of Frontiers are particularly negative.

Sam Schwarzkopf also lent some additional data, comparing the major open-access outlets across the field – Frontiers again comes out poorly, although strangely so does F1000:

These data confirm what I had already feared: public perception among scientists (insofar as we can infer anything from such a poll) is lukewarm at best. Frontiers has a serious perception problem. Only 19% of 121 respondents were willing to outright say there was no problem at the journal; a full 45% said there was a serious problem, and 36% were unsure. Of course, to fully evaluate these numbers we’d like to know the base rate of similar responses for other journals, but I cannot imagine any Frontiers author, reviewer, or editor feeling joy at these numbers – I certainly do not. Furthermore, they reflect a widespread negativity I hear frequently from colleagues across the UK and Denmark.

What underlies this negative perception? As many proponents point out, Frontiers has actually been quite diligent in responding to user complaints: controversial papers have been put under review immediately, overly spammy review invitations and special-issue invites have largely ceased, and so on. I would argue the issue is not any single mistake on the part of Frontiers leadership, but a growing history of errors contributing to a perception that the journal follows a profit-led, ‘publish anything’ model. At times the journal feels totally automated, with little human care given to publishing, and with extremely high fees. What are some of the specific complaints I regularly hear from colleagues?

  • Spammy special-issue invites. An older issue, but at Frontiers’ inception many authors were inundated with constant invitations to special issues, many only tangentially related to their specialties.
  • Spammy review invites. Colleagues who signed on as ‘Review Editors’ (basically repeat reviewers) reported being hit with as many as ten requests to review in a month, again many without relevance to their interests.
  • Related to both of the above, a perception that special issues and articles are frequently reviewed by close colleagues with little oversight. Similarly, many special issues were edited by junior researchers at the PhD level.
  • Endless review. I’ve heard numerous complaints that even fundamentally flawed or unpublishable papers are difficult or impossible to reject. Reviewers report going through multiple rounds of charitable review, finding the paper only gets worse and worse, only to be removed from the review by editors and see the paper published without them.

Again, Frontiers has responded to each of these issues in various ways. For example, Frontiers originally defended the special issues, saying they were intended to give junior researchers an outlet for their ideas – fair enough, and the spam issues have largely ceased. Still, I would argue it is the build-up and repetition of these issues that has made authors and readers wary of the journal. This, coupled with the high fees and feeling of automation, leads to a perception that the outlet is mostly junk. That is a shame, as there are certainly many high-value articles in Frontiers outlets. Nevertheless, academics are extremely bloodshy, and negative press creates a vicious feedback loop: if researchers feel Frontiers is a low-quality, spam-generating publisher that relies on overly automated processes, they are unlikely to submit their best work or review there. The quality of both drops, and the cycle intensifies.

For my part, I don’t intend to return to Frontiers unless they begin publishing reviews. I think this would go a long way to stemming many of these issues and encourage authors to judge individual articles on their own merits.

What do you think? What can be done to stem the tide? Please add your own thoughts, and stories of positive or negative experiences at Frontiers, in the comments.







The Wild West of Publication Reform Is Now

It’s been a while since I’ve tried out my publication reform revolutionary hat (it comes in red!), but tonight as I was winding down I came across a post I simply could not resist. Titled “Post-publication peer review and the problem of privilege” by evolutionary ecologist Stephen Heard, the post argues that we should be cautious of post-publication review schemes insofar as they may bring about a new era of privilege in research consumption. Stephen writes:

“The packaging of papers into conventional journals, following pre-publication peer review, provides an important but under-recognized service: a signalling system that conveys information about quality and breadth of relevance. I know, for instance, that I’ll be interested in almost any paper in The American Naturalist*. That the paper was judged (by peer reviewers and editors) suitable for that journal tells me two things: that it’s very good, and that it has broad implications beyond its particular topic (so I might want to read it even if it isn’t exactly in my own sub-sub-discipline). Take away that peer-review-provided signalling, and what’s left? A firehose of undifferentiated preprints, thousands of them, that are all equal candidates for my limited reading time (such that it exists). I can’t read them all (nobody can), so I have just two options: identify things to read by keyword alerts (which work only if very narrowly focused**), or identify them by author alerts. In other words, in the absence of other signals, I’ll read papers authored by people who I already know write interesting and important papers.”

In a nutshell, Stephen turns the entire argument for PPPR and publishing reform on its head: high-impact[1] journals don’t represent elitism; rather, they give the no-name rising young scientist a chance to have their work read and cited. This argument really made me pause, as it represents the polar opposite of almost my entire worldview on the scientific game and academic publishing. In my view, top-tier journals represent an entrenched system of elitism masquerading as meritocracy. They make arbitrary, journalistic decisions that exert intense power over career advancement. If anything, the self-publication revolution represents the ability of a ‘nobody’ to shake the field with a powerful argument or study.

Needless to say, I was at first shocked to see this argument supported by a number of other scientists on Twitter, who felt that it represented “everything wrong with the anti-journal rhetoric” spouted by loons such as myself. But then I remembered that this is in fact a version of an argument I hear almost weekly when similar discussions come up with colleagues. Ever since I wrote my pie-in-the-sky self-publishing manifesto (don’t call it a manifesto!), I’ve been subjected (and rightly so!) to a kind of trial-by-peers as a de facto representative of the ‘revolution’. Most recently I was even cornered at a holiday party by a large and intimidating physicist who yelled at me that I was naïve and that “my system” would never work, for almost exactly the reasons raised in Stephen’s post. So let’s take a look at what these common worries are.

The Filter Problem

Bar none, the first and most common complaint I hear when talking about publication reform is the “filter problem”. Stephen describes the fear quite succinctly: how will we ever find the stuff worth reading when the data deluge hits? How can we sort the wheat from the chaff if journals don’t do it for us?

I used to take this problem seriously and try to dream up all kinds of neato, reddit-like schemes to solve it. But the truth is, it represents a way of thinking that is rapidly becoming irrelevant. Journal-based indexing isn’t a useful way to find papers; it is one signal in a sea of information, and it isn’t at all clear what it actually represents. I suspect the people who worry about the filter problem tend to be more senior scientists who already struggle to keep up with the literature. For one thing, science is marked by an incessant march towards specialization. The notion that hundreds of people must read and cite our work for it to be meaningful is largely poppycock. The average paper is mostly technical, incremental, and obvious in nature. This is absolutely fine and necessary – not everything can be ground-breaking, and even the breakthroughs must be vetted in projects that are by definition less so. For the average paper, then, being regularly cited by 20–50 people is damn good and likely represents the total target audience in that topic area. If you network with those people using social media and traditional conferences, it really isn’t hard to get your paper into their hands.

Moreover, the truly ground-breaking stuff will find its audience no matter where it is published. We solve the filter problem every single day by publicly sharing and discussing papers that interest us. Arguing that we need journals to solve this problem ignores the fact that they obscure good papers behind meaningless brands, and more importantly, that scientists are perfectly capable of identifying excellent papers from content alone. You can smell a relevant paper from a mile away – regardless of where it is published! We don’t need to wait for some pie-in-the-sky centralised service to solve this ‘problem’ (although someday, once the dust settles, I’m sure such things will be useful). Just go out and read some papers that interest you! Follow some interesting people on Twitter. Develop a professional network worth having! And don’t buy into the idea that the whole world must read your paper for it to be worth it.

The Privilege Problem 

Ok, so let’s say you agree with me to this point. Using some combination of email, social media, alerts, and RSS, you feel fully capable of finding relevant stuff for your research (I do!). But you’re worried about this brave new world where people archive any old rubbish they like and embittered post-docs descend to sneer gleefully at it from the dark recesses of PubPeer. Won’t the new system be subject to favouritism, cults of personality, and the privilege of the elite? As Stephen says, isn’t it likely that popular people will have their papers reviewed and promoted while all the rest fade to the back?

The answer is yes and no. As I’ve said many times, there is no utopia. We can and must fight for a better system, but cheaters will always find a way[2]. No matter how much transparency and rigor we implement, someone is going to find a loophole. And the oldest of all loopholes is good old human corruption and hero worship. I’ve personally advocated for a day when data, code, and interpretation are all separately publishable, citable items that each contribute to one’s CV. In this brave new world, PPPRs would be performed by ‘review cliques’ who build up their reputation as reliable reviewers by consistently giving high marks to science objects that go on to garner acclaim, are rarely retracted, and perform well on various meta-analytic robustness indices (reproducibility, transparency, documentation, novelty, etc). These wouldn’t supplant pre-publication peer review; rather, we can ‘let a million flowers bloom’. I am all for a continuum of rigor, ranging from preregistered, confirmatory research with pre- and post-publication peer review, to fully exploratory, data-driven science that is simply uploaded to a repository with a ‘use at your peril’ warning. We don’t need to pit one reform tool against another; the brave new world will be a hybrid mixture of every tool we have at our disposal. Such a system would be massively transparent, but of course not perfect. We’d gain a cornucopia of new metrics by which to weight and reward scientists, but assuredly some clever folks would benefit more than others. We need to be ready when that day comes, aware of whatever pitfalls may belie our brave new science.

Welcome to the Wild West

Honestly though, all this kind of talk is just pointless. We all have our own opinions of what will be the best way to do science, or what will happen. For my own part I am sure some version of this sci-fi depiction is inevitable. But it doesn’t matter, because the revolution is here, it’s now, and it’s changing the way we consume and produce science right before our very eyes. Every day a new preprint lands on twitter with a massive splash. Just last week in my own field of cognitive neuroscience, a preprint on problems in cluster inference for fMRI rocked the field, threatening to undermine thousands of existing papers while generating heated discussion in labs around the world. The week before that, #cingulategate erupted when PNAS published a paper which was met with instant outcry and roundly debunked by an incredible series of thorough post-publication reviews. A multitude of high-profile fraud cases have been exposed, and careers ended, via anonymous comments on PubPeer. People are out there, right now, finding and sharing papers, discussing the ones that matter, and arguing about the ones that don’t. The future is now, and we have almost no idea what shape it is taking, who the players are, or what it means for the future of funding and training. We need to stop acting like this is some fantasy future 10 years from now; we have entered the wild west and it is time to discuss what that means for science.

Author’s note: In case it isn’t clear, I’m quite glad that Stephen raised the important issue of privilege. I am sure that there are problems to be rooted out and discussed along these lines, particularly in terms of the way PPPR and filtering are accomplished now in our wild west. What I object to is the idea that the future will look like it does now; we must imagine a future where science is radically improved!

[1] I’m not sure if Stephen meant high impact as I don’t know the IF of American Naturalist, maybe he just meant ‘journals I like’.

[2] Honestly this is where we need to discuss changing the hyper-capitalist system of funding and incentives surrounding publication but that is another post entirely! Maybe people wouldn’t cheat so much if we didn’t pit them against a thousand other scientists in a no-holds-barred cage match to the death.


Predictive coding and how the dynamical Bayesian brain achieves specialization and integration

Author’s note: this marks the first in a new series of journal-entry style posts in which I write freely about things I like to think about. The style is meant to be informal and off the cuff, building towards a sort of Socratic dialogue. Please feel free to argue or debate any point you like. These are meant to serve as exercises in writing and thinking, to improve the quality of both and lay groundwork for future papers.

My wife Francesca and I are spending the winter holidays vacationing in the north Italian countryside with her family. Today, in our free time, our discussions turned to how predictive coding and generative models can accomplish the multimodal perception that characterizes the brain. To this end, Francesca asked a question we found particularly thought-provoking: if the brain at all levels is only communicating forward what is not predicted (prediction error), how can you explain the functional specialization that characterizes the different senses? For example, if each sensory hierarchy is only communicating prediction errors, what explains their unique specialization in terms of, e.g., the frequency, intensity, or quality of sensory inputs? Put another way, how can the different sensations be represented if the entire brain is only communicating in one format?

We found this quite interesting, as it seems straightforward and yet the answer lies at the very basis of predictive coding schemes. To arrive at an answer we first had to lay a little groundwork in terms of information theory and basic neurobiology. What follows is a grossly oversimplified account of the basic neurobiology of perception, which serves only as a kind of philosopher’s toy example to consider the question. Please feel free to correct any gross misunderstandings.

To begin, it is clear, at least according to Shannon’s theory of information, that any sensory property can be encoded in a simple system of ones and zeros (or nerve impulses). Frequency, time, intensity, and so on can all be re-described in terms of such a simplistic encoding scheme; if this were not the case, modern television wouldn’t work. Second, each sensory hierarchy presumably begins with a sensory receptor, which directly transduces physical fluctuations into a neuronal code. For example, in the auditory hierarchy the cochlea contains small hairs that vibrate only to a particular frequency of sound wave. This vibration, through a complex neuro-mechanical relay, results in a tonotopic depolarization of first-order neurons in the spiral ganglion.

The human cochlea, a fascinating neuro-mechanical apparatus that directly transduces air vibrations into neural representations.

It is here at the first-order neuron where the hierarchy presumably begins, and also where functional specialization becomes possible. It seems to us that predictive coding should say that the first neuron is simply predicting a particular pattern of inputs, which correspond directly to an expected external physical property. To give a toy example, say we present the brain with a series of tones which reliably increase in frequency at 1 Hz intervals. At the lowest level the neuron will fire at a constant rate if the frequency at interval n is 1 Hz greater than the previous interval, and will fire more or less if the frequency is greater or less than this basic expectation, creating a positive or negative prediction error (remember that the neuron should only alter its firing pattern if something unexpected happens). Since frequency here is signaled directly by the mechanical vibration of the cochlear hairs, the first-order neuron is simply predicting which frequency will be signaled. More realistically, each sensory neuron is probably only predicting whether or not a particular frequency will be signaled – we know from neurobiology that low-level neurons are basically tuned to a particular sensory feature, whereas higher-level neurons encode receptive fields across multiple neurons or features. All this is to say that the first-order neuron is specialized for frequency because all it can predict is frequency; the only afferent input is the direct result of sensory transduction. The point here is that specialization in each sensory system arises in virtue of the fact that the inputs correspond directly to a physical property.
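The toy example can be sketched in a few lines (hypothetical numbers, nothing biophysical): a unit that expects each tone to sit 1 Hz above the last effectively stays silent until the sequence deviates from that expectation:

```python
# Toy sketch: a unit that predicts the next tone's frequency and
# signals only the deviation (prediction error), not the raw input.
def prediction_errors(tones, step=1.0):
    """Expect each tone to be `step` Hz above the previous one."""
    errors = []
    for prev, current in zip(tones, tones[1:]):
        predicted = prev + step
        errors.append(current - predicted)  # 0.0 when nothing unexpected happens
    return errors

# A rising scale that jumps too far at the final tone:
print(prediction_errors([100.0, 101.0, 102.0, 105.0]))  # [0.0, 0.0, 2.0]
```

The unit carries no explicit label for “frequency”; its specialization consists entirely in the fact that frequency is the only thing its input can be about.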

Presumably, first order neurons predict the presence or absence of a particular, specialized sensory feature owing to their input. Credit: wikipedia.

Now, as one ascends higher in the hierarchy, each subsequent level is predicting the activity of the previous. The first-order neuron predicts whether a given frequency is presented, the second perhaps predicts if a receptive field is activated across several similarly tuned neurons, the third predicts a particular temporal pattern across multiple receptive fields, and so on. Each subsequent level is predicting a “hyperprior” encoding a higher order feature of the previous level. Eventually we get to a level where the prediction is no longer bound to a single sensory domain, but instead has to do with complex, non-linear interactions between multiple features. A parietal neuron thus might predict that an object in the world is a bird if it sings at a particular frequency and has a particular bodily shape.
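The stacking can be caricatured as follows (purely illustrative: the layer sizes and random weights are placeholders standing in for learned generative connections, and real message passing also involves precision weighting of the descending predictions):

```python
import numpy as np

# Each level holds a state that generates a descending prediction of the
# level below; only the residual (prediction error) is passed upward.
rng = np.random.default_rng(0)
sizes = [8, 4, 2]  # e.g. frequency channels -> receptive fields -> temporal patterns
weights = [rng.normal(size=(sizes[i + 1], sizes[i])) for i in range(len(sizes) - 1)]

def ascend(sensory_input, states):
    """One upward sweep: each level explains what it can, forwards the rest."""
    signal, errors = sensory_input, []
    for W, state in zip(weights, states):
        prediction = W.T @ state      # descending prediction of the level below
        error = signal - prediction   # only the unexplained part ascends
        errors.append(error)
        signal = W @ error            # drives the next level's update
    return errors

states = [np.zeros(s) for s in sizes[1:]]          # beliefs at the higher levels
errors = ascend(rng.normal(size=sizes[0]), states)
print([e.shape for e in errors])                   # [(8,), (4,)]
```

Note that every level runs the identical operation; the apparent division of labor falls out of where each level sits in the stack, not out of any special machinery.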

The motif of hierarchical message passing which encompasses the nervous system, according to the Free Energy principle.

If this general scheme is correct, then according to hierarchical predictive coding functional specialization primarily arises in virtue of the fact that at the lowest level each hierarchy is receiving inputs that strictly correspond to a particular feature. The cochlea is picking up fluctuations in air vibration (sound), the retina is picking up fluctuations in light frequency (light), and the skin is picking up changes in thermal amplitude and tactile frequency (touch). The specialization of each system is due to the fact that each is attempting to predict higher and higher order properties of those low-level inputs, which are by definition particular to a given sensory domain. Any further specialization in the hierarchy must then arise from the fact that higher levels of the brain predict inputs from multiple sensory systems – we might find multimodal object-related areas simply because the best hyperprior governing nonlinear relationships between frequency and shape is an amodal or cross-modal object. The actual etiology of higher-level modules is a bit more complicated than this, and requires an appeal to evolution to explain in detail, but we felt this was a generally sufficient explanation of specialization.

Nonlinearity of the world and perception: prediction as integration

At this point, we felt like we had some insight into how predictive coding can explain functional specialization without needing to appeal to special classes of cortical neurons for each sensation. Beyond the sensory receptors, the function of each system can be realized simply by means of a canonical, hierarchical prediction of each layered input, right down to the neurons which predict which frequency will be signaled. However, something was still missing, prompting Francesca to ask – how can this scheme explain the coherent, multimodal, integrated perception that characterizes conscious experience?

Indeed, we certainly do not experience perception as a series of nested predictions. All of the aforementioned machinery functions seamlessly beyond the point of awareness. In phenomenology, one way to describe such influences is as prenoetic (before knowing; see also prereflective), i.e. things that influence conscious experience without themselves appearing in experience. How then can predictive coding explain the transition from segregated, feature-specific predictions to the unified percept we experience?

When we arrange sensory hierarchies laterally, we see the “Markov blanket” structure of the brain emerge. Each level predicts the control parameters of subsequent levels. In this way integration arises naturally from the predictive brain.

As you might guess, we already hinted at part of the answer. Imagine if instead of picturing each sensory hierarchy as an isolated pyramid, we instead arrange them such that each level is parallel to its equivalent in the ‘neighboring’ hierarchy. On this view, we can see that relatively early in each hierarchy you arrive at multi-sensory neurons that are predicting conjoint expectations over multiple sensory inputs. Conveniently, this observation matches what we actually know about the brain; audition, touch, and vision all converge in temporo-parietal association areas.

Perceptual integration is thus achieved as easily as specialization; it arises from the fact that each level predicts a hyperprior on the previous level. As one moves upwards through the hierarchy, this means that each level predicts more integrated, abstract, amodal entities. Association areas don’t predict just that a certain sight or sound will appear, but instead encode a joint expectation across both (or all) modalities. Just like the fusiform face area predicts complex, nonlinear conjunctions of lower-level visual features, multimodal areas predict nonlinear interactions between the senses.

Half a cat and half a post, or a cat behind a post? The deep convolutional nature of the brain helps us solve this and similar nonlinear problems.

It is this nonlinearity that makes predictive schemes so powerful and attractive. To understand why, consider the task the brain must solve to be useful. Sensory impressions are not generated by simple linear inputs; for perception to be useful to an organism it must process the world at a level that is relevant for that organism. This is the world of objects, persons, and things, not disjointed, individual sensory properties. When I watch a cat walk behind a fence, I don’t perceive it as two halves of a cat and a fence post, but rather as a cat hidden behind a fence. These kinds of nonlinear interactions between objects and properties of the world are ubiquitous in perception; the brain must solve not for the immediately available sensory inputs but rather for the complex hidden causes underlying them. This is achieved in a similar manner to a deep convolutional network; each level performs the same canonical prediction, yet together the hierarchy will extract the hidden features that best explain the complex interactions that produce physical sensations. In this way the predictive brain somersaults over the binding problem of perception; perception is integrated precisely because conjoint hypotheses are better, more useful explanations than discrete ones. As long as the network has sufficient hierarchical depth, it will always arrive at these complex representations. It’s worth noting we can observe the flip side of this process in common visual illusions, where the higher-order percept or prior “fills in” our actual sensory experience (e.g. when we perceive a convex circle as being lit from above).

Our higher-level, integrative priors “fill in” our perception.
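The “filling in” is easy to see in the standard Gaussian cue-combination arithmetic (a textbook formula, not something specific to this post): the percept is a precision-weighted average of prior and evidence, so a strong ‘light from above’ prior dominates an ambiguous shading cue:

```python
def posterior_mean(prior_mean, prior_precision, obs, obs_precision):
    """Precision-weighted fusion of a prior belief and a noisy observation."""
    total = prior_precision + obs_precision
    return (prior_precision * prior_mean + obs_precision * obs) / total

# Ambiguous shading (obs = 0.0, low precision) meets a strong prior
# favoring the convex, lit-from-above interpretation (prior = 1.0):
print(posterior_mean(1.0, 4.0, 0.0, 1.0))  # 0.8 -> percept dominated by the prior
```

The same arithmetic run with a sharp, unambiguous input (high observation precision) would pull the percept back toward the data, which is why illusions weaken as the stimulus becomes less ambiguous.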

Beating the homunculus: the dynamic, enactive Bayesian brain

Feeling satisfied with this, Francesca and I concluded our fun holiday discussion by thinking about some common misunderstandings this scheme might lead one into. For example, the notion of hierarchical prediction explored above might lead one to expect that there has to be a “top” level, a kind of super-homunculus who sits in the prefrontal cortex, predicting the entire sensorium. This would be an impossible solution; how could any subsystem of the brain possibly predict the entire activity of the rest? And wouldn’t that level itself need to be predicted, to be realised in perception, leading to infinite regress? Luckily the intuition that these myriad hypotheses must “come together” fundamentally misunderstands the Bayesian brain.

Remember that each level is only predicting the activity of that before it. The integrative parietal neuron is not predicting the exact sensory input at the retina; rather it is only predicting what pattern of inputs it should receive if the sensory input is an apple, or a bat, or whatever. The entire scheme is linked up this way; the individual units are just stupid predictors of immediate input. It is only when you link them all up together in a deep network, that the brain can recapitulate the complex web of causal interactions that make up the world.

This point cannot be stressed enough: predictive coding is not a localizationist enterprise. Perception does not come about because a magical brain area inverts an entire world model. It comes about in virtue of the distributed, dynamic activity of the entire brain as it constantly attempts to minimize prediction error across all levels. Ultimately the “model” is not contained “anywhere” in the brain; the entire brain itself, and the full network of connection weights, is itself the model of the world. The power to predict complex nonlinear sensory causes arises because the best overall pattern of interactions will be that which most accurately (or usefully) explains sensory inputs and the complex web of interactions which causes them. You might rephrase the famous saying as “the brain is its own best model of the world”.

As a final consideration, it is worth noting some misconceptions may arise from the way we ourselves perform Bayesian statistics. As an experimenter, I formalize a discrete hypothesis (or set of hypotheses) about something and then invert that model to explain data in a single step. In the brain, however, the “inversion” is just the constant interplay of input and feedback across the nervous system at all levels. In fact, under this distributed view (at least according to the Free Energy Principle), neural computation is deeply embodied, as actions themselves complete the inferential flow to minimize error. Thus just like neural feedback, actions function as ‘predictions’, generated by the inferential mechanism to render the world more sensible to our predictions. This ultimately minimises prediction error just as internal model updates do, albeit in a different ‘direction of fit’ (world to model, instead of model to world). In this way the ‘model’ is distributed across the brain and body; actions themselves are as much a part of the computation as the brain itself and constitute a form of “active inference”. In fact, if one extends this view to evolution, the morphological shape of the organism is itself a kind of prior, predicting the kinds of sensations, environments, and actions the agent is likely to inhabit. This intriguing idea will be the subject of a future blog post.
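The two directions of fit can be caricatured in a handful of lines (an illustrative toy, not the Free Energy Principle equations): the same prediction error shrinks whether the belief is revised toward the world or the world is pushed toward the belief:

```python
def perceive(belief, sensed, lr=0.5):
    """Model-to-world fit: revise the belief toward the input."""
    return belief + lr * (sensed - belief)

def act(world, belief, lr=0.5):
    """World-to-model fit: change the world toward the prediction."""
    return world + lr * (belief - world)

belief, world = 20.0, 10.0        # e.g. predicted vs actual room temperature
world = act(world, belief)        # action: turn up the heat -> world = 15.0
belief = perceive(belief, world)  # perception: update belief -> belief = 17.5
print(world, belief)              # both moves shrink the same prediction error
```

Which route the system takes is, on the FEP story, a matter of relative precision: trust the prediction and you act; trust the senses and you update.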


We feel this is an extremely exciting view of the brain. The idea that an organism can achieve complex intelligence simply by embedding a simple repetitive motif within a dynamical body seems to us to be a fundamentally novel approach to the mind. In future posts and papers, we hope to further explore the notions introduced here, considering questions about “where” these embodied priors come from and what they mean for the brain, as well as the role of precision in integration.

Questions? Comments? Feel like I’m an idiot? Sound off in the comments!

Further Reading:

Brown, H., Adams, R. A., Parees, I., Edwards, M., & Friston, K. (2013). Active inference, sensory attenuation and illusions. Cognitive Processing, 14(4), 411–427.
Feldman, H., & Friston, K. J. (2010). Attention, Uncertainty, and Free-Energy. Frontiers in Human Neuroscience, 4.
Friston, K., Adams, R. A., Perrinet, L., & Breakspear, M. (2012). Perceptions as Hypotheses: Saccades as Experiments. Frontiers in Psychology, 3.
Friston, K., & Kiebel, S. (2009). Predictive coding under the free-energy principle. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 364(1521), 1211–1221.
Friston, K., Thornton, C., & Clark, A. (2012). Free-Energy Minimization and the Dark-Room Problem. Frontiers in Psychology, 3.
Moran, R. J., Campo, P., Symmonds, M., Stephan, K. E., Dolan, R. J., & Friston, K. J. (2013). Free Energy, Precision and Learning: The Role of Cholinergic Neuromodulation. The Journal of Neuroscience, 33(19), 8227–8236.