
Unexpected arousal shapes confidence – blog and news coverage

For those looking for a good summary of our recent publication, several outlets gave us solid coverage for experts and non-experts alike. Here is a short roundup of the most useful write-ups:

The eLife digest itself was excellent – make sure to fill out the survey at the end to let eLife know what you think of the digests (I love them).

via Arousing confidence – Brains and Behaviour – Medium

As you read the words on this page, you might also notice a growing feeling of confidence that you understand their meaning. Every day we make decisions based on ambiguous information and in response to factors over which we have little or no control. Yet rather than being constantly paralysed by doubt, we generally feel reasonably confident about our choices. So where does this feeling of confidence come from?

Computational models of human decision-making assume that our confidence depends on the quality of the information available to us: the less ambiguous this information, the more confident we should feel. According to this idea, the information on which we base our decisions is also the information that determines how confident we are that those decisions are correct. However, recent experiments suggest that this is not the whole story. Instead, our internal states — specifically how our heart is beating and how alert we are — may influence our confidence in our decisions without affecting the decisions themselves.

To test this possibility, Micah Allen and co-workers asked volunteers to decide whether dots on a screen were moving to the left or to the right, and to indicate how confident they were in their choice. As the task became objectively more difficult, the volunteers became less confident about their decisions. However, increasing the volunteers’ alertness or “arousal” levels immediately before a trial countered this effect, showing that task difficulty is not the only factor that determines confidence. Measures of arousal — specifically heart rate and pupil dilation — were also related to how confident the volunteers felt on each trial. These results suggest that unconscious processes might exert a subtle influence on our conscious, reflective decisions, independently of the accuracy of the decisions themselves.

The next step will be to develop more refined mathematical models of perception and decision-making to quantify the exact impact of arousal and other bodily sensations on confidence. The results may also be relevant to understanding clinical disorders, such as anxiety and depression, where changes in arousal might lock sufferers into an unrealistically certain or uncertain world.

The PNAS journal club also published a useful summary, including some great quotes from Phil Corlett and Rebecca Todd:

via Journal Club: How your body feels influences your confidence levels | National Academy of Sciences

… Allen’s findings are “relevant to anyone whose job is to make difficult perceptual judgments trying to see signal in a lot of noise,” such as radiologists or baggage inspectors, says cognitive neuroscientist Rebecca Todd at the University of British Columbia in Vancouver, who did not take part in the research. Todd suggests that people who apply decision-making models to real world problems need to better account for the influence of internal or emotional states on confidence.

The fact that bodily states can influence confidence may even shed light on mental disorders, which often involve blunted or heightened signals from the body. Symptoms could result from how changes in sensory input affect perceptual decision-making, says cognitive neuroscientist and schizophrenia researcher Phil Corlett at Yale University, who did not participate in this study.

Corlett notes that some of the same ion channels involved in regulating heart rate are implicated in schizophrenia as well. “Maybe boosting heart rate might lead people with schizophrenia to see or hear things that aren’t present,” he speculates, adding that future work could analyze how people with mental disorders perform on these tasks…

I also wrote a blog post summarizing the article for The Conversation:

via How subtle changes in our bodies affect conscious awareness and decision confidence

How do we become aware of our own thoughts and feelings? And what enables us to know when we’ve made a good or bad decision? Every day we are confronted with ambiguous situations. If we want to learn from our mistakes, it is important that we sometimes reflect on our decisions. Did I make the right choice when I leveraged my house mortgage against the market? Was that stop light green or red? Did I really hear a footstep in the attic, or was it just the wind?

When events are more uncertain, for example if our windscreen fogs up while driving, we are typically less confident in what we’ve seen or decided. This ability to consciously examine our own experiences, sometimes called introspection, is thought to depend on the brain appraising how reliable or “noisy” the information driving those experiences is. Some scientists and philosophers believe that this capacity for introspection is a necessary feature of consciousness itself, forging the crucial link between sensation and awareness.

One important theory is that the brain acts as a kind of statistician, weighting options by their reliability, to produce a feeling of confidence more or less in line with what we’ve actually seen, felt or done. And although this theory does a reasonably good job of explaining our confidence in a variety of settings, it neglects an important fact about our brains – they are situated within our bodies. Even now, as you read the words on this page, you might have some passing awareness of how your socks sit on your feet, how fast your heart is beating or if the room is the right temperature.

Even if you were not fully aware of these things, the body is always shaping how we experience ourselves and the world around us. That is to say experience is always from somewhere, embodied within a particular perspective. Indeed, recent research suggests that our conscious awareness of the world is very much dependent on exactly these kinds of internal bodily states. But what about confidence? Is it possible that when I reflect on what I’ve just seen or felt, my body is acting behind the scenes? …

The New Scientist took an interesting angle not explored in the other write-ups, and also included a good response from Ariel Zylberberg:

via A bit of disgust can change how confident you feel | New Scientist

“We were tricking the brain and changing the body in a way that had nothing to do with the task,” Allen says. In doing so, they showed that a person’s sense of confidence relies on internal as well as external signals – and the balance can be shifted by increasing your alertness.

Allen thinks the reaction to disgust suppressed the “noise” created by the more varied movement of the dots during the more difficult versions of the task. “They’re taking their own confidence as a cue and ignoring the stimulus in the world.”

“It’s surprising that they show that confidence can be motivated by processes inside a person, instead of what we tend to believe, which is that confidence should be motivated by external things that affect a decision,” says Ariel Zylberberg at Columbia University in New York. “Disgust leads to aversion. If you try a food and it’s disgusting, you walk away from it,” says Zylberberg. “Here, if you induce disgust, high confidence becomes lower and low confidence becomes higher. It could be that disgust is generating this repulsion.”

It is not clear whether it is the feeling of disgust that changes a person’s confidence in this way, or whether inducing alertness with a different emotion, such as anger or fear, would have the same effect.

You can find all the coverage of our article using these excellent services, Altmetric & ImpactStory.

https://www.altmetric.com/details/12986857

https://impactstory.org/u/0000-0001-9399-4179/p/mfatd6ZhpW

Thanks to everyone who shared, enjoyed, and interacted with our research!

 


fMRI study of Shamans tripping out to phat drumbeats

Every now and then, I'm browsing RSS on the tube commute and come across a study that makes me laugh out loud. This of course results in me receiving lots of 'tuts' from my co-commuters. Anyhow, the latest such entry in the world of cognitive neuroscience is a study examining the brain response to drum beats in shamanic practitioners. Michael Hove and colleagues at the Max Planck Institute in Leipzig set out to study "Perceptual Decoupling During an Absorptive State of Consciousness" using functional magnetic resonance imaging (fMRI). What exactly does that mean? Apparently: looking at how brain connectivity in 'experienced shamanic practitioners' changes when they listen to rhythmic drumming. Hove and colleagues explain that across a variety of cultures, 'quasi-isochronous drumming' is used to induce 'trance states'. If you've ever been dancing around a drum circle in the full moon light, or tranced out to Shpongle in your living room, I guess you get the feeling, right?

Anyway, Hove et al recruited 15 participants who were trained in "core shamanism," described as:

“a system of techniques developed and codified by Michael Harner (1990) based on cross-cultural commonalities among shamanic traditions. Participants were recruited through the German-language newsletter of the Foundation of Shamanic Studies and by word of mouth.”

They then played these participants rhythmic isochronous drumming (the trance condition) versus drumming with a less regular timing (the control condition). In what might be the greatest use of a Likert scale of all time, participants rated whether they "would describe [their] experience as a deep shamanic journey" (1 = not at all; 7 = very much so), and indeed described the trance condition as, well, more trancey. Hove and colleagues then used a fairly standard connectivity analysis, examining eigenvector centrality differences between the two drumming conditions, as well as seed-based functional connectivity:

[Figure: eigenvector centrality map for the trance condition]

[Figure: seed-based functional connectivity map]
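
For readers curious what an eigenvector centrality analysis actually involves, here is a minimal, simplified sketch at the ROI level. Hove et al worked voxel-wise on the full fMRI data, so this is not their pipeline; the array shapes and condition names below are made up.

```python
import numpy as np

def eigenvector_centrality(timeseries):
    """Eigenvector centrality of a functional connectivity graph.

    timeseries: (n_timepoints, n_regions) array of ROI signals from one condition.
    Returns one centrality value per region (the leading eigenvector of the
    positive-weighted correlation matrix), so 'hub-like' regions score highest.
    """
    fc = np.corrcoef(timeseries.T)        # region-by-region correlation matrix
    np.fill_diagonal(fc, 0.0)             # ignore self-connections
    fc = np.clip(fc, 0.0, None)           # keep positive weights only
    _, vecs = np.linalg.eigh(fc)          # eigendecomposition of the symmetric matrix
    centrality = np.abs(vecs[:, -1])      # eigenvector of the largest eigenvalue
    return centrality / centrality.sum()

# Toy comparison between the two drumming conditions (random data stands in
# for real ROI time series; 300 volumes, 50 regions per condition).
rng = np.random.default_rng(0)
trance, control = rng.standard_normal((2, 300, 50))
centrality_difference = eigenvector_centrality(trance) - eigenvector_centrality(control)
print(centrality_difference.round(3))
```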

Hove et al report that, compared to the non-trance condition, the posterior/dorsal cingulate, insula, and auditory brainstem regions become more 'hub-like', as indicated by a higher overall degree centrality for these regions. Further, these regions showed stronger functional connectivity with the posterior cingulate cortex. I'll let Hove and colleagues explain what to make of this:

“In sum, shamanic trance involved cooperation of brain networks associated with internal thought and cognitive control, as well as a dampening of sensory processing. This network configuration could enable an extended internal train of thought wherein integration and moments of insight can occur. Previous neuroscience work on trance is scant, but these results indicate that successful induction of a shamanic trance involves a reconfiguration of connectivity between brain regions that is consistent across individuals and thus cannot be dismissed as an empty ritual.”

Ultimately the authors' conclusion seems to be that these brain connectivity differences show that, if nothing else, something must be 'really going on' in shamanic states. To be honest, I'm not really sure anyone disagreed with that to begin with. I can't critique this study without thinking of early (and ongoing) meditation research, where esoteric monks are placed in scanners to show that 'something really is going on' in meditation. This argument to me seems to rely on a folk-psychological misunderstanding of how the brain works. Even in placebo conditioning, a typical example of a 'mental effect', we know of course that changes in the brain are responsible. Every experience (regardless of how complex) has some neural correlate. The trick is to relate these neural factors to behavioral ones in a way that actually advances our understanding of the mechanisms and experiences that generate them. The difficulty with these kinds of studies is that all we can do is perform reverse inference to try and interpret what is going on; the authors' conclusion about changes in sensory processing is a clear example of this. What do changes in brain activity actually tell us about trance (and other esoteric) states? Certainly they don't reveal any particular mechanism or phenomenological quality without being coupled to some meaningful understanding of the states themselves. As a clear example, we're surely pushing reductionism to its limit by asking participants to rate a self-described transcendent state on a unidirectional Likert scale. The authors do cite Francisco Varela (a pioneer of neurophenomenological methods), but don't seem to further consider these limitations or possible future directions.

Overall, I don’t want to seem overly critical of this amusing study. Certainly shamanic traditions are a deeply important part of human cultural history, and understanding how they impact us emotionally, cognitively, and neurologically is a valuable goal. For what amounts to a small pilot study, the protocols seem fairly standard from a neuroscience standpoint. I’m less certain about who these ‘shamans’ actually are, in terms of what their practice actually constitutes, or how to think about the supposed ‘trance states’, but I suppose ‘something interesting’ was definitely going on. The trick is knowing exactly what that ‘something’ is.

Future studies might thus benefit from a more direct characterization of esoteric states and the cultural practices that generate them, perhaps through collaboration with an anthropologist and/or the application of phenomenological and psychophysical methods. For now however, I'll just have to head to my local drum circle and vibe out the answers to these questions.

Hove MJ, Stelzer J, Nierhaus T, Thiel SD, Gundlach C, Margulies DS, Van Dijk KRA, Turner R, Keller PE, Merker B (2016) Brain Network Reconfiguration and Perceptual Decoupling During an Absorptive State of Consciousness. Cerebral Cortex 26:3116–3124.

 


Mapping the effects of age on brain iron, myelination, and macromolecules – with data!

The structure, function, and connectivity of the brain change considerably as we age [1–4]. Recent advances in MRI physics and neuroimaging have led to the development of new techniques which allow researchers to map quantitative parameters sensitive to key histological brain factors such as iron and myelination [5–7]. These quantitative techniques reveal the microstructure of the brain by leveraging our knowledge about how different tissue types respond to specialized MRI sequences, in a fashion similar to diffusion-tensor imaging, combined with biophysical modelling. Here at the Wellcome Trust Centre for Neuroimaging, our physicists and methods specialists have teamed up to push these methods to their limit, delivering sub-millimetre, whole-brain acquisition techniques that can be completed in less than 30 minutes. By combining advanced biophysical modelling with specialized image co-registration, segmentation, and normalization routines in a process known as 'voxel-based quantification' (VBQ), these methods allow us to image key markers of histological brain factors. Here is a quick description of the method from a primer at our centre's website:

Anatomical MR imaging has not only become a cornerstone in clinical diagnosis but also in neuroscience research. The great majority of anatomical studies rely on T1-weighted images for morphometric analysis of local gray matter volume using voxel-based morphometry (VBM). VBM provides insight into macroscopic volume changes that may highlight differences between groups; be associated with pathology or be indicative of plasticity. A complimentary approach that has sensitivity to tissue microstructure is high resolution quantitative imaging. Whereas in T1-weighted images the signal intensity is in arbitrary units and cannot be compared across sites or even scanning sessions, quantitative imaging can provide neuroimaging biomarkers for myelination, water and iron levels that are absolute measures comparable across imaging sites and time points.

These biomarkers are particularly important for understanding aging, development, and neurodegeneration throughout the lifespan. Iron in particular is critical for the healthy development and maintenance of neurons, where it is used to drive ATP production in the glial support cells that create and maintain the myelin sheaths critical for neural function. Nutritional iron deficiency during foetal, childhood, or even adolescent development is linked to impaired memory and learning, and altered hippocampal function and structure [8,9]. Although iron homeostasis in the brain is hugely complex and poorly understood, we know that runaway iron in the brain is a key factor in degenerative diseases like Alzheimer's and Parkinson's [10–16]. Data from both neuroimaging and post-mortem studies indicate that brain iron increases throughout the lifespan, particularly in structures rich in neuromelanin such as the basal ganglia, caudate, and hippocampus. In Alzheimer's and Parkinson's, for example, it is thought that runaway iron in these structures eventually overwhelms the glial systems responsible for chelating (processing) iron; since iron becomes neurotoxic at excessive levels, this leads to a cascade of neural atrophy throughout the brain. Although we don't know how this process begins (scientists believe factors including stress and disease-related neuroinflammation, normal aging processes, and genetics all probably contribute), understanding how iron and myelination change over the lifespan is a crucial step towards understanding these diseases. Furthermore, because VBQ provides quantitative markers, data can be pooled and compared across research centres.

Recently I've been doing a lot of work with VBQ, examining for example how individual differences in metacognition and empathy relate to brain microstructure. One thing we were interested in doing with our data was examining whether we could follow up on previous work from our centre showing widespread age-related changes in iron and myelination. This was a pretty easy analysis to do with our 59 subjects, so I quickly put together a standard multiple regression model including age, gender, and total intracranial volume. Below are the maps for magnetization transfer (MT), longitudinal relaxation rate (R1), and effective transverse relaxation rate (R2*), which measure brain macromolecules/water, myelination, and iron respectively (click each image to explore the map in NeuroVault!). All maps are FWE cluster-corrected, adjusting for non-sphericity, at a p < 0.001 inclusion threshold.
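
For readers who want a concrete picture of this kind of analysis, here is a minimal sketch of a voxel-wise multiple regression using nilearn. This is not the SPM-based VBQ pipeline we actually used, and the file names and covariate spreadsheet below are hypothetical.

```python
import pandas as pd
from nilearn.glm.second_level import SecondLevelModel

# Hypothetical inputs: one spatially normalized MT (or R1/R2*) map per subject,
# plus a covariate table with columns 'age', 'gender' (coded 0/1), and 'tiv'.
# Rows of the table must be in the same order as the list of maps.
covariates = pd.read_csv("subject_covariates.csv")
maps = [f"wMT_sub-{i:02d}.nii" for i in range(1, 60)]   # 59 subjects

design = pd.DataFrame({
    "age": covariates["age"],
    "gender": covariates["gender"],
    "tiv": covariates["tiv"],
    "intercept": 1.0,
})

# Fit a voxel-wise multiple regression and test the effect of age,
# controlling for gender and total intracranial volume.
model = SecondLevelModel(smoothing_fwhm=6).fit(maps, design_matrix=design)
age_zmap = model.compute_contrast("age", output_type="z_score")
age_zmap.to_filename("age_effect_MT_zmap.nii.gz")
```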

 

Effect of aging on MT

You can see that there is increased MT throughout the brain, particularly in the amygdala, postcentral gyrus, thalamus, and other midbrain and prefrontal areas. MT (roughly) reflects the macromolecular and water content of the brain, and is mostly sensitive to myelination and to macromolecules such as those in microglia and astrocytes. Interestingly, our findings here contrast with those of Callaghan et al (2014), who found decreases in myelination where we find increases. This is probably explained by differences in our samples.

 

Effect of aging on R1

R1 shows much more restricted effects, with increased R1 only in the left postcentral gyrus, at least in this sample. This is in contrast to Callaghan et al [2], who found extensive negative MT & R1 effects, but that was in a much larger sample and with a much wider age range (19–75, mean = 45). Interestingly, Martina and colleagues actually reported widespread decreases in R1, whereas we find no decreases and instead slight increases in both MT and R1. This may imply a U-shaped response of myelin to aging, which would fit with previous structural studies.

Our iron-sensitive map (R2*) does, however, somewhat reproduce their effects, with significant increases in the hippocampus, posterior cingulate, caudate, and other dopamine-rich midbrain structures:

 

Effect of aging on R2*

Wow! What really strikes me about this is that we can find age-related increases in a very young sample of mostly UCL students; iron is already accumulating in the 18–39 age range. For comparison, here are the key findings from Martina's paper:

 

From Callaghan et al, 2014. Increasing iron in green, decreasing myelin in red.

 

The age effects in the left hippocampus are particularly interesting, as we found that iron and myelination in this area related to these participants' metacognitive ability while controlling for age. Could this early-life iron accumulation be a predictive biomarker for the risk of developing neurodegenerative disease later in life? I think so. Large-sample prospective imaging could really open up this question; does anyone know if UK Biobank will collect this kind of data? UK Biobank will eventually contain ~200k scans with full medical work-ups and follow-ups. In a discussion with Karla Miller on Facebook she mentioned there may be some low-resolution R2* images in that data. It could really be a big step forward to ask whether the first time-point predicts clinical outcome; ultimately early-life iron accumulation could be a key biomarker for neurodegeneration.

 


References

  1. Gogtay, N. & Thompson, P. M. Mapping gray matter development: implications for typical development and vulnerability to psychopathology. Brain Cogn. 72, 6–15 (2010).
  2. Callaghan, M. F. et al. Widespread age-related differences in the human brain microstructure revealed by quantitative magnetic resonance imaging. Neurobiol. Aging 35, 1862–1872 (2014).
  3. Sala-Llonch, R., Bartrés-Faz, D. & Junqué, C. Reorganization of brain networks in aging: a review of functional connectivity studies. Front. Psychol. 6, 663 (2015).
  4. Sugiura, M. Functional neuroimaging of normal aging: Declining brain, adapting brain. Ageing Res. Rev. (2016). doi:10.1016/j.arr.2016.02.006
  5. Weiskopf, N., Mohammadi, S., Lutti, A. & Callaghan, M. F. Advances in MRI-based computational neuroanatomy: from morphometry to in-vivo histology. Curr. Opin. Neurol. 28, 313–322 (2015).
  6. Callaghan, M. F., Helms, G., Lutti, A., Mohammadi, S. & Weiskopf, N. A general linear relaxometry model of R1 using imaging data. Magn. Reson. Med. 73, 1309–1314 (2015).
  7. Mohammadi, S. et al. Whole-Brain In-vivo Measurements of the Axonal G-Ratio in a Group of 37 Healthy Volunteers. Front. Neurosci. 9, (2015).
  8. Carlson, E. S. et al. Iron Is Essential for Neuron Development and Memory Function in Mouse Hippocampus. J. Nutr. 139, 672–679 (2009).
  9. Georgieff, M. K. The role of iron in neurodevelopment: fetal iron deficiency and the developing hippocampus. Biochem. Soc. Trans. 36, 1267–1271 (2008).
  10. Castellani, R. J. et al. Iron: The Redox-active Center of Oxidative Stress in Alzheimer Disease. Neurochem. Res. 32, 1640–1645 (2007).
  11. Bartzokis, G. Alzheimer’s disease as homeostatic responses to age-related myelin breakdown. Neurobiol. Aging 32, 1341–1371 (2011).
  12. Gouw, A. A. et al. Heterogeneity of white matter hyperintensities in Alzheimer’s disease: post-mortem quantitative MRI and neuropathology. Brain 131, 3286–3298 (2008).
  13. Bartzokis, G. et al. MRI evaluation of brain iron in earlier- and later-onset Parkinson’s disease and normal subjects. Magn. Reson. Imaging 17, 213–222 (1999).
  14. Berg, D. et al. Brain iron pathways and their relevance to Parkinson’s disease. J. Neurochem. 79, 225–236 (2001).
  15. Dexter, D. T. et al. Increased Nigral Iron Content and Alterations in Other Metal Ions Occurring in Brain in Parkinson’s Disease. J. Neurochem. 52, 1830–1836 (1989).
  16. Jellinger, P. D. K., Paulus, W., Grundke-Iqbal, I., Riederer, P. & Youdim, M. B. H. Brain iron and ferritin in Parkinson’s and Alzheimer’s diseases. J. Neural Transm. – Park. Dis. Dement. Sect. 2, 327–340 (1990).

 


A Needle in the Connectome: Neural ‘Fingerprint’ Identifies Individuals with ~93% accuracy

Much like we picture ourselves, we tend to assume that each individual brain is a bit of a unique snowflake. When running a brain imaging experiment it is common for participants or students to excitedly ask what can be revealed specifically about them given their data. Usually, we have to give a disappointing answer – not all that much, as neuroscientists typically throw this information away to get at average activation profiles set in ‘standard’ space. Now a new study published today in Nature Neuroscience suggests that our brains do indeed contain a kind of person-specific fingerprint, hidden within the functional connectome. Perhaps even more interesting, the study suggests that particular neural networks (e.g. frontoparietal and default mode) contribute the greatest amount of unique information to your ‘neuro-profile’ and also predict individual differences in fluid intelligence.

To do so, lead author Emily Finn and colleagues at Yale University analysed repeated sets of functional magnetic resonance imaging (fMRI) data from 126 subjects over 6 different sessions (2 rest, 4 task), derived from the Human Connectome Project. After dividing each participant's brain data into 268 nodes (a technique known as "parcellation"), Emily and colleagues constructed matrices of the pairwise correlations between all nodes. These correlation matrices (below, figure 1b), which encode the connectome or connectivity map for each participant, were then used in a permutation-based decoding procedure to determine how accurately a participant's connectivity pattern could be identified from the rest. This involved taking a vector of edge values (connection strengths) from a participant in the training set and correlating it with a similar vector sampled randomly with replacement from the test set (i.e. testing whether one participant's data correlated with another's). Pairs with the highest correlation were then labelled "1" to indicate that the algorithm assigned a matching identity to a particular train-test pair. The results of this process were then compared to a similar procedure in which both pairs and subject identity were randomly permuted.

Finn et al's method for identifying subjects from their connectomes.
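
To make the identification step concrete, here is a minimal sketch of the general logic. This is not Finn et al's actual code, it skips their permutation testing, and the toy data are made up; it is written for clarity rather than speed.

```python
import numpy as np

def identify(database, target):
    """Connectome fingerprinting by correlation.

    database, target: arrays of shape (n_subjects, n_edges), where each row is
    the vectorized upper triangle of one subject's 268x268 connectivity matrix,
    taken from two different sessions (e.g. Rest1 and Rest2). Returns the
    fraction of target subjects whose most-correlated database entry is their
    own data.
    """
    n_subjects = target.shape[0]
    correct = 0
    for i in range(n_subjects):
        # Correlate subject i's target edge vector with every database edge vector
        r = [np.corrcoef(target[i], database[j])[0, 1] for j in range(n_subjects)]
        if np.argmax(r) == i:          # predicted identity = highest correlation
            correct += 1
    return correct / n_subjects

# Toy usage with random data (real edge vectors would come from parcellated fMRI):
rng = np.random.default_rng(0)
rest1 = rng.standard_normal((126, 35778))
rest2 = rest1 + 0.5 * rng.standard_normal((126, 35778))   # noisy repeat of session 1
print(identify(rest1, rest2))
```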

At first glance, the results are impressive:

Identification was performed using the whole-brain connectivity matrix (268 nodes; 35,778 edges), with no a priori network definitions. The success rate was 117/126 (92.9%) and 119/126 (94.4%) based on a target-database of Rest1-Rest2 and the reverse Rest2-Rest1, respectively. The success rate ranged from 68/126 (54.0%) to 110/126 (87.3%) with other database and target pairs, including rest-to-task and task-to-task comparisons.

This is a striking result – not only could identity be decoded from one resting state scan to another, but the identification also worked when going from rest to a variety of tasks and vice versa. Although classification accuracy dropped when moving between different tasks, these results were still highly significant when compared to the random shuffle, which only achieved a 5% success rate. Overall this suggests that inter-individual patterns in connectivity are highly reproducible regardless of the context from which they are obtained.

The authors then go on to perform a variety of crucial control analyses. For example, one immediate worry is that the high identification accuracy might be driven by head motion, which strongly influences functional connectivity and is likely to show strong within-subject correlation. Another concern might be that the accuracy is driven primarily by anatomical rather than functional features. The authors test both of these alternative hypotheses, first by applying the same decoding approach to an expanded set of root-mean-square motion parameters, and second by testing whether classification accuracy decreased as the data were increasingly smoothed (which should eliminate or reduce the contribution of anatomical features). Here the results were also encouraging: motion was totally unable to predict identity, resulting in less than 5% accuracy, and classification accuracy remained essentially the same across smoothing kernels. The authors further compared the contribution of their parcellation scheme with the more common, coarse-grained Yeo 8-network solution. This revealed that the coarser network division seemed to decrease accuracy, particularly for the fronto-parietal network, a decrease that was seemingly driven by increased reliability of the diagonal elements of the inter-subject matrix (which encode the intra-subject correlation). The authors suggest this may reflect the need for higher spatial precision to delineate individual patterns of fronto-parietal connectivity. Although this interpretation seems sensible, I do have to wonder if it conflicts with their smoothing-based control analysis. The authors also looked at how well they could identify an individual based on the variability of the BOLD signal in each region and found that although this was also significant, it showed a systematic decrease in accuracy compared to the connectomic approach. This suggests that although at least some of what makes an individual unique can be found in activity alone, connectivity data are needed for a more complete fingerprint. In a final control analysis (figure 2c below), training simultaneously on multiple data sets (for example a resting state and a task, to control for inherent differences in signal length) further increased accuracy, to as high as 100% in some cases.

Finn et al; networks showing most and least individuality and contributing factors. Interesting to note that sensory areas are highly common across subjects whereas fronto-parietal and mid-line show the greatest individuality!

Having established the robustness of their connectome fingerprints, Finn and colleagues then examined how much each individual cortical node contributed to identification accuracy. This analysis revealed a particularly interesting result: fronto-parietal and midline ('default mode') networks showed the highest contribution (above, figure 2a), whereas sensory areas appeared to contribute hardly at all. This complements their finding that the coarser-grained Yeo parcellation greatly reduced the contribution of these networks to classification accuracy. Further still, Finn and colleagues linked the contributions of these networks to behaviour, examining how strongly each network fingerprint predicted an overall index of fluid intelligence (g-factor). Again they found that fronto-parietal and default mode nodes were the most predictive of inter-individual differences in behaviour (in opposite directions, although I'd hesitate to interpret the sign of this finding given the global signal regression).

So what does this all mean? For starters this is a powerful demonstration of the rich individual information that can be gleaned from combining connectome analyses with high-volume data collection. The authors not only showed that resting state networks are highly stable and individual within subjects, but that these signatures can be used to delineate the way the brain responds to tasks and even behaviour. Not only is the study well powered, but the authors clearly worked hard to generalize their results across a variety of datasets while controlling for quite a few important confounds. While previous studies have reported similar findings in structural and functional data, I'm not aware of any this generalisable or specific. The task-rest signature alone confirms that both measures reflect a common neural architecture, an important finding. I could be a little concerned about vascular or breathing-related confounds; the authors do remove such nuisance variables though, so this may not be a serious concern (though I am not convinced their use of global signal regression to control these variables is adequate). These minor concerns notwithstanding, I found the network-specific results particularly interesting; although previous studies indicate that functional and structural heterogeneity greatly increases along the fronto-parietal axis, this study is the first demonstration to my knowledge of the extremely high predictive power embedded within those differences. It is interesting to wonder how much of this stability is important for the higher-order functions supported by these networks – indeed it seems intuitive that self-awareness, social cognition, and cognitive control depend upon acquired experiences that are highly individual. The authors conclude by suggesting that future studies may evaluate classification accuracy within an individual over many time points, raising the interesting question: can you identify who I am tomorrow by how my brain connects today? Or am I "here today, gone tomorrow"?

Only time (and connectomics) may tell…


 

edit:

thanks to Kate Mills for pointing out this interesting PLOS ONE paper from a year ago (cited by Finn et al), that used similar methods and also found high classification accuracy, albeit with a smaller sample and fewer controls:

http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0111048

 

edit2:

It seems there was a slight mistake in my understanding of the methods – see this useful comment by lead author Emily Finn for clarification:

https://neuroconscience.com/2015/10/12/a-needle-in-the-connectome-neural-fingerprint-identifies-individuals-with-93-accuracy/#comment-36506


corrections? comments? want to yell at me for being dumb? Let me know in the comments or on twitter @neuroconscience!


Are we watching a paradigm shift? 7 hot trends in cognitive neuroscience according to me


In the spirit of procrastination, here is a random list I made up of things that seem to be trending in cognitive neuroscience right now, with a quick description of each. These are purely pulled from the depths of speculation, so please do feel free to disagree. Most of these are not actually new concepts; it's more the way they are being used that makes them trendy areas.


7 hot trends in cognitive neuroscience according to me

Oscillations

Obviously oscillations have been around for a long time, but the rapid increase in technological sophistication of direct recordings (see for example high-density cortical arrays and deep brain stimulation plus recording), coupled with the greater availability of MEG (plus rapid advances in MEG source reconstruction and analysis techniques), has placed large-scale neural oscillations at the forefront of cognitive neuroscience. Understanding how different frequency bands interact (e.g. phase coupling) has become a core topic of research in areas ranging from conscious awareness to memory and navigation.

Complex systems, dynamics, and emergence

Again, a concept as old as neuroscience itself, but this one seems to be piggy-backing on several other trends towards a new resurgence. As neuroscience grows bored of blobology, and our analysis methods move increasingly towards modelling dynamical interactions (see above) and complex networks, our explanatory metaphors more frequently emphasize brain dynamics and emergent causation. This is a clear departure from the boxological approach that was so prevalent in the 80s and 90s.

Direct intervention and causal inference

Pseudo-invasive techniques like transcranial direct-current stimulation are on the rise, partially because they allow us to perform virtual lesion studies in ways not previously possible. Likewise, exponential growth of neurobiological and genetic techniques has ushered in the era of optogenetics, which allows direct manipulation of information processing at a single neuron level. Might this trend also reflect increased dissatisfaction with the correlational approaches that defined the last decade? You could also include steadily increasing interest in pharmacological neuroimaging under this category.

Computational modelling and reinforcement learning

With the hype surrounding Google's £200 million acquisition of DeepMind, and the recent Nobel Prize awarded for the discovery of grid cells, computational approaches to neuroscience are hotter than ever. Hardly a day goes by without a reinforcement learning or similar paper being published in a glossy high-impact journal. This one takes many forms, but it is undeniable that model-based approaches to cognitive neuroscience are all the rage. There is also a clear surge of interest in the Bayesian brain approach, which could almost have its own bullet point. But that would be too self-serving ;)

Gain control

Gain control is a very basic mechanism found throughout the central nervous system. It can be understood as the neuromodulatory weighting of post-synaptic excitability, and is thought to play a critical role in contextualizing neural processing. Gain control might for example allow a neuron that usually encodes a positive prediction error to ‘flip’ its sign to encode negative prediction error under a certain context. Gain is thought to be regulated via the global interaction of neural modulators (e.g. dopamine, acetylcholine) and links basic information theoretic processes with neurobiology. This makes it a particularly desirable tool for understanding everything from perceptual decision making to basic learning and the stabilization of oscillatory dynamics. Gain control thus links computational, biological, and systems level work and is likely to continue to attract a lot of attention in the near future.
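
As a toy illustration (not a biophysical model), here is how a simple change in pathway gains can flip the sign of what a rate unit effectively encodes; the pathway structure and gain values are invented for the example.

```python
import numpy as np

def unit_response(prediction_error, g_excitatory, g_inhibitory):
    # Toy rate unit receiving the same prediction-error signal through an
    # excitatory and an inhibitory pathway; neuromodulatory gain re-weights them.
    drive = g_excitatory * prediction_error - g_inhibitory * prediction_error
    return np.maximum(drive, 0.0)   # rectified firing rate

pe = np.linspace(-1, 1, 5)          # signed prediction errors

# Context A: excitation dominates, so the unit fires for positive prediction errors
print(unit_response(pe, g_excitatory=1.0, g_inhibitory=0.2))

# Context B: gain shifts toward inhibition, and the same unit now fires for
# negative prediction errors, i.e. its coding has effectively 'flipped' sign
print(unit_response(pe, g_excitatory=0.2, g_inhibitory=1.0))
```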

Hierarchies that are not really hierarchies

Neuroscience loves its hierarchies. For example, the Van Essen model of how visual feature detection proceeds through a hierarchy of increasingly abstract functional processes is one of the core explanatory tools used to understand vision in the brain. Currently however there is a great deal of connectomic and functional work pointing out interesting ways in which global or feedback connections can re-route and modulate processes from the ‘top’ directly to the ‘bottom’ or vice versa. It’s worth noting this trend doesn’t do away with the old notions of hierarchies, but instead just renders them a bit more complex and circular. Put another way, it is currently quite trendy to show ‘the top is the bottom’ and ‘the bottom is the top’. This partially relates to the increased emphasis on emergence and complexity discussed above. A related trend is extension of what counts as the ‘bottom’, with low-level subcortical or even first order peripheral neurons suddenly being ascribed complex abilities typically reserved for cortical processes.

Primary sensations that are not so primary

Closely related to the previous point, there is a clear trend in the perceptual sciences of being increasingly liberal about how ‘primary’ sensory areas really are. I saw this first hand at last year’s Vision Sciences Society which featured at least a dozen posters showing how one could decode tactile shape from V1, or visual frequency from A1, and so on. Again this is probably related to the overall movement towards complexity and connectionism; as we lose our reliance on modularity, we’re suddenly open to a much more general role for core sensory areas.


Interestingly I didn't include things like multi-modal or high-resolution imaging, as I think they are still emerging and have not quite fully arrived yet. But some of these – computational and connectomic modelling for example – are clearly part and parcel of the contemporary zeitgeist. It's also very interesting to look over this list, as there seems to be a clear trend towards complexity, connectionism, and dynamics. Are we witnessing a paradigm shift in the making? Or have we just forgotten all our first principles and started mangling any old thing we can get published? If it is a shift, what should we call it? Something like 'computational connectionism' comes to mind. Please feel free to add points or discuss in the comments!

Top 200 terms in cognitive neuroscience according to neurosynth

Tonight I was playing around with some of the top features in Neurosynth (the searchable terms with the highest number of studies containing that term). You can find the list here; just sort by the number of studies. I excluded the top three terms ("image", "response", and "time"), which are boring and whose extremely high weights would mess up the wordle. I then created a word cloud weighted so that the size of each term reflects its number of studies.
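
For anyone who wants to build something similar, here is a rough sketch; the CSV export of the Neurosynth features list is hypothetical, but the wordcloud calls are standard.

```python
import pandas as pd
from wordcloud import WordCloud

# Hypothetical export of the Neurosynth features list: columns 'term' and 'num_studies'
features = pd.read_csv("neurosynth_features.csv")
features = features.sort_values("num_studies", ascending=False)

# Drop the three uninformative top terms and keep the next 200
features = features.iloc[3:203]
weights = dict(zip(features["term"], features["num_studies"]))

# Size each term by the number of studies that mention it
cloud = WordCloud(width=1200, height=800, background_color="white")
cloud.generate_from_frequencies(weights)
cloud.to_file("neurosynth_top200.png")
```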

Here are the top 200 terms, sized according to the number of times they are reported in Neurosynth's 5,809 indexed fMRI studies:

[Word cloud of the top 200 Neurosynth terms]

Pretty neat! These are the 200 terms the Neurosynth database has the most information on, and together they give a pretty good overview of key concepts and topics in our field! I am sure there is something useful for everyone in there 😀

Direct link to the wordle:

Wordle: neurosynth

Twitter Follow-up: Can MVPA Invalidate Simulation Theory?

Thanks to the wonders of social media, while I was out grocery shopping I received several interesting and useful responses to my previous post on the relationship between multivariate pattern analysis and simulation theory. Rather than try to fit my responses into 140 characters, I figured I'd take a bit more space here to hash them out. I think the idea is really enhanced by these responses, which point to several findings and features of which I was not aware. The short answer seems to be: no, MVPA does not invalidate simulation theory (ST), and may even provide evidence for it in the realm of motor intentions, but we might be able to point towards a better standard of evidence for more exploratory applications of ST (e.g. empathy-for-pain). An important point to come out of these responses, as one might expect, is that the interpretation of these methodologies is not always straightforward.

I’ll start with Antonia Hamilton’s question, as it points to a bit of literature that speaks directly to the issue:

[Embedded tweet: Antonia Hamilton's question]

Antonia is referring to this paper by Oosterhof and colleagues, where they directly compare passive viewing and active performance of the same paradigm using decoding techniques. I don’t read nearly as much social cognition literature as I used to, and wasn’t previously aware of this paper. It’s really a fascinating project and I suggest anyone interested in this issue read it at once (it’s open access, yay!). In the introduction the authors point out that spatial overlap alone cannot demonstrate equivalent mechanisms for viewing and performing the same action:

Numerous functional neuroimaging studies have identified brain regions that are active during both the observation and the execution of actions (e.g., Etzel et al. 2008; Iacoboni et al. 1999). Although these studies show spatial overlap of frontal and parietal activations elicited by action observation and execution, they do not demonstrate representational overlap between visual and motor action representations. That is, spatially overlapping activations could reflect different neural populations in the same broad brain regions (Gazzola and Keysers 2009; Morrison and Downing 2007; Peelen and Downing 2007b). Spatial overlap of activations per se cannot establish whether the patterns of neural response are similar for a given action (whether it is seen or performed) but different for different actions, an essential property of the “mirror system” hypothesis.”

They then go on to explain that while MVPA could conceivably demonstrate a simulation-like mechanism (i.e. a common neural representation for viewing/doing), several previous papers attempting to show just that failed to do so. The authors suggest that this may be due to a variety of methodological limitations, which they set out to correct in their Journal of Neurophysiology publication. Oosterhof et al show that clusters of voxels located primarily in the intraparietal and superior temporal sulci encode cross-modal information, that is, encode similar information both when viewing and doing:

From Oosterhof et al, showing combined classification accuracy for (train see, test do; train do, test see).

Essentially, Oosterhof et al trained their classifier on one modality (see or do), tested the classifier on the opposite modality in another session, and then repeated this procedure for all possible combinations of session and modality (while appropriately correcting for multiple comparisons). The map above represents the combined classification accuracy from both train-test combinations; interestingly, in the supplementary info they show that the maps differ slightly depending on what was trained:

From supplementary info, A shows classifier trained on see, tested on do, B shows the opposite.
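
For intuition, the train-on-one-modality, test-on-the-other logic looks something like the sketch below. This is not Oosterhof et al's actual analysis pipeline, just the core cross-decoding step with made-up arrays and labels.

```python
import numpy as np
from sklearn.svm import LinearSVC

def cross_modal_accuracy(X_see, y_see, X_do, y_do):
    """Train a classifier on one modality and test on the other, in both directions.

    X_*: (n_trials, n_voxels) patterns from an ROI; y_*: action labels per trial.
    Above-chance accuracy in both directions implies the ROI carries
    information common to observing and performing the actions.
    """
    clf = LinearSVC()
    acc_see_to_do = clf.fit(X_see, y_see).score(X_do, y_do)
    acc_do_to_see = clf.fit(X_do, y_do).score(X_see, y_see)
    return (acc_see_to_do + acc_do_to_see) / 2.0

# Toy usage with random patterns for two actions (labels 0/1):
rng = np.random.default_rng(1)
X_see, X_do = rng.standard_normal((40, 500)), rng.standard_normal((40, 500))
y = np.repeat([0, 1], 20)
print(cross_modal_accuracy(X_see, y, X_do, y))   # ~0.5 (chance) for random data
```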

Oosterhof and colleagues also investigate the specificity of information for particular gestures in a second experiment, but for our purposes let's focus on just the first. My first thought is that this does actually provide some evidence for a simulation theory of understanding motor intentions. Clearly there is enough information in each modality to accurately decode the opposite modality: there are populations of neurons encoding similar information both for action execution and perception. Realistically, I think this has to be the minimal burden of proof needed to consider an imaging finding evidence for simulation theory. So the results of Oosterhof et al do provide supporting evidence for simulation theory in the domain of motor intentions.

Nonetheless, the results also strengthen the argument that more exploratory extensions of ST (like empathy-for-pain) must be held to a similar burden of proof before generalization in these domains is supported. Simply showing spatial overlap is not evidence of simulation, as Oosterhof et al themselves argue. I think it is interesting to note the slight spatial divergence between the two train-test maps (see on do, do on see). While we can obviously identify voxels encoding cross-modality information, it is interesting that those voxels do not subsume the entirety of whatever neural computation relates these two modalities; each has something unique to predict in the other. I don't think that observation invalidates simulation theory, but it might suggest an interesting mechanism not specified in the 'vanilla' flavor of ST. To be extra boring, it would be really nice to see an independent replication of this finding since, as Oosterhof et al themselves point out, the evidence for cross-modal information is inconsistent across studies. Even though the classifier performs well above chance in this study, it is also worth noting that the majority of surviving voxels in their study show somewhere around 40-50% classification accuracy, not exactly gangbusters. It would be interesting to see if they could identify voxels within these regions that selectively encode only viewing or performing; this might be evidence for a hybrid-theory account of motor intentions.

[Embedded tweet: Leonhard's question]

Leonhard's question is an interesting one that I don't have a ready response to. As I understand it, the idea is that demonstrating no difference in patterns between a self- and an other-related condition (e.g. performing an action vs watching someone else do it) might actually be an argument for simulation, since this could be caused by that region using isomorphic computations for both conditions. This is an interesting point; I'm not sure what the status of null findings is in the decoding literature, but it merits further thought.

The next two came from James Kilner and Tal Yarkoni. I've put them together as I think they fall under a more methodological class of questions/comments, and I don't feel quite experienced enough to answer them, but I'd love to hear from someone with more experience in multivariate/multivoxel techniques:

[Embedded tweet: James Kilner's question]

[Embedded tweet: Tal Yarkoni's comment]

James Kilner asks about the performance of MVPA in the case where the pattern might be spatially overlapping but not identical for two conditions. This is an interesting question and I'm not sure I know the correct answer; my intuition is that you could accurately discriminate both conditions using the same voxels, and that this would be strong evidence against a simple simulation theory account (spatial overlap but representational heterogeneity).

Here is more precise answer to James’ question from Sam Schwarzkopf, posted in the comments of the original post:

2. The multivariate aspect obviously adds sensitivity by looking at pattern information, or generally any information of more than one variable (e.g. voxels in a region). As such it is more sensitive to the information content in a region than just looking at the average response from that region. Such an approach can reveal that region A contains some diagnostic information about an experimental variable while region B does not, even though they both show the same mean activation. This is certainly useful knowledge that can help us advance our understanding of the brain – but in the end it is still only one small piece in the puzzle. And as both Tal and James pointed out (in their own ways) and as you discussed as well, you can’t really tell what the diagnostic information actually represents.
Conversely, you can’t be sure that just because MVPA does not pick up diagnostic information from a region that it therefore doesn’t contain any information about the variable of interest. MVPA can only work as long as there is a pattern of information within the features you used.

This last point is most relevant to James’ comment. Say you are using voxels as features to decode some experimental variable. If all the neurons with different tuning characteristics in an area are completely intermingled (like orientation-preference in mouse visual cortex for instance) you should not really see any decoding – even if the neurons in that area are demonstrably selective to the experimental variable.

In general it is clear that the interpretation of decoded patterns is not straightforward: it isn't clear precisely what information they reflect, and it seems that if a region contained a totally heterogeneous population of neurons you wouldn't pick up any decoding at all. With respect to ST, I don't know if this completely invalidates our ability to test predictions; I don't think one would expect such radical heterogeneity in a region like STS, but rather a few sub-populations responding selectively to self and other, which MVPA might be able to reveal. It's an important point to consider though.

Tal's point is an important one regarding the different sources of information that GLM and MVPA techniques pick up. The paper he refers to, by Jimura and Poldrack, set out to investigate exactly this by comparing the spatial conjunction and divergent sensitivity of each method. Importantly, they subtracted the mean of each beta-coefficient from the multivariate analysis to ensure that the analysis contained only information not in the GLM:

[Figure from Jimura and Poldrack comparing GLM and MVPA results]
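
Schematically, the mean-removal step described above amounts to something like this; a toy sketch with random numbers, not Jimura and Poldrack's actual code.

```python
import numpy as np

# betas: (n_trials, n_voxels) beta estimates within a searchlight or ROI.
rng = np.random.default_rng(2)
betas = rng.standard_normal((60, 200)) + 2.0     # shared mean response across voxels

# Subtract each trial's mean across voxels, removing the univariate (GLM-like)
# signal so that any remaining decodable information lives in the spatial pattern.
pattern_only = betas - betas.mean(axis=1, keepdims=True)
print(pattern_only.mean(axis=1)[:3])             # ~0 for every trial
```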

As you can see in the above, Jimura and Poldrack show that MVPA picks up a large number of voxels not found in the GLM analysis. Their interpretation is that the GLM is designed to pick up regions responding globally, or in most cases, to stimulation, whereas MVPA likely picks up globally distributed responses that show variance in their response. This is a bit like the difference between functional integration and localization; both are complementary to the understanding of some cognitive function. I take Tal's point to be that MVPA and GLM are sensitive to different sources of information, and that this blurs the ability of the technique to evaluate simulation theory: you might observe differences between the two that would resemble evidence against ST (different information in different areas) when in reality you would be modelling altogether different aspects of cognition. Edit: after more discussion with Tal on Twitter, it's clear that he meant to point out the ambiguity inherent in interpreting the predictive power of MVPA; by their nature these analyses will pick up a lot of confounding, non-causal noise (arousal, reaction time, respiration, etc.) that would be excluded in a GLM analysis. So these are not necessarily, or even likely to be, "direct read-outs" of representations, particularly to the extent that such confounds correlate with the task. See this helpful post by Neuroskeptic for an overview of one recent paper examining this issue. See here for a study investigating the complex neurovascular origins of MVPA for fMRI.

Thanks sincerely for these responses; it's been really interesting and instructive for me to go through these papers and think about their implications. I'm still new to these techniques and it is exciting to gain a deeper appreciation of the subtleties involved in their interpretation. On that note, I must direct you to check out Sam Schwarzkopf's excellent reply to my original post. Sam points out some common misunderstandings (of which I am perhaps guilty of several) regarding the interpretation of MVPA/decoding versus GLM techniques, arguing essentially that they pick up much of the same information and can both be considered 'decoding' in some sense, further muddying their ability to resolve debates like that surrounding simulation theory.

Will multivariate decoding spell the end of simulation theory?

Decoding techniques such as multivariate pattern analysis (MVPA) are hot stuff in cognitive neuroscience, largely because they offer a tentative promise of actually reading out the underlying computations in a region rather than merely describing data features (e.g. mean activation profiles). While I am quite new to MVPA and similar machine learning techniques (so please excuse any errors in what follows), the basic process has been explained to me as a reversal of the X and Y variables in a typical general linear model. Instead of specifying a design matrix of explanatory (X) variables and testing how well those predict a single dependent (Y) variable (e.g. the BOLD timeseries in each voxel), you try to estimate an explanatory variable (essentially decoding the 'design matrix' that produced the observed data) from many Y variables, for example one Y variable per voxel (hence the multivariate part). The decoded explanatory variable then describes (BOLD) responses in a way that can vary in space, rather than reflecting an overall data feature across a set of voxels such as a mean or slope. Typically decoding analyses proceed in two steps: one in which you train the classifier on some set of voxels, and another where you see how well that trained model can classify patterns of activity in another scan or task. It is precisely this ability to detect patterns in subtle spatial variations that makes MVPA an attractive technique; the GLM simply doesn't account for such variation.
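
Here is a schematic of that X/Y reversal with made-up arrays standing in for real fMRI data; the particular estimators are illustrative choices, not a recommendation.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(3)
n_trials, n_voxels = 100, 50
condition = rng.integers(0, 2, n_trials)               # X: the experimental variable
voxels = rng.standard_normal((n_trials, n_voxels))     # Y: one column per voxel
voxels[:, :10] += 0.8 * condition[:, None]             # a subset of voxels carry signal

# GLM direction: regress each voxel's response (Y) on the design variable (X),
# one univariate model per voxel.
glm_betas = [LinearRegression().fit(condition.reshape(-1, 1), voxels[:, v]).coef_[0]
             for v in range(n_voxels)]

# Decoding direction: predict the design variable (X) from the whole voxel pattern (Y),
# training on the first half of trials and testing on the held-out second half.
clf = LogisticRegression(max_iter=1000).fit(voxels[:50], condition[:50])
print("decoding accuracy:", clf.score(voxels[50:], condition[50:]))
```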

The implicit assumption here is that by modeling subtle spatial variations across a set of voxels, you can actually pick up the neural correlates of the underlying computation or representation (Weil and Rees, 2010; Poldrack, 2011). To illustrate the difference between an MVPA and a GLM analysis, imagine a classical fMRI experiment where we have some set of voxels defining a region with a significant mean response to the experimental manipulation. All the GLM can tell us is that in each voxel the mean response is significantly different from zero. Each voxel within the significant region is likely to vary slightly in its actual response (you might imagine all sorts of subtle intensity variations within a significant region), but the GLM essentially ignores this variation. The exciting assumption driving interest in decoding is that this variability might actually reflect the activity of sub-populations of neurons and, by extension, actual neural representations. MVPA and similar techniques are designed to pick out when these variations reflect a coherent pattern; once identified, this pattern can be used to "predict" when the subject was seeing one or another particular stimulus. While it isn't entirely straightforward to interpret the patterns MVPA picks out as actual 'neural representations', there is some evidence that the decoded models reflect a finer granularity of neural sub-populations than is represented in overall mean activation profiles (Todd, 2013; Thompson, 2011).

Professor Xavier applies his innate talent for MVPA.

As you might imagine this is terribly exciting, as it presents the possibility to actually ‘read out’ the online function of some brain area rather than merely describing its overall activity. Since the inception of brain scanning this has been exactly the (largely failed) promise of imaging- reverse inference from neural data to actual cognitive/perceptual contents. It is understandable then that decoding papers are the ones most likely to appear in high impact journals- just recently we’ve seen MVPA applied to dream states, reconstruction of visual experience, and pain experience, all in top journals (Kay et al., 2008; Horikawa et al., 2013; Wager et al., 2013). I’d like to focus on that last one for the remainder of this post, as I think we might draw some wide-reaching conclusions for theoretical neuroscience as a whole from Wager et al’s findings.

Francesca and I were discussing the paper this morning- she’s working on a commentary for a theoretical paper concerning the role of the “pain matrix” in empathy-for-pain research. For those of you not familiar with this area, the idea is a basic simulation-theory argument-from-isomorphism. Simulation theory (ST) is just the (in)famous idea that we use our own motor system (e.g. mirror neurons) to understand the gestures of others. In a now famous experiment Rizzolatti et al showed that motor neurons in the macaque monkey responded equally to the monkey’s own gestures and to the gestures of an observed other (Rizzolatti and Craighero, 2004). They argued that this structural isomorphism might represent a general neural mechanism, such that social-cognitive functions can be accomplished by simply applying our own neural apparatus to work out what is going on for the external entity. With respect to phenomena such as empathy for pain and ‘social pain’ (e.g. viewing a picture of someone you broke up with recently), this idea has been extended to suggest that, since a network of regions known as “the pain matrix” activates similarly when we are in pain and when we experience ‘social pain’, we “really feel” pain during these states (Kross et al., 2011) [1].

In her upcoming commentary, Francesca points out an interesting finding in the paper by Wager and colleagues that I had overlooked. Wager et al apply a decoding technique in subjects undergoing painful and non-painful stimulation. Quite impressively, they are then able to show that the decoded model predicts pain intensity across different scanners and various experimental manipulations. However they note that the model does not accurately predict subjects’ ‘social pain’ intensity, even though the subjects did activate a similar network of regions in both the physical and social pain tasks (see image below). One conclusion from these findings is that it is surely premature to conclude that, because a group of subjects may activate the same regions during two related tasks, those isomorphic activations actually represent identical neural computations [2]. In other words, arguments from structural isomorphism like ST don’t provide any actual evidence for the mechanisms they presuppose.

Figure from Wager et al demonstrating specificity of the classifier for pain vs warmth and pain vs rejection. Note the poor receiver operating characteristic (ROC) curve for ‘social pain’ (rejecter vs friend), although that contrast picks out similar regions of the ‘pain matrix’.
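A toy version of that cross-decoding logic (synthetic data only- this is not the actual Wager et al pipeline): train a linear classifier to separate ‘pain’ from ‘warmth’ patterns, then ask how well the very same model separates ‘rejecter’ from ‘friend’ trials. The ROC area is the statistic the figure reports; if social pain were computed by the same pattern, the cross-domain AUC should stay high.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n_trials, n_voxels = 80, 60

# Hypothetical 'physical pain' signature: pain trials express one spatial pattern.
pain_pattern = rng.standard_normal(n_voxels)
y_pain = np.repeat([1, 0], n_trials // 2)                   # 1 = pain, 0 = warmth
X_pain = y_pain[:, None] * pain_pattern + rng.standard_normal((n_trials, n_voxels))

# 'Social pain' trials engage the same voxels, but via an unrelated pattern.
social_pattern = rng.standard_normal(n_voxels)
y_social = np.repeat([1, 0], n_trials // 2)                 # 1 = rejecter, 0 = friend
X_social = y_social[:, None] * social_pattern + rng.standard_normal((n_trials, n_voxels))

clf = LogisticRegression(max_iter=1000).fit(X_pain, y_pain)

# Within-domain discrimination (evaluated on training data here, so optimistic)
# versus cross-domain transfer to the social task.
auc_within = roc_auc_score(y_pain, clf.decision_function(X_pain))
auc_cross = roc_auc_score(y_social, clf.decision_function(X_social))
print(f"pain vs warmth AUC: {auc_within:.2f}")              # typically high
print(f"rejecter vs friend AUC: {auc_cross:.2f}")           # typically near 0.5 (chance)
```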

To me this is exactly the right conclusion to take from Wager et al and similar decoding papers. To the extent that the assumption holds that MVPA identifies patterns corresponding to actual neural representations, we are rapidly coming to realize that a mere mean activation profile tells us relatively little about the underlying neural computations [3]. It certainly does not tell us enough to conclude much of anything on the basis that a group of subjects activate “the same brain region” for two different tasks. It is possible and even likely that, although I activate my motor cortex when watching you move, I’m doing something quite different with those neurons than when I actually move about. And perhaps this was always the problem with simulation theory- it tries to make the leap from description (“similar brain regions activate for X and Y”) to mechanism, without actually describing a mechanism at all. I guess you could argue that this is really just a much fancier argument against reverse inference and that we don’t need MVPA to do away with simulation theory. I’m not so sure however- ST remains a strong force in a variety of domains. If decoding can actually do away with ST and arguments from isomorphism or, better still, provide a reasonable mechanism for simulation, it’ll be a great day in neuroscience. One thing is clear- model-based approaches will continue to improve cognitive neuroscience as we go beyond describing what brain regions activate during a task to actually explaining how those regions work together to produce behavior.

I’ve curated some enlightening responses to this post in a follow-up – worth checking for important clarifications and extensions! See also the comments on this post for a detailed explanation of MVPA techniques. 

References

Horikawa T, Tamaki M, Miyawaki Y, Kamitani Y (2013) Neural Decoding of Visual Imagery During Sleep. Science.

Kay KN, Naselaris T, Prenger RJ, Gallant JL (2008) Identifying natural images from human brain activity. Nature 452:352-355.

Kross E, Berman MG, Mischel W, Smith EE, Wager TD (2011) Social rejection shares somatosensory representations with physical pain. Proceedings of the National Academy of Sciences 108:6270-6275.

Poldrack RA (2011) Inferring mental states from neuroimaging data: from reverse inference to large-scale decoding. Neuron 72:692-697.

Rizzolatti G, Craighero L (2004) The mirror-neuron system. Annu Rev Neurosci 27:169-192.

Thompson R, Correia M, Cusack R (2011) Vascular contributions to pattern analysis: Comparing gradient and spin echo fMRI at 3T. Neuroimage 56:643-650.

Todd MT, Nystrom LE, Cohen JD (2013) Confounds in Multivariate Pattern Analysis: Theory and Rule Representation Case Study. NeuroImage.

Wager TD, Atlas LY, Lindquist MA, Roy M, Woo C-W, Kross E (2013) An fMRI-Based Neurologic Signature of Physical Pain. New England Journal of Medicine 368:1388-1397.

Weil RS, Rees G (2010) Decoding the neural correlates of consciousness. Current opinion in neurology 23:649-655.


[1] Interestingly this paper comes from the same group (Wager et al) showing that pain matrix activations do NOT predict ‘social’ pain. It will be interesting to see how they integrate this difference.

[2] Never mind the fact that the ‘pain matrix’ is not specific for pain.

[3] With all appropriate caveats regarding the ability of decoding techniques to resolve actual representations rather than confounding individual differences (Todd et al., 2013) or complex neurovascular couplings (Thompson et al., 2011).

Active-controlled, brief body-scan meditation improves somatic signal discrimination.

Here in the science blog-o-sphere we often like to run to the presses whenever a laughably bad study comes along, pointing out all the incredible feats of ignorance and sloth. However, this can lead to science-sucks cynicism syndrome (a common ailment amongst graduate students), where one begins to feel as if all the literature is rubbish and it just isn’t worth your time to try and do something truly proper and interesting. If you are lucky, it is at just such a moment that a truly excellent paper comes along to pick up your spirits and re-invigorate your work. Today I found myself at one such low point, struggling to figure out why my data suck, when just such a beauty of a paper appeared in my RSS reader.

The paper, “Brief body-scan meditation practice improves somatosensory perceptual decision making”, appeared in this month’s issue of Consciousness and Cognition. Laura Mirams et al set out to answer a very simple question regarding the impact of meditation training (MT) on a “somatic signal detection task” (SSDT). The study is well designed; after randomization, participants received audio CDs containing either 15 minutes of daily body-scan meditation or excerpts from The Lord of the Rings. For the SSDT, participants simply report when they feel a vibration stimulus on the finger, where the baseline vibration intensity is first individually calibrated to a 50% detection rate. The authors then apply a signal-detection analysis framework to estimate sensitivity (d’) and the decision criterion (c).

Mirams et al found that, even when controlling for a host of baseline factors including trait mindfulness and baseline somatic attention, MT led to a greater increase in d’, driven by significantly reduced false alarms. Although many theorists and practitioners of MT suggest a key role for interoceptive and somatic attention in related alterations of health, brain, and behavior, there are almost no data addressing this prediction, making these findings extremely interesting. The idea that MT should impact interoception and somatosensation is very sensible- in most (novice) meditation practices it is common to focus attention on bodily sensations of, for example, the breath entering the nostril. Further, MT involves a particular kind of open, non-judgemental awareness of bodily sensations, and in general is often described to novice students as strengthening the relationship between the mind and sensations of the body. However, most existing studies on MT investigate traditional exteroceptive, top-down elements of attention such as conflict resolution and the ability to maintain attentional fixation for long periods of time.

While MT certainly does involve these features, it is arguable that the interoceptive elements are more specific to the precise mechanisms of interest (they are what you actually train), whereas the attentional benefits may be more of a kind of side effect, reflecting an early emphasis in MT on establishing attention. Thus in a traditional meditation class, you might first learn some techniques to fixate your attention, and then later learn to deploy your attention to specific bodily targets (i.e. the breath) in a particular way (non-judgmentally). The goal is not necessarily to develop a super-human ability to filter distractions, but rather to change the way in which interoceptive responses to the world (i.e. emotional reactions) are perceived and responded to. This hypothesis is well reflected in the elegant study by Mirams et al; they postulate specifically that MT will lead to greater sensitivity (d’), driven by reduced false alarms rather than an increased hit-rate, reflecting a greater ability to discriminate the nature of an interoceptive signal from noise (note: see comments for clarification on this point by Steve Fleming – there is some ambiguity in interpreting the informational role of HR and FA in d’). This hypothesis not only reflects the theoretically specific contribution of MT (beyond attention training, which might be better trained by video games for example), but also postulates a mechanistically specific hypothesis to test this idea, namely that MT leads to a shift specifically in the quality of interoceptive signal processing, rather than raw attentional control.
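For readers not fluent in signal detection theory, here is a minimal sketch of the standard equal-variance computation of d’ and criterion c from hit and false-alarm rates (illustrative numbers only- not the Mirams et al data). Note how dropping the false-alarm rate while the hit rate stays fixed raises d’, but also shifts the criterion- which is exactly the interpretive ambiguity Fleming flags.

```python
from scipy.stats import norm

def sdt(hit_rate, fa_rate):
    """Equal-variance signal detection: sensitivity d' and criterion c."""
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    return z_hit - z_fa, -0.5 * (z_hit + z_fa)

# Made-up pre/post numbers: same hit rate, fewer false alarms after training.
print(sdt(hit_rate=0.60, fa_rate=0.30))   # d' ~ 0.78, c ~ 0.14
print(sdt(hit_rate=0.60, fa_rate=0.15))   # d' ~ 1.29, c ~ 0.39 (more conservative, too)
```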

At this point, you might ask: if everyone is so sure that MT involves training interoception, why is there so little data on the topic? The authors do a great job reviewing findings (even including currently in-press papers) on interoception and MT. Currently there is one major null finding using the canonical heartbeat detection task, where advanced practitioners self-reported improved heartbeat detection but in reality performed at chance. Those authors speculated that the heartbeat task might not accurately reflect the modality of interoception engaged in by practitioners. In addition, a recent study investigated somatic discrimination thresholds in a cross-section of advanced practitioners and found that the ability to make meta-cognitive assessments of one’s threshold sensitivity correlated with years of practice. A third recent study showed greater tactile acuity in practitioners of Tai Chi. One longitudinal study [PDF], a wait-list controlled fMRI investigation by Farb et al, found that a mindfulness-based stress reduction course altered BOLD responses during an attention-to-breath paradigm. Collectively these studies do suggest a role of MT in training interoception. However, as I have complained of endlessly, cross-sections cannot tell us anything about the underlying causality of the observed effects, and longitudinal studies must be active-controlled (not waitlisted) to discern mechanisms of action. Thus active-controlled longitudinal designs are desperately needed, both to determine the causality of a treatment on some observed effect, and to rule out confounds associated with motivation, demand characteristics, and expectation. Without such a design, it is very difficult to conclude anything about the mechanisms of interest in an MT intervention.

In this regard, Mirams et al went above and beyond the call of duty as defined by the average paper. The choice of delivering the intervention via CD is excellent, as we can rule out confounds of instructor enthusiasm or ability. Further, the intervention chosen is extremely simple and well described; it is just a basic body-scan meditation without additional fluff or fanfare, lending the study mechanistic specificity. Both groups were even instructed to close their eyes and sit when listening, balancing these often overlooked structural factors. In this sense, Mirams et al have controlled for instruction, motivation, intervention context, and baseline trait mindfulness, and have even isolated the variable of interest- only the MT group worked with interoception, though both groups sustained attention for a prolonged period. Armed with these controls we can actually say that MT led to an alteration in interoceptive d’, through a mechanism dependent upon the specific kind of interoceptive awareness trained in the intervention.

It is here that I have one minor nit-pick of the paper. Although the use of Lord of the Rings audiotapes has precedent, and is likely a great control for attention and motivation, you could be slightly worried that listening to stories about Elves and Orcs is not an ideal control for hours of tapes instructing you to focus on your bodily sensations, if the measure of interest involves fixating on the body. A purer active control might have been a book describing anatomy or body parts; then we could more conclusively establish that not only is interoception driving the findings, but the particular form of interoceptive attention deployed in meditation training. As it is, a conservative person might speculate that the observed differences reflect demand characteristics- MT participants deploy more attention to the body due to a kind of priming mechanism in the teaching. However this is an extreme nitpick and does not detract from the fact that Mirams and co-authors have made an extremely useful contribution to the literature. In the future it would be interesting to repeat the paradigm with a more body-oriented control, and perhaps also in advanced practitioners before and after an intensive retreat, to see if the effect holds at later stages of training. Of course, given my interest in applying signal-detection theory to interoceptive meta-cognition, I also cannot help but wonder what the authors might have found if they’d applied a Fleming-style meta-d’ analysis to this study.
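For what it’s worth, the simplest (and crudest) way to ask that metacognitive question is not the full meta-d’ model fit, but a type-2 ROC: does trial-by-trial confidence discriminate correct from incorrect responses? A purely hypothetical sketch:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n_trials = 200

# Hypothetical data: whether each response was correct, plus a confidence rating per trial.
correct = rng.integers(0, 2, n_trials)
confidence = correct + 0.8 * rng.standard_normal(n_trials)   # confidence loosely tracks accuracy

# Type-2 ROC area: 0.5 = confidence carries no information about accuracy, 1.0 = perfect insight.
print(f"type-2 AUROC = {roc_auc_score(correct, confidence):.2f}")
```

Meta-d’ proper goes further by expressing that type-2 performance in the same units as d’, so it can be compared directly against first-order sensitivity- that would be the natural extension here.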

All in all, a clear study with tight methods, addressing a desperately under-developed research question, in an elegant fashion. The perfect motivation to return to my own mangled data ☺

Enactive Bayesians? Response to “the brain as an enactive system” by Gallagher et al

Shaun Gallagher has a short new piece out with Hutto, Slaby, and Cole, and I felt compelled to comment on it. Shaun was my first mentor and is to thank for my understanding of what is at stake in a phenomenological cognitive science. I jumped on this piece when it came out because, as I’ve said before, enactivists often leave a lot to be desired when talking about the brain. That is to say, they more often than not leave it out entirely and focus instead on bodies, cultural practices, and other parts of our extra-neural milieu. As a neuroscientist who is enthusiastically sympathetic to the embodied, enactive approach to cognition, I find this worrisome. Which is to say that when I’ve tried to conduct “neurophenomenological” experiments, I often feel a bit left in the rain when it comes time to construct, analyze, and interpret the data.

As an “enactive” neuroscientist, I often find the de-emphasis of brains a bit troubling. For one thing, the radically phenomenological crew tends to make a lot of claims about altering the foundations of neuroscience. Things like information processing and mental representation are said to be stale, Cartesian constructs that lack ontological validity and ought to be replaced. This is fine- I’m totally open to the limitations of our current explanatory framework. However, as I’ve argued here, I believe neuroscience still has great need of these tools and that dynamical systems theory is not ready for prime-time neuroscience. We need a strong positive account of what we should replace them with, and that account needs to act as a practical and theoretical guide to discovery.

One worry I have is that enactivism quickly begins to look like a constructivist version of behaviorism, focusing exclusively on behavior to the exclusion of the brain. Of course I understand that this is a bit unfair; enactivism is about taking a dynamical, encultured, phenomenological view of the human being seriously. Yet I believe that to accomplish this we must also understand the function of the nervous system. While enactivists will often give token credit to the brain- affirming that it is indeed an ‘important part’ of the cognitive apparatus- they seem quick to value things like clothing and social status over gray matter. Call me old fashioned but, you could strip me of my job, titles, and clothing tomorrow and I’d still be capable of 80% of whatever I was before. Granted, my cognitive system would undergo a good deal of strain, but I’d still be fully capable of vision, memory, speech, and even consciousness. The same can’t be said of me if you start magnetically stimulating my brain in interesting and devious ways.

I don’t want to get derailed arguing about the explanatory locus of cognition, as I think one’s stance on the matter largely comes down to whatever your intuition pump tells you is important. We could argue about it all day; what matters more than where in the explanatory hierarchy we place the brain is how that framework lets us predict and explain neural function and behavior. This is where I think enactivism often fails; it’s all fire and bluster (and rightfully so!) when it comes to the philosophical weaknesses of empirical cognitive science, yet it mumbles and missteps when it comes to giving positive advice to scientists. I’m all for throwing out the dogma and getting phenomenological, but only if there’s something useful ready to replace the methodological bathwater.

Gallagher et al’s piece starts:

 “… we see an unresolved tension in their account. Specifically, their questions about how the brain functions during interaction continue to reflect the conservative nature of ‘normal science’ (in the Kuhnian sense), invoking classical computational models, representationalism, localization of function, etc.”

This is quite true, and it is an important tension throughout much of the empirical work done under the heading of enactivism. In my own group we’ve struggled to go from the inspiring war cries of anti-representationalism and interaction theory to the hard constraints of neuroscience. It often happens that while the story or theoretical grounding is suitably phenomenological and enactive, the methodology and its interpretation are necessarily cognitivist in nature.

Yet I think this difficulty points to the more difficult task ahead if enactivism is to succeed. Science is fundamentally about methodology, and methodology reflects and is constrained by one’s ontological/explanatory framework. We measure reaction times and neural signal lags precisely because we buy into a cognitivist framework of cognition, which essentially argues for computations that take longer to process with increasing complexity, recruiting greater neural resources. The catch is, without these things it’s not at all clear how we are to construct, analyze, and interpret our data.  As Gallagher et al correctly point out, when you set out to explain behavior with these tools (reaction times and brain scanners), you can’t really claim to be doing some kind of radical enactivism:

 “Yet, in proposing an enactive interpretation of the MNS Schilbach et al. point beyond this orthodox framework to the possibility of rethinking, not just the neural correlates of social cognition, but the very notion of neural correlate, and how the brain itself works.”

We’re all in agreement there: I want nothing more than to understand exactly how it is our cerebral organ accomplishes the impressive feats of locomotion, perception, homeostasis, and so on right up to consciousness and social cognition. Yet I’m a scientist and no matter what I write in my introduction I must measure something- and what I measure largely defines my explanatory scope. So what do Gallagher et al offer me?

 “The enactive interpretation is not simply a reinterpretation of what happens extra-neurally, out in the intersubjective world of action where we anticipate and respond to social affordances. More than this, it suggests a different way of conceiving brain function, specifically in non-representational, integrative and dynamical terms (see e.g., Hutto and Myin, in press).”

Ok, so I can’t talk about representations. Presumably we’ll call them “processes” or something like that. Whatever we call them, neurons are still doing something, and that something is important in producing behavior. Integrative- I’m not sure what that means, but I presume it means that whatever neurons do, they do it across sensory and cognitive modalities. Finally we come to dynamical- here is where it gets really tricky. Dynamical systems theory (DST) is an incredibly complex mathematical framework dealing with topology, fluid dynamics, and chaos theory. Can DST guide neuroscientific discovery?

This is a tough question. My own limited exposure to DST prevents me from drawing hard conclusions here. For now let’s set it aside- we’ll come back to it in a moment. First I want to get a better idea of how Gallagher et al characterize contemporary neuroscience, the source of this tension in Schilbach et al:

Functional MRI technology goes hand in hand with orthodox computational models. Standard use of fMRI provides an excellent tool to answer precisely the kinds of questions that can be asked within this approach. Yet at the limits of this science, a variety of studies challenge accepted views about anatomical and functional segregation (e.g., Shackman et al. 2011; Shuler and Bear 2006), the adequacy of short-term task-based fMRI experiments to provide an adequate conception of brain function (Gonzalez-Castillo et al. 2012), and individual differences in BOLD signal activation in subjects performing the same cognitive task (Miller et al. 2012). Such studies point to embodied phenomena (e.g., pain, emotion, hedonic aspects) that are not appropriately characterized in representational terms but are dynamically integrated with their central elaboration.

Claim one is what I’ve just argued above, that fMRI and similar tools presuppose computational cognitivism. What follows I feel is a mischaracterization of cognitive neuroscience. First we have the typical bit about functional segregation being extremely limited. It surely is and I think most neuroscientists today would agree that segregation is far from the whole story of the brain. Which is precisely why the field is undeniably and swiftly moving towards connectivity and functional integration, rather than segregation. I’d wager that for a few years now the majority of published cogneuro papers focus on connectivity rather than blobology.

Next we have a sort of critique of the use of focal cognitive tasks. This almost seems like a critique of science itself; while certainly not without limits, neuroscientists rely on such tasks in order to make controlled assessments of phenomena. There is nothing a priori that says a controlled experiment is necessarily cognitivist, any more than a controlled physics experiment must necessarily be Newtonian rather than relativistic. And again, I’d characterize contemporary neuroscience as being positively in love with “task-free” resting state fMRI. So I’m not sure at what this criticism is aimed.

Finally there is this bit about individual differences in BOLD activation. This one I think is really a red herring; there is nothing in fMRI methodology that prevents scientists from assessing individual differences in neural function and architecture. The group I’m working with in London specializes in exactly this kind of analysis, which is essentially just creating regression models with neural and behavioral independent and dependent variables. There certainly is a lot of variability in brains, and neuroscience is working hard and making strides towards understanding those phenomena.
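In practice that kind of analysis can be as simple as a between-subject regression of a behavioral score on a neural measure (a toy sketch- all variable names are hypothetical):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_subjects = 40

# Hypothetical per-subject measures, e.g. a connectivity strength and a task score.
connectivity = rng.standard_normal(n_subjects)
behaviour = 0.5 * connectivity + rng.standard_normal(n_subjects)

# Across-subject regression: does the neural measure predict individual behaviour?
slope, intercept, r, p, stderr = stats.linregress(connectivity, behaviour)
print(f"slope = {slope:.2f}, r = {r:.2f}, p = {p:.3f}")
```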

 “Consider also recent challenges to the idea that so-called “mentalizing” areas (“cortical midline structures”) are dedicated to any one function. Are such areas activated for mindreading (Frith and Frith 2008; Vogeley et al. 2001), or folk psychological narrative (Perner et al. 2006; Saxe & Kanwisher 2003); a default mode (e.g., Raichle et al. 2001), or other functions such as autobiographical memory, navigation, and future planning (see Buckner and Carroll 2006; 2007; Spreng, Mar and Kim 2008); or self-related tasks (Northoff & Bermpohl 2004); or, more general reflective problem solving (Legrand and Ruby 2010); or are they trained up for joint attention in social interaction, as Schilbach et al. suggest; or all of the above and others yet to be discovered.”

I guess this paragraph is supposed to get us thinking that these seem really different, so clearly the localizationist account of the MPFC fails. But as I’ve just said, this is for one a bit of a red herring- most neuroscientists no longer believe exclusively in a localizationist account. In fact more and more I hear top neuroscientists disparaging overly blobological accounts and referring to prefrontal cortex as a whole. Functional integration is here to stay. Further, I’m not sure I buy their argument that these functions are so disparate- it seems clear to me that they all share a social, self-related core probably related to the default mode network.

Finally, Gallagher and company set out to define what we should be explaining- behavior as “a dynamic relation between organisms, which include brains, but also their own structural features that enable specific perception-action loops involving social and physical environments, which in turn effect statistical regularities that shape the structure of the nervous system.” So we do want to explain brains, but we want to understand that their setting configures both neural structure and function. Fair enough, I think you would be hard pressed to find a neuroscientist who doesn’t agree that factors like environment and physiology shape the brain. [edit: thanks to Bryan Patton for pointing out in the comments that Gallagher’s description of behavior here is strikingly similar to accounts given by Friston’s Free Energy Principle predictive coding account of biological organisms]

Gallagher asks then, “what do brains do in the complex and dynamic mix of interactions that involve full-out moving bodies, with eyes and faces and hands and voices; bodies that are gendered and raced, and dressed to attract, or to work or play…?” I am glad to see that my former mentor and I agree at least on the question at stake, which seems to be, what exactly is it brains do? And we’re lucky in that we’re given an answer by Gallagher et al:

“The answer is that brains are part of a system, along with eyes and face and hands and voice, and so on, that enactively anticipates and responds to its environment.”

 Me reading this bit: “yep, ok, brains, eyeballs, face, hands, all the good bits. Wait- what?” The answer is “… a system that … anticipates and responds to its environment.” Did Karl Friston just enter the room? Because it seems to me like Gallagher et al are advocating a predictive coding account of the brain [note: see clarifying comment by Gallagher, and my response below]! If brains anticipate their environment then that means they are constructing a forward model of their inputs. A forward model is a generative statistical model of how sensory inputs are caused; inverting it, Bayes-style, yields posterior probabilities of a stimulus given prior predictions about its nature. We could argue all day about what to call that model, but clearly what we’ve got here is a brain using strong internal models to make predictions about the world. Now what is “enactive” about these forward models remains an extremely ambiguous notion.
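To be concrete about what “anticipates” buys you, the core of any such account is just prior × likelihood → posterior. A minimal discrete sketch (illustrative only, not a model of any particular circuit- the stimulus variable and numbers are made up):

```python
import numpy as np

# Hypothetical stimulus variable on a coarse grid (say, motion strength from left to right).
states = np.linspace(-1, 1, 101)

# Prior prediction: the agent expects motion near zero.
prior = np.exp(-0.5 * (states / 0.5) ** 2)
prior /= prior.sum()

# A noisy observation and its likelihood under each candidate state.
observation, noise_sd = 0.4, 0.3
likelihood = np.exp(-0.5 * ((observation - states) / noise_sd) ** 2)

# Posterior: the prior belief updated by the evidence.
posterior = prior * likelihood
posterior /= posterior.sum()
print(f"prior mean = {states @ prior:.2f}, posterior mean = {states @ posterior:.2f}")
```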

To this extent, Gallagher includes “How an agent responds will depend to some degree on the overall dynamical state of the brain and the various, specific and relevant neuronal processes that have been attuned by evolutionary pressures, but also by personal experiences” as a description of how a prediction can be enactive. But none of this is precluded by the predictive coding account of the brain. The overall dynamical state (intrinsic connectivity?) of the brain amounts to noise that must be controlled through increasing neural gain and precision. That is, a Bayesian model presupposes that the brain is undergoing exactly these kinds of fluctuations and takes steps to produce optimal behavior in the face of such noise.

Likewise the Bayesian model is fully hierarchical- at all levels of the system, local neural function is constrained and configured by predictions and error signals from the levels above and below it. In this sense, global dynamical phenomena like neuromodulation structure prediction in ways that constrain local dynamics. These relationships can be fully non-linear and dynamical in nature (see Friston 2009 for review). As for the other bits- evolution and individual differences- Karl would surely say that the former leads to variation in first priors and the latter is the product of agents optimizing their behavior in a variable world.
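As a cartoon of the update rule being invoked here (one level only, purely illustrative- this is a sketch, not Friston’s actual scheme): the current prediction is nudged by the prediction error, scaled by the presumed precision of the input, which is exactly where “gain” enters the story.

```python
import numpy as np

rng = np.random.default_rng(4)

def track(inputs, precision):
    """Update a running prediction by precision-weighted prediction errors."""
    prediction, trace = 0.0, []
    for x in inputs:
        error = x - prediction              # prediction error
        prediction += precision * error     # precision acts as the gain on the update
        trace.append(prediction)
    return np.array(trace)

signal = 1.0 + 0.5 * rng.standard_normal(50)     # noisy input hovering around 1.0
print(track(signal, precision=0.8)[-3:])         # high gain: tracks fast, chases the noise
print(track(signal, precision=0.1)[-3:])         # low gain: smoother, slower to lock on
```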

So there you have it- enactivist cognitive neuroscience is essentially Bayesian neuroscience. If I want to fulfill Gallagher et al’s prescriptions, I need merely use resting state, connectivity, and predictive coding analysis schemes. Yet somehow I think this isn’t quite what they meant- and therein, for me, lies the true tension in ‘enactive’ cognitive neuroscience. But maybe it is what they meant- Andy Clark recently went Bayesian, claiming that extended cognition and predictive coding are totally compatible. Maybe it’s time to put away the knives and stop arguing about representations. Yet I think an important tension remains: can we explain all the things Gallagher et al list as important using prior and posterior probabilities? I’m not totally sure, but I do know one thing- these concepts make it a hell of a lot easier to actually analyze and interpret my data.

fake edit:

I said I’d discuss DST, but ran out of space and time. My problem with DST boils down to this: it’s descriptive, not predictive. As a scientist it is not clear to me how one actually applies DST to a given experiment. I don’t see any kind of functional ontology emerging by which to apply the myriad DST measures in a principled way. Mental chronometry may be hokey and old fashioned, but it’s easy to understand and can be applied to data and interpreted readily. This is a huge limitation for a field as complex as neuroscience, and one as rife with bad data. A leading dynamicist once told me that in his entire career “not one prediction” he’d made about a DST measure or experiment had come true, and that to apply DST one just needed to “collect tons of data and then apply every measure possible until one seemed interesting”. To me this is a data-fishing nightmare and does not represent a reliable guide to empirical discovery.