If you follow my social media activity, I am sure by now you know me as a compulsive share-addict. Over the past four years I have gradually increased the amount of both incoming and outgoing information I attempt to integrate on a daily basis. I start every day with a now-routine ritual of scanning new publications from 60+ journals and blogs using my firehose RSS feed, as well as integrating new links from various science subreddits, my curated cogneuro Twitter list, my friends and colleagues on Facebook, and email lists. I then curate the best, the most relevant to my interests, or in some cases the most outrageous of these links and share them back to Twitter, Facebook, Reddit, and colleagues.
Of course in doing so, a frequent response from (particularly more senior) colleagues is: why?! Why do I choose to spend the time both to take in all that information and to share it back to the world? The answer is quite simple: in sharing this stuff I get critical feedback from an ever-growing network of peers and collaborators. I can’t even count the number of times someone has pointed out something (for better or worse) that I would have otherwise missed in an article or idea. That’s right, I share it so I can see what you think of it! In this way I have been able not only to stay up to date with the latest research and concepts, but to receive constant, invaluable feedback from all of you lovely brains :). In some sense I literally distribute my cognition throughout my network – thanks for the extra neurons!
From the beginning, I have been able not only to assess the impact of this stuff, but also to gain deeper and more varied insights into its meaning. When I began my PhD I had the moderate statistical training of a BSc in psychology, with little direct knowledge of neuroimaging methods or theory. Frankly it was bewildering. Just figuring out which methods to pay attention to, or what problems to look out for, was a headache-inducing nightmare. But I had to start somewhere, and so I started by sharing, and sharing often. As a result, almost every day I get amazing feedback pointing out critical insights or flaws in the things I share that I would have otherwise missed. In this way the entire world has become my interactive classroom! It is difficult to overstate the degree to which this interaction has enriched my abilities as a scientist and thinker.
It is only natural, however, for more senior investigators to worry about how much time one might spend on all this. I admit in the early days of my PhD I may have spent a bit too long lingering amongst the RSS trees and twitter swarms. But then again, it is difficult to put a price on the knowledge and know-how I garnered in this process (not to mention the invaluable social capital generated in building such a network!). I am a firm believer in “power procrastination”, which is just the practice of regularly switching from more difficult but higher-priority tasks to more interesting but lower-priority ones. I believe that by spending my downtime taking in and sharing information, I’m letting my ‘default mode’ take a much needed rest, while still feeding it with inputs that will actually make the hard tasks easier.
In all, on a good day I’d say I spend about 20 minutes each morning taking in inputs and another 20 minutes throughout the day sharing them. Of course some days (looking at you, Fridays) I don’t adhere to that, and there are times when I have to ‘just say no’ and wait until the evening to get into that workflow. Pomodoro-style productivity apps have helped make sure I respect the balance when particularly difficult tasks arise. All in all, however, the time I spend sharing is paid back tenfold in new knowledge and deeper understanding.
Really I should be thanking all of you, the invaluable peers, friends, colleagues, followers, and readers who give me the feedback that is so totally essential to my cognitive evolution. So long as you keep reading- I’ll keep sharing! Thanks!!
Notes: I haven’t even touched on the value of blogging and post-publication peer review, which of course adds to the benefits mentioned here and has also vastly improved my writing and comprehension skills! But that’s a topic for another post!
(Don’t worry, the skim-share cycle is no replacement for deep individual learning, which I also spend plenty of time doing!)
“you are a von economo neuron!” – Francesca :)
Fun fact – I read the excellent scifi novel Accelerando just prior to beginning my PhD. In the novel the main character is an info-addict who integrates so much information that he gains a “5 second” prescience on events as they unfold. He then shares these insights for free with anyone who wants them, generating billion-dollar companies (in which he owns no part) and gradually manipulating global events to bring about a technological singularity. I guess you could say I found this to be a pretty neat character :) In a serious vein though, I am a firm believer in free and open science, self-publication, and sharing-based economies. Information deserves to be free!
Thanks to the wonders of social media, while I was out grocery shopping I received several interesting and useful responses to my previous post on the relationship between multivariate pattern analysis and simulation theory. Rather than try to fit my responses into 140 characters, I figured I’d take a bit more space here to hash them out. I think the idea is really enhanced by these responses, which point to several findings and features of which I was not aware. The short answer seems to be: no, MVPA does not invalidate simulation theory (ST), and may even provide evidence for it in the realm of motor intentions, but we might be able to point towards a better standard of evidence for more exploratory applications of ST (e.g. empathy-for-pain). An important point to come out of these responses, as one might expect, is that the interpretation of these methodologies is not always straightforward.
I’ll start with Antonia Hamilton’s question, as it points to a bit of literature that speaks directly to the issue:
Antonia is referring to this paper by Oosterhof and colleagues, where they directly compare passive viewing and active performance of the same paradigm using decoding techniques. I don’t read nearly as much social cognition literature as I used to, and wasn’t previously aware of this paper. It’s really a fascinating project and I suggest anyone interested in this issue read it at once (it’s open access, yay!). In the introduction the authors point out that spatial overlap alone cannot demonstrate equivalent mechanisms for viewing and performing the same action:
Numerous functional neuroimaging studies have identified brain regions that are active during both the observation and the execution of actions (e.g., Etzel et al. 2008; Iacoboni et al. 1999). Although these studies show spatial overlap of frontal and parietal activations elicited by action observation and execution, they do not demonstrate representational overlap between visual and motor action representations. That is, spatially overlapping activations could reflect different neural populations in the same broad brain regions (Gazzola and Keysers 2009; Morrison and Downing 2007; Peelen and Downing 2007b). Spatial overlap of activations per se cannot establish whether the patterns of neural response are similar for a given action (whether it is seen or performed) but different for different actions, an essential property of the “mirror system” hypothesis.
They then go on to explain that while MVPA could conceivably demonstrate a simulation-like mechanism (i.e. a common neural representation for viewing/doing), several previous papers attempting to show just that failed to do so. The authors suggest that this may be due to a variety of methodological limitations, which they set out to correct in their Journal of Neurophysiology publication. Oosterhof et al. show that clusters of voxels located primarily in the intraparietal and superior temporal sulci encode cross-modal information, that is, encode similar information both when viewing and when doing:
Essentially, Oosterhof et al. trained their classifier on one modality (see or do), tested the classifier on the opposite modality in another session, and then repeated this procedure for all possible combinations of session and modality (while appropriately correcting for multiple comparisons). The map above represents the combined classification accuracy from both train-test combinations; interestingly, in the supplementary info they show that the maps differ slightly depending on what was trained:
Oosterhof and colleagues also investigate the specificity of information for particular gestures in a second experiment, but for our purposes let’s focus on just the first. My first thought is that this does actually provide some evidence for a simulation theory of understanding motor intentions. Clearly there is enough information in each modality to accurately decode the opposite modality: there are populations of neurons encoding similar information for both action execution and perception. Realistically, I think this has to be the minimal burden of proof needed to consider an imaging finding to be evidence for simulation theory. So the results of Oosterhof et al. do provide supporting evidence for simulation theory in the domain of motor intentions.
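The cross-modal train/test logic at the heart of this analysis can be sketched with simulated data. This is purely a toy illustration (numpy only, a simple nearest-centroid decoder, invented signal parameters), not the pipeline Oosterhof et al. actually used: two actions share an underlying voxel pattern across "seeing" and "doing", each modality adds its own offset and noise, and a decoder trained on one modality is tested on the other.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_trials = 50, 40

# One shared underlying pattern per action, common to seeing and doing;
# this sharing is exactly what cross-modal decoding tests for.
action_patterns = {a: rng.normal(size=n_voxels) for a in (0, 1)}

def simulate_modality():
    """Noisy trial patterns for both actions in one modality; the modality
    adds its own offset, so only the action-specific pattern is shared."""
    offset = rng.normal(scale=0.5, size=n_voxels)
    X = np.vstack([action_patterns[a] + offset +
                   rng.normal(scale=0.8, size=(n_trials, n_voxels))
                   for a in (0, 1)])
    y = np.repeat([0, 1], n_trials)
    return X, y

X_see, y_see = simulate_modality()
X_do, y_do = simulate_modality()

def nearest_centroid_acc(X_train, y_train, X_test, y_test):
    """Train a nearest-centroid decoder on one modality, test on the other."""
    centroids = np.stack([X_train[y_train == a].mean(axis=0) for a in (0, 1)])
    dists = np.linalg.norm(X_test[:, None, :] - centroids[None], axis=2)
    return (dists.argmin(axis=1) == y_test).mean()

acc_see_to_do = nearest_centroid_acc(X_see, y_see, X_do, y_do)
acc_do_to_see = nearest_centroid_acc(X_do, y_do, X_see, y_see)
print(acc_see_to_do, acc_do_to_see)  # both well above the 0.5 chance level
```

If the two modalities had no shared pattern, both cross-modal accuracies would sit at chance even when within-modality decoding succeeded, which is precisely the distinction spatial overlap alone cannot make.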
Nonetheless, the results also strengthen the argument that more exploratory extensions of ST (like empathy-for-pain) must be held to a similar burden of proof before generalization in these domains is supported. Simply showing spatial overlap is not evidence of simulation, as Oosterhof et al. themselves argue. I think it is interesting to note the slight spatial divergence between the two train-test maps (see on do, do on see). While we can obviously identify voxels encoding cross-modal information, it is interesting that those voxels do not subsume the entirety of whatever neural computation relates these two modalities; each has something unique to predict in the other. I don’t think that observation invalidates simulation theory, but it might suggest an interesting mechanism not specified in the ‘vanilla’ flavor of ST. To be extra boring, it would be really nice to see an independent replication of this finding, since, as Oosterhof et al. themselves point out, the evidence for cross-modal information is inconsistent across studies. Even though the classifier performs well above chance in this study, it is also worth noting that the majority of surviving voxels show somewhere around 40-50% classification accuracy, not exactly gangbusters. It would be interesting to see whether they could identify voxels within these regions that selectively encode only viewing or only performing; this might be evidence for a hybrid-theory account of motor intentions.
Leonhard’s question is an interesting one that I don’t have a ready response for. As I understand it, the idea is that demonstrating no difference in patterns between a self- and other-related condition (e.g. performing an action vs watching someone else do it) might actually be an argument for simulation, since this could be caused by that region using isomorphic computations for both conditions. This is an interesting point; I’m not sure what the status of null findings is in the decoding literature, but it merits further thought.
The next two came from James Kilner and Tal Yarkoni. I’ve put them together as I think they fall under a more methodological class of questions/comments, and I don’t feel quite experienced enough to answer them, but I’d love to hear from someone with more experience in multivariate/multivoxel techniques:
James Kilner asks about the performance of MVPA in the case that the pattern might be spatially overlapping but not identical for two conditions. This is an interesting question and I’m not sure I know the correct answer; my intuition is that you could accurately discriminate both conditions using the same voxels, and that this would be strong evidence against a simple simulation theory account (spatial overlap but representational heterogeneity).
Here is a more precise answer to James’ question from Sam Schwarzkopf, posted in the comments of the original post:
2. The multivariate aspect obviously adds sensitivity by looking at pattern information, or generally any information of more than one variable (e.g. voxels in a region). As such it is more sensitive to the information content in a region than just looking at the average response from that region. Such an approach can reveal that region A contains some diagnostic information about an experimental variable while region B does not, even though they both show the same mean activation. This is certainly useful knowledge that can help us advance our understanding of the brain – but in the end it is still only one small piece in the puzzle. And as both Tal and James pointed out (in their own ways) and as you discussed as well, you can’t really tell what the diagnostic information actually represents.
Conversely, you can’t be sure that just because MVPA does not pick up diagnostic information from a region that it therefore doesn’t contain any information about the variable of interest. MVPA can only work as long as there is a pattern of information within the features you used.
This last point is most relevant to James’ comment. Say you are using voxels as features to decode some experimental variable. If all the neurons with different tuning characteristics in an area are completely intermingled (like orientation-preference in mouse visual cortex for instance) you should not really see any decoding – even if the neurons in that area are demonstrably selective to the experimental variable.
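Sam's intermingling scenario can be simulated directly. In this toy sketch (all numbers invented for illustration), each "voxel" averages 100 noisy "neurons": when neurons sharing a preference cluster within voxels, a simple nearest-centroid decoder succeeds; when the two populations are evenly intermingled in every voxel, their responses average out and decoding falls to chance even though every individual neuron is selective.

```python
import numpy as np

rng = np.random.default_rng(1)
pool, n_voxels, n_trials = 100, 20, 40

def voxel_data(assignment):
    """Simulate trials x voxels for two conditions; each voxel is the
    average of `pool` noisy neurons with the preferences in `assignment`."""
    X, y = [], []
    for cond in (0, 1):
        trials = np.empty((n_trials, n_voxels))
        for v, prefs in enumerate(assignment):
            drive = (prefs == cond).astype(float)        # 1.0 if preferred
            noise = rng.normal(scale=0.5, size=(n_trials, pool))
            trials[:, v] = (drive + noise).mean(axis=1)  # voxel = pooled mean
        X.append(trials)
        y += [cond] * n_trials
    return np.vstack(X), np.array(y)

# Clustered: every neuron in a voxel shares one preference.
clustered = [np.full(pool, v // (n_voxels // 2)) for v in range(n_voxels)]
# Intermingled: every voxel pools an exactly even mix of both preferences.
mixed = [np.array([0, 1] * (pool // 2)) for _ in range(n_voxels)]

def decode_acc(assignment):
    """Nearest-centroid decoding: fit centroids on one simulated run,
    test on an independent run."""
    X_train, y_train = voxel_data(assignment)
    X_test, y_test = voxel_data(assignment)
    cents = np.stack([X_train[y_train == c].mean(axis=0) for c in (0, 1)])
    pred = np.linalg.norm(X_test[:, None] - cents[None], axis=2).argmin(axis=1)
    return (pred == y_test).mean()

acc_clustered = decode_acc(clustered)
acc_mixed = decode_acc(mixed)
print(acc_clustered, acc_mixed)  # near-perfect vs. near-chance
```

The intermingled case is the fMRI analogue of mouse orientation columns: the selectivity exists at the neuronal scale but is invisible at the voxel scale, so null decoding cannot be read as absence of information.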
In general it is clear that the interpretation of decoded patterns is not straightforward: it isn’t clear precisely what information they reflect, and it seems that if a region contained a totally heterogeneous population of neurons you wouldn’t pick up any decoding at all. With respect to ST, I don’t know if this completely invalidates our ability to test predictions; I don’t think one would expect such radical heterogeneity in a region like STS, but rather a few sub-populations responding selectively to self and other, which MVPA might be able to reveal. It’s an important point to consider though.
Tal’s point is an important one regarding the different sources of information that GLM and MVPA techniques pick up. The paper he refers to, by Jimura and Poldrack, set out to investigate exactly this by comparing the spatial conjunction and divergent sensitivity of each method. Importantly, they subtracted the mean of each beta-coefficient from the multivariate analysis to ensure that the analysis contained only information not present in the GLM:
As you can see in the above, Jimura and Poldrack show that MVPA picks up a large number of voxels not found in the GLM analysis. Their interpretation is that the GLM is designed to pick up regions responding globally, or in most cases, to stimulation, whereas MVPA likely picks up globally distributed responses that show variance in their response. This is a bit like the difference between functional integration and localization; both are complementary to the understanding of some cognitive function. I take Tal’s point to be that MVPA and GLM are sensitive to different sources of information, and that this blurs the ability of the technique to evaluate simulation theory: you might observe differences between the two that resemble evidence against ST (different information in different areas) when in reality you would be modelling altogether different aspects of cognition. edit: after more discussion with Tal on Twitter, it’s clear that he meant to point out the ambiguity inherent in interpreting the predictive power of MVPA; by nature these analyses will pick up a lot of confounding, non-causal noise (arousal, reaction time, respiration, etc.) that would be excluded in a GLM analysis. So these are not necessarily, or even likely to be, “direct read-outs” of representations, particularly to the extent that such confounds correlate with the task. See this helpful post by Neuroskeptic for an overview of one recent paper examining this issue, and see here for a study investigating the complex neurovascular origins of MVPA for fMRI.
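The mean-removal step can be illustrated with toy beta patterns (again a hypothetical sketch with invented numbers, not Jimura and Poldrack's actual analysis). Subtracting each trial's mean across voxels strips out exactly the uniform amplitude signal a GLM contrast would detect, so anything a decoder finds afterwards must live in the spatial pattern:

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials, n_voxels = 60, 30

# Condition B equals condition A plus a uniform amplitude boost: exactly
# the kind of signal a GLM contrast detects, with no pattern difference.
A = rng.normal(size=(n_trials, n_voxels))
B = rng.normal(size=(n_trials, n_voxels)) + 1.0

def demean(X):
    """Subtract each trial's mean across voxels, so that only pattern
    information (not overall amplitude) remains."""
    return X - X.mean(axis=1, keepdims=True)

def centroid_acc(Xa, Xb):
    """Leave-half-out nearest-centroid decoding of condition A vs B."""
    half = n_trials // 2
    ca, cb = Xa[:half].mean(axis=0), Xb[:half].mean(axis=0)
    test = np.vstack([Xa[half:], Xb[half:]])
    truth = np.array([0] * half + [1] * half)
    d = np.stack([np.linalg.norm(test - c, axis=1) for c in (ca, cb)])
    return (d.argmin(axis=0) == truth).mean()

acc_raw = centroid_acc(A, B)                       # amplitude alone decodes
acc_demeaned = centroid_acc(demean(A), demean(B))  # pattern only: chance here
print(acc_raw, acc_demeaned)
```

Here decoding collapses to chance after demeaning because the simulated conditions differ only in mean amplitude; in real data, above-chance decoding surviving this step is what licenses the claim that MVPA is finding information the GLM misses.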
Thanks sincerely for these responses; it’s been really interesting and instructive for me to go through these papers and think about their implications. I’m still new to these techniques and it is exciting to gain a deeper appreciation of the subtleties involved in their interpretation. On that note, I must direct you to Sam Schwarzkopf’s excellent reply to my original post. Sam points out some common misunderstandings (of which I am perhaps guilty of a few) regarding the interpretation of MVPA/decoding versus GLM techniques, arguing essentially that they pick up much of the same information and can both be considered ‘decoding’ in some sense, further muddying their ability to resolve debates like that surrounding simulation theory.
I was asked to write a brief summary of my PhD research for our annual CFIN report. I haven’t blogged in a while and it turned out to be a decent little blurb, so I figured I might as well share it here. Enjoy!
In the past decade, reports concerning the natural plasticity of the human brain have taken a spotlight in the media and popular imagination. In the pursuit of neural plasticity, practitioners of nearly every imaginable specialization, from taxi drivers to Buddhist monks, have had their day in the scanner. These studies reveal marked functional and structural neural differences between various populations of interest, and in doing so have driven a wave of interest in harnessing the brain’s plasticity for rehabilitation, education, and even increasing intelligence (Green and Bavelier, 2008). Under this new “mental training” research paradigm, investigators are now examining what happens to brain and behavior when novices are randomized to a training condition, using longitudinal brain imaging.
These studies highlight a few promising domains for harnessing neural plasticity, particularly in the realms of visual attention, cognitive control, and emotional training. By randomizing novices to a brief ‘dose’ of action video game or meditation training, researchers can go beyond mere cross-sectional comparison and make inferences regarding the causal impact of training on observed neural outcomes. Initial results are promising, suggesting that domains of great clinical relevance, such as emotional and attentional processing, are amenable to training (Lutz et al., 2008a; Lutz et al., 2008b; Bavelier et al., 2010). However, these findings are currently obscured by a host of methodological limitations.
These span from behavioral confounds (e.g. motivation and demand characteristics) to inadequate longitudinal processing of brain images, which presents particular challenges not found in within-subject or cross-sectional designs (Davidson, 2010; Jensen et al., 2011). The former can be addressed directly by careful construction of “active control” groups, in which both comparison and control groups receive putatively effective treatments, carefully designed to isolate the hypothesized “active ingredients” driving behavioral and neuroplasticity outcomes. In this way researchers can make inferences about mechanistic specificity while excluding non-specific confounds such as social support, demand, and participant motivation.
We set out to investigate one particularly popular intervention, mindfulness meditation, while controlling for these factors. Mindfulness meditation has enjoyed a great deal of research interest in recent years. This popularity is largely due to promising findings indicating good efficacy of meditation training (MT) for emotion processing and cognitive control (Sedlmeier et al., 2012). Clinical studies indicate that MT may be particularly effective for disorders that are typically non-responsive to cognitive-behavioral therapy, such as severe depression and anxiety (Grossman et al., 2004; Hofmann et al., 2010). Understanding the neural mechanism underlying such benefits remains difficult however, as most existing investigations are cross-sectional in nature or depend upon inadequate “wait-list” passive control groups.
We addressed these difficulties in an investigation of functional and structural neural plasticity before and after a 6-week active-controlled mindfulness intervention. To control demand, social support, teacher enthusiasm, and participant motivation we constructed a “shared reading and listening” active control group for comparison to MT. By eliciting daily “experience samples” regarding participants’ motivation to practice and minutes practiced, we ensured that groups did not differ on common motivational confounds.
We found that while both groups showed equivalent improvement on behavioral response-inhibition and meta-cognitive measures, only the MT group significantly reduced affective-Stroop conflict reaction times (Allen et al., 2012). Further, we found that MT participants showed significantly greater increases than controls in recruitment of dorsolateral prefrontal cortex, a region implicated in cognitive control and working memory. Interestingly, we did not find group differences in emotion-related reaction times or BOLD activity; instead we found that fronto-insular and medial-prefrontal BOLD responses in the MT group were significantly more correlated with practice than in controls. These results indicate that while brief MT is effective for training attention-related neural mechanisms, only participants with the greatest amount of practice showed altered neural responses to negative affective stimuli. This result is important because it underlines the differential response of various target skills to training and suggests specific applications of MT depending on time and motivation constraints.
In a second study, we utilized a longitudinally optimized pipeline to assess structural neuroplasticity in the same cohort described above (Ashburner and Ridgway, 2012). A crucial issue in longitudinal voxel-based morphometry and similar methods is the prevalence of “asymmetric preprocessing”, for example where normalization parameters are calculated from baseline images and applied to follow-up images, inflating the risk of false-positive results. We therefore applied a fully symmetric deformation-based morphometric pipeline to assess training-related expansions and contractions of gray matter volume. While we found significant increases within the MT group, these differences did not survive the group-by-time comparison and thus may represent false positives; it is likely that such differences would not have been ruled out by an asymmetric pipeline or a non-actively-controlled design. These results suggest that brief MT may act only on functional neuroplasticity, and that greater amounts of training are required for lasting anatomical alterations.
These projects are a promising advance in our understanding of neural plasticity and mental training, and highlight the need for careful methodology and control when investigating such phenomena. The investigation of neuroplasticity mechanisms may one day revolutionize our understanding of human learning and neurodevelopment, and we look forward to seeing a new wave of carefully controlled investigations in this area.
You can read more about the study in this blog post, where I explain it in detail.
Allen M, Dietz M, Blair KS, van Beek M, Rees G, Vestergaard-Poulsen P, Lutz A, Roepstorff A (2012) Cognitive-Affective Neural Plasticity following Active-Controlled Mindfulness Intervention. The Journal of Neuroscience 32:15601-15610.
Ashburner J, Ridgway GR (2012) Symmetric diffeomorphic modeling of longitudinal structural MRI. Frontiers in Neuroscience 6.
Bavelier D, Levi DM, Li RW, Dan Y, Hensch TK (2010) Removing brakes on adult brain plasticity: from molecular to behavioral interventions. The Journal of Neuroscience 30:14964-14971.
Davidson RJ (2010) Empirical explorations of mindfulness: conceptual and methodological conundrums. Emotion 10:8-11.
Green C, Bavelier D (2008) Exercising your brain: a review of human brain plasticity and training-induced learning. Psychology and Aging 23:692.
Grossman P, Niemann L, Schmidt S, Walach H (2004) Mindfulness-based stress reduction and health benefits: A meta-analysis. Journal of Psychosomatic Research 57:35-43.
Hofmann SG, Sawyer AT, Witt AA, Oh D (2010) The effect of mindfulness-based therapy on anxiety and depression: A meta-analytic review. Journal of consulting and clinical psychology 78:169.
Jensen CG, Vangkilde S, Frokjaer V, Hasselbalch SG (2011) Mindfulness training affects attention—or is it attentional effort?
Lutz A, Brefczynski-Lewis J, Johnstone T, Davidson RJ (2008a) Regulation of the neural circuitry of emotion by compassion meditation: effects of meditative expertise. PLoS One 3:e1897.
Lutz A, Slagter HA, Dunne JD, Davidson RJ (2008b) Attention regulation and monitoring in meditation. Trends Cogn Sci 12:163-169.
Sedlmeier P, Eberth J, Schwarz M, Zimmermann D, Haarig F, Jaeger S, Kunze S (2012) The psychological effects of meditation: A meta-analysis.
As soon as I saw this it clicked: we need papester. We need a simple browser plugin that can recognize, download, and re-upload any research document automatically (think Zotero) to BitTorrent (this was Aaron’s original idea, just crowdsourced). These would then be automatically turned into torrents with an associated magnet link. The plugin would interact with a lightweight torrent client, using a set fraction of your bandwidth (say 5%) to constantly seed back any files in your (Zotero) library folder. It would also automatically use part of that bandwidth to seed missing papers (first working through a queue of DOIs searched for by others, then any missing paper in reverse chronological order), so that over time all papers would be on BitTorrent. The links would be archived by Google; any search engine could then find them, and the plugin would show the PDF download link.
Once this system is in place, a pirate-bay/reddit mash-up could help sort the magnet links as a metadata-rich papester torrent tracker. Users could post comments and reviews, which would themselves be subject to karma. Over time a sorting algorithm could give greater weight to reviews from authors who consistently review unretracted papers, creating a kind of front page where “hot” would give you the latest research and “lasting” would give you timeless classics. Separating the sorting mechanism – which can essentially be any tracker – from the rating/metadata system ensures that neither can be easily brought down. If users wished, they could compile independent trackers for particular topics or highly rated papers, form review committees, and request new experiments to address flagged issues in existing articles. In this way we would ensure not only an everlasting and loss-protected research database, but irreversibly push academic publishing into an open-access and democratic review system. Students and people without access to scientific knowledge could easily find forgotten classics and the latest buzz with a simple sort. We need a “research-reddit” rating layer – why not solve Open Access and peer review in one go?
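The track-record-weighted sorting could start very simply: weight each review's vote by its author's history. Everything below (field names, the weighting rule, the numbers) is hypothetical, just to show the shape of such an algorithm:

```python
from dataclasses import dataclass

@dataclass
class Review:
    reviewer: str
    score: int        # e.g. -1, 0, +1 vote on a paper

# Hypothetical track records: fraction of each reviewer's past reviews
# that were on papers never retracted or flagged.
track_record = {"alice": 0.98, "bob": 0.60}

def weighted_score(reviews):
    """Sum review votes, weighting each by its author's track record
    (unknown reviewers get a neutral 0.5 weight)."""
    return sum(r.score * track_record.get(r.reviewer, 0.5) for r in reviews)

reviews = [Review("alice", +1), Review("bob", -1)]
print(weighted_score(reviews))  # ≈ 0.38: the reliable upvote outweighs the downvote
```

A real system would layer on the review-level karma and time decay needed for the "hot" versus "lasting" front pages described above.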
Is this feasible? There are about 50 million papers in existence. If we estimate an average of about 500 kilobytes per paper, that’s 25 million MB of data, or 25 terabytes. While that may sound like a lot, remember that most torrent trackers already list much more data than this, and that available bandwidth increases annually. If we can archive a ROM of every videogame ever created, why not papers? The entire collection of magnet links could take up as little as 1 GB of data, making it easy to periodically back up the archive, ensure the system is resilient to take-downs, and re-seed less known or sought-after papers. Just imagine it: all of our knowledge stored safely in a completely open collection, backed by the power of the swarm, organized by reviews, comments, and ratings, accessible to all. It would revolutionize the way we learn and share knowledge.
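The arithmetic holds up to a quick back-of-the-envelope check (the 50-million-paper count and 500 KB average are the estimates from the text, and the 20-byte figure is the size of a raw BitTorrent infohash):

```python
n_papers = 50_000_000
avg_bytes = 500 * 1024                     # ~500 KB per paper

total_tb = n_papers * avg_bytes / 1024**4
print(f"{total_tb:.0f} TB")                # ~23 TB, in line with the ~25 TB above

# A raw BitTorrent infohash is 20 bytes, so the bare index stays tiny:
index_gb = n_papers * 20 / 1024**3
print(f"{index_gb:.2f} GB of infohashes")  # under a gigabyte
```

Even with full magnet URIs and metadata instead of bare infohashes, the index would remain small enough to mirror casually, which is what makes the take-down resilience plausible.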
Of course there would be ruthless resistance to this sort of thing from publishers. It would be important to take steps to protect yourself, perhaps through Tor, and the small size of the files would facilitate better encryption. When universities inevitably move to block uploads, tags could be used to quickly upload acquired files later from a public wifi hotspot. There are other benefits as well: currently there are untold numbers of classic papers available online in reference only. What incentive is there for libraries to continue scanning these? A papester-backed uploader karma system could help bring thousands of these documents irreversibly into the fold. Even if publishers found some way to stifle the system, as with Napster the damage would be done. Just as we were pushed irrevocably towards new forms of music consumption – direct download, streaming, donate-to-listen – big publishers would be forced toward an open access model to recover costs. Finally, such a system might move us closer to a self-publishing arXiv model. If you couldn’t afford open access, you could self-publish your own PDF to the system. User reviews and ratings could serve as a first layer of feedback for improving the article. The idea or data – with your name behind it – would be out there fast and free.
Another cool feature would be a DOI search. When a user searches for a paper that isn’t available, papester would automatically add that paper to a request queue.
This is a thought experiment about an illegal solution and its possible consequences and benefits. Do with it what you will, but recognize the gap between the theoretical and the actual!
First, let me apologize for an overlong hiatus from blogging. I submitted my PhD thesis October 1st, and it turns out that writing two papers and a thesis in the space of about three months can seriously burn out the old muse. I’ve coaxed her back through gentle offerings of chocolate, caffeine, and a bit of videogame binging. As long as I promise not to bring her within a mile of a dissertation, I believe we’re good for at least a few posts per month.
With that taken care of, I am very happy to report the successful publication of my first fMRI paper, published last month in the Journal of Neuroscience. The paper was truly a labor of love, taking nearly three years and countless hours of head-scratching work to complete. In the end I am quite happy with the finished product, and I do believe my colleagues and I managed to produce a useful result for the field of mindfulness training and neuroplasticity.
note: this post ended up being quite long. if you are already familiar with mindfulness research, you may want to skip ahead!
First, depending on what brought you here, you may already be wondering why mindfulness is an interesting subject, particularly for a cognitive neuroscientist. In light of the large gaps in our understanding of the neurobiological foundations of neuroimaging, is it really the right time to apply these complex tools to meditation? Can we really learn anything about something as potentially ambiguous as “mindfulness”? These are certainly fair questions, and although we have a long way to go, I do believe that the study of meditation has a lot to contribute to our understanding of cognition and plasticity.
Generally speaking, when you want to investigate some cognitive phenomena, a firm understanding of your target is essential to successful neuroimaging. Areas with years of behavioral research and concrete theoretical models make for excellent imaging subjects, as in these cases a researcher can hope to fall back on a sort of ‘ground truth’ to guide them through the neural data, which are notoriously ambiguous and difficult to interpret. Of course well-travelled roads also have their disadvantages, sometimes providing a misleading sense of security, or at least being a bit dry. While mindfulness research still has a ways to go, our understanding of these practices is rapidly evolving.
At this point it helps to stop and ask: what is meditation (and by extension, mindfulness)? The first thing to clarify is that there is no such thing as “meditation”- rather, “meditation” is a term describing a family of highly varied practices, spanning both the spiritual and the secular. Meditation or “contemplative” practices have existed for more than a thousand years and are found in nearly every spiritual tradition. More recently, here in the West our unending fascination with the esoteric has led to a popular rise in Yoga, Tai Chi, and other physically oriented contemplative practices, all of which incorporate an element of meditation.
At the simplest level of description, [mindfulness] meditation is just a process of becoming aware, whether through actual sitting meditation, exercise, or daily rituals. Meditation (as a practice) was first popularized in the West during the rise of transcendental meditation (TM). As you can see in the figure below, interest in TM led to an early boom in research articles. This boom was not to last, as it was gradually realized that much of this initially promising research was actually the product of zealous insiders, conducted with poor controls and in some cases outright data fabrication. As TM became known as a cult, meditation research underwent a dark age in which publishing on the topic could seriously damage a research career. We can also see that around the 1990s this trend started to reverse, as a new generation of researchers began investigating “mindfulness” meditation.
It’s easy to see from the above why, when Jon Kabat-Zinn re-introduced meditation to the West, he relied heavily on the medical community to develop a totally secularized, intervention-oriented version of meditation strategically called “mindfulness-based stress reduction” (MBSR). The arrival of MBSR was closely related to the development of mindfulness-based cognitive therapy (MBCT), a revision of cognitive-behavioral therapy utilizing mindful practices and instruction for a variety of clinical applications. Mindfulness practice is typically described as involving at least two practices: focused attention (FA) and open monitoring (OM). FA can be described as simply noticing when attention wanders from a target (the breath, the body, or a flower, for example) and gently redirecting it back to that target. OM is typically (but not always) trained at a later stage, building on the attentional skills developed in FA practice to gradually develop a sense of “non-judgmental open awareness”. While a great deal of work remains to be done, initial cognitive-behavioral and clinical research on mindfulness training (MT) has shown that these practices can improve the allocation of attentional resources, reduce physiological stress, and improve emotional well-being. In the clinic, MT appears to improve symptoms in a variety of pathological syndromes, including anxiety and depression, at least as well as standard CBT or pharmacological treatments.
Has the quality of research on meditation improved since the dark days of TM? When answering this question it is important to note two things about the state of current mindfulness research. First, while it is true that many who research MT are also practitioners, the primary scholars are researchers who started in classical areas (emotion, clinical psychiatry, cognitive neuroscience) and gradually became involved in MT research. Further, most funding for MT research today comes not from shady religious institutions, but from well-established funding bodies such as the National Institutes of Health and the European Research Council. It is of course important to be aware of the impact prior beliefs can have on conducting impartial research, but with respect to today’s meditation and mindfulness researchers, I believe that most if not all of the work being done is honest, quality research.
However, it is true that much of the early MT research is flawed on several levels. Indeed, several meta-analyses have concluded that, generally speaking, studies of MT have often utilized poor design – in one major review only 8 of 22 studies met criteria for meta-analysis. The reason for this is quite simple: in the absence of pilot data, investigators had to begin somewhere. It typically doesn’t bode well to jump into unexplored territory with an expensive, large-sample, fully randomized design. There just isn’t enough to go on- how would you know which kind of process to even measure? Accordingly, the large majority of mindfulness research to date has utilized small-scale, often sub-optimal experimental designs, sacrificing experimental control in order to build a basic idea of the cognitive landscape. While this exploratory research provides a needed foundation for generating likely hypotheses, it is difficult to draw any strong conclusions so long as these methodological issues remain.
Indeed, most of what we know about mindfulness and neuroplasticity comes from studies of either advanced practitioners (compared to controls) or “wait-list” control studies in which controls receive no intervention. On the basis of the findings from these studies we had some idea how to target our investigation, but there remained a nagging feeling of uncertainty. Just how much of the literature would actually replicate? Does mindfulness alter attention through mere expectation and motivation biases (i.e. placebo-like confounds), or can MT actually drive functionally relevant attentional and emotional neuroplasticity, even when controlling for these confounds?
The name of the game is active-control
Research to date links mindfulness practices to alterations in health and physiology, cognitive control, emotional regulation, responsiveness to pain, and a large array of positive clinical outcomes. However, the explicit nature of mindfulness training makes for some particularly difficult methodological issues. Group cross-sectional studies, where advanced practitioners are compared to age-matched controls, cannot provide causal evidence. Indeed, it is always possible that having a big fancy brain makes you more likely to spend many years meditating, and not that meditating gives you a big fancy brain. So training studies are essential to verifying the claim that mindfulness actually leads to interesting kinds of plasticity. However, unlike with a new drug study or computerized intervention, you cannot simply provide a sugar pill to the control group. Double-blind design is impossible; by definition subjects will know they are receiving mindfulness. To actually assess the impact of MT on neural activity and behavior, we need to compare to groups doing relatively equivalent things in similar experimental contexts. We need an active control.
There is already a well-established link between measurement outcome and experimental demands. What is perhaps less appreciated is that cognitive measures, particularly reaction time, are easily biased by phenomena like the Hawthorne effect, where the amount of attention participants receive directly contributes to experimental outcome. Wait-lists simply cannot overcome these difficulties. We know, for example, that simply paying controls a moderate performance-based financial reward can erase attentional reaction-time differences. If you are repeatedly told you’re training attention, then come experiment time you are likely to expect this to be true and to try harder than someone who has received no such instruction. The same is true of emotional tasks; subjects told frequently that they are training compassion are likely to spend more time fixating on emotional stimuli, leading to inflated self-reports and responses.
I’m sure you can quickly see how important it is to control for these factors if we are to isolate and understand the mechanisms underlying mindfulness training. One key solution is active control: that is, providing both groups (MT and control) with a “treatment” that is at least nominally as efficacious as the thing you are interested in. Active control allows you to exclude numerous factors from your outcome, potentially including the role of social support, expectation, and experimental demands. This is exactly what we set out to do in our study, where we recruited 60 meditation-naïve subjects, scanned them on an fMRI task, randomized them to either six weeks of MT or active control, and then measured everything again. Further, to exclude confounds relating to social interaction, we came up with a rather unique control activity- reading Emma together.
Jane Austen as Active Control – theory of mind vs interoception
To overcome these confounds, we constructed a specialized control intervention. As it was crucial that both groups believed in their training, we needed an instructor who could match the high level of enthusiasm and experience found in our meditation instructors. We were lucky to have the help of local scholar Mette Stineberg, who suggested a customized “shared reading” group to fit our purposes. Reading groups are a fun, attention demanding exercise, with purported benefits for stress and well-being. While these claims have not been explicitly tested, what mattered most was that Mette clearly believed in their efficacy- making for a perfect control instructor. Mette holds a PhD in literature, and we knew that her 10 years of experience participating in and leading these groups would help us to exclude instructor variables from our results.
With her help, we constructed a special condition where participants completed group readings of Jane Austen’s Emma. A sensible question to ask at this point is: “why Emma?” An essential element of active control is variable isolation, or balancing your groups in such a way that, with the exception of your hypothesized “active ingredient”, the two interventions are extremely similar. As MT is thought to depend on a particular kind of non-judgmental, interoceptive attention, Chris and Uta Frith suggested during an early meeting that Emma might be a perfect contrast. For those of you who haven’t read the novel, the plot is brimming over with judgment-heavy, theory-of-mind-type exposition. Mette further helped to ensure a contrast with MT by emphasizing discussion sessions focused on character motives. In this way we were able to ensure that both groups met for the same amount of time each week, with equivalently talented and passionate instructors, and felt that they were working towards something worthwhile. Finally, we made sure to let every participant know at recruitment that they would receive one of two treatments intended to improve attention and well-being, and that any benefits would depend upon their commitment to the practice. To help them practice at home, we created 20-minute CDs for both groups, one with a guided meditation and the other with a chapter from Emma.
Unlike previous active-controlled studies, which typically rely on relaxation training, reading groups depend upon a high level of social interaction. Reading together allowed us not only to exclude treatment context and expectation from our results, but also the more difficult effects of social support (the “making new friends” variable). To measure this, we built a small website for participants to make daily reports of their motivation and minutes practiced that day. As you can see in the figure below, when we averaged these reports we found not only that the reading group practiced significantly more than those in MT, but that both groups expressed equivalent levels of motivation to practice. Anecdotally, reading-group members expressed a high level of satisfaction with their class, with a sub-group of about 8 even continuing their meetings after our study concluded. The meditation group, by comparison, did not appear to form any lasting social relationships and did not continue meeting after the study. We were very happy with these results, which suggest that it is very unlikely our findings could be explained by unbalanced motivation or expectation.
Impact of MT on attention and emotion
After establishing that the active control was successful, the first thing to look at was some of our outside-the-scanner behavioral results. As we were interested in the effect of meditation on both attention and meta-cognition, we used an “error-awareness task” (EAT) to examine improvement in these areas. The EAT (shown below) is a typical “go/no-go” task where subjects spend most of their time pressing a button. The difficult part comes whenever a “stop trial” occurs and subjects must quickly halt their response. In the case where a subject fails to stop, they then have the opportunity to “fix” the error by pressing a second button on the trial following the error. If you’ve ever taken this kind of task, you know that it can be frustratingly difficult to stop your finger in time – the response becomes quite habitual. Using the EAT, we examined the impact of MT both on controlling responses (a variable called “stop accuracy”) and on meta-cognitive self-monitoring (percent “error-awareness”).
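To make the two EAT outcome measures concrete, here is a toy scoring sketch. The trial records and field names are hypothetical illustrations, not the actual task code:

```python
# Hypothetical stop-trial records from one EAT session:
# 'stopped' marks whether the response was successfully withheld;
# 'aware' marks, for failed stops, whether the subject pressed the
# error-signalling button on the following trial.
trials = [
    {"stopped": True,  "aware": None},
    {"stopped": False, "aware": True},
    {"stopped": False, "aware": False},
    {"stopped": True,  "aware": None},
    {"stopped": False, "aware": True},
]

# Stop accuracy: proportion of stop trials successfully withheld
stop_accuracy = sum(t["stopped"] for t in trials) / len(trials)

# Error awareness: proportion of failed stops the subject signalled
errors = [t for t in trials if not t["stopped"]]
error_awareness = sum(t["aware"] for t in errors) / len(errors)

print(f"stop accuracy: {stop_accuracy:.0%}, error awareness: {error_awareness:.0%}")
```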
We started by looking for significant group-by-time interactions on stop accuracy and error-awareness, which would indicate that change on a measure was statistically greater in the treatment (MT) group than in the control group. In a repeated-measures design, this type of interaction is your first indication that the treatment may have had a greater effect than the control. When we looked at the data, it was immediately clear that while both groups improved over time (a ‘main effect’ of time), there was no interaction to be found:
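For readers less familiar with this kind of model, the group-by-time interaction can be sketched as a simple GLM on simulated scores (all numbers invented; a full repeated-measures analysis would also model the subject factor):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30  # simulated participants per group

# Both groups improve from pre to post (a test-retest effect of +3),
# and we build in a hypothetical extra +2 improvement for MT only.
group = np.repeat([0, 1], n)                          # 0 = reading, 1 = MT
pre = rng.normal(50, 5, 2 * n)
post = pre + 3 + 2.0 * group + rng.normal(0, 1, 2 * n)

# Long format: one row per observation
y = np.concatenate([pre, post])
time = np.repeat([0, 1], 2 * n)
g = np.tile(group, 2)

# Design matrix: intercept, group, time, and the group x time interaction
X = np.column_stack([np.ones_like(y), g, time, g * time])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

print(f"time effect: {beta[2]:.2f}, group x time interaction: {beta[3]:.2f}")
```

A significant interaction coefficient (beta[3]) is what we failed to find in the real data, while the time effect (beta[2]) was clearly present in both groups.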
While it is likely that much of the increase over time can be explained by test-retest effects (i.e. simply taking the test twice), we wanted to see if any of this variance might be explained by something specific to meditation. To do this we entered stop accuracy and error-awareness into a linear model comparing the difference in slope between each group’s practice and the EAT measures. Here we saw that practice predicted stop-accuracy improvement only in the meditation group, and that this relationship was statistically greater than in the reading group:
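The slope comparison works the same way: regress improvement on practice minutes, group, and their product, where the interaction coefficient tests whether the practice slope is steeper under MT (again, purely simulated and hypothetical numbers):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 30
group = np.repeat([0, 1], n)              # 0 = reading, 1 = MT
practice = rng.uniform(100, 600, 2 * n)   # total minutes practiced

# Hypothetical data where improvement tracks practice only under MT
# (slope of 0.01 points per minute); the control slope is flat.
improvement = 2 + 0.01 * practice * group + rng.normal(0, 1, 2 * n)

# Model: improvement ~ practice + group + practice:group
X = np.column_stack([np.ones(2 * n), practice, group, practice * group])
beta, *_ = np.linalg.lstsq(X, improvement, rcond=None)

print(f"control slope: {beta[1]:.4f}, extra MT slope: {beta[3]:.4f}")
```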
These results led us to conclude that while we did not observe a treatment effect of MT on the error-awareness task, the strong time effects and the MT-only correlation with practice suggested that improvement may reflect the “active ingredients” of MT in the meditation group but motivation-driven artifacts in the reading group. Sadly we cannot conclude this firmly- we’d have needed a third, passive control group for comparison. Thankfully this was pointed out to us by a kind reviewer, who noted that the argument is a bit like having one’s cake and eating it too, so we’ll restrict ourselves to saying that the EAT finding serves as a nice validation of the active control- both groups improved on something- and as a potential indicator of a stop-related treatment mechanism.
While the EAT served as a behavioral measure of basic cognitive processes, we also wanted to examine the neural correlates of attention and emotion, to see how they might respond to mindfulness training in our intervention. For this we partnered with Karina Blair at the National Institute of Mental Health to bring the Affective Stroop task (shown below) to Denmark.
The Affective Stroop Task (AST) builds on a basic “number-counting Stroop” to investigate the neural correlates of attention, emotion, and their interaction. The instruction is simply: count the number of numbers in the first display, count the number of numbers in the second display, and decide which display contained more numbers. As you can see in the trial example above, conflict in the task (trial-type “C”) is driven by incongruence between the Arabic numeral (e.g. “4”) and the numerosity of the display (a display of five “4”s). Meanwhile, each trial includes negative or neutral emotional stimuli selected from the International Affective Picture System. Using the AST, we were able to examine the neural correlates of executive attention by contrasting task (B + C > A) and emotion (negative > neutral) trials.
Since we were especially interested in changes over time, we expanded on these contrasts to examine increased or decreased neural response between the first and last scans of the study. To do this we relied on two levels of analysis (standard in imaging), where at the “first” or “subject level” we examined differences between the two time points for each condition (task and emotion), within each subject. We then compared these time-related effects (contrast images) between each group using a two-sample t-test with total minutes of practice as a co-variate. To assess the impact of meditation on performing the AST, we examined reaction times in a model with factors group, time, task, and emotion. In this way we were able to examine the impact of MT on neural activity and behavior while controlling for the kinds of artifacts discussed in the previous section.
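Schematically, the second-level comparison boils down to a per-voxel two-sample test on the subjects' first-level contrast images. Here is a minimal simulated sketch (fake data and effect sizes; the practice covariate, which entered our real model as an extra regressor, is omitted for brevity):

```python
import numpy as np

rng = np.random.default_rng(4)
n_sub, n_vox = 20, 1000

# First level: one 'contrast image' per subject, encoding the
# time2 - time1 change in task-related activity at each voxel.
con_mt = rng.normal(0.3, 1, (n_sub, n_vox))     # assumed MT effect of 0.3
con_read = rng.normal(0.0, 1, (n_sub, n_vox))   # no change in controls

def two_sample_t(x, y):
    """Pooled-variance two-sample t statistic at every voxel."""
    nx, ny = len(x), len(y)
    pooled = ((nx - 1) * x.var(0, ddof=1) + (ny - 1) * y.var(0, ddof=1)) / (nx + ny - 2)
    return (x.mean(0) - y.mean(0)) / np.sqrt(pooled * (1 / nx + 1 / ny))

t_map = two_sample_t(con_mt, con_read)  # one t value per voxel
```

In practice the resulting t-map would then be thresholded with an appropriate multiple-comparisons correction across voxels.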
Our analysis revealed three primary findings. First, the reaction-time analysis revealed a significant effect of MT on Stroop conflict, the difference between reaction times to incongruent versus congruent trials. Second, we did not observe any effect on emotion-related RTs- although both groups sped up significantly on negative versus neutral trials (a time effect), this increase was equivalent in both groups. Below you can see the Stroop-conflict-related RTs:
This became particularly interesting when we examined the neural response to these conditions and again observed a pattern of overall BOLD signal increases in the dorsolateral prefrontal cortex during task performance (below):
Interestingly, we did not observe significant overall increases in response to emotional stimuli; just being in the MT group didn’t seem to be enough to change emotional processing. However, when we examined whole-brain correlations between amount of practice and increased BOLD response to negative emotion, we found a striking pattern of fronto-insular BOLD increases to negative images, similar to patterns seen in previous studies of compassion and mindfulness practice:
When we put all this together, a pattern began to emerge. Overall it seemed like MT had a relatively clear impact on attention and cognitive control. Practice-correlated increases on EAT stop accuracy, reduced Affective Stroop conflict, and increases in dorsolateral prefrontal cortex responses to task all point towards plasticity at the level of executive function. In contrast our emotion-related findings suggest that alterations in affective processing occurred only in MT participants with the most practice. Given how little we know about the training trajectories of cognitive vs affective skills, we felt that this was a very interesting result.
Conclusion: the more you do, the more you get?
For us, the first conclusion from all this was that when you control for motivation and a host of other confounds, brief MT appears to primarily train attention-related processes. Secondly, alterations in affective processing seemed to require more practice to emerge. This is interesting both for understanding the neuroscience of training and for the effective application of MT in clinical settings. While a great deal of future research is needed, it is possible that the affective system is generally more resistant to intervention than attention. It may be that altering affective processes depends upon first establishing increased control over executive function. Previous research suggests that attention is largely flexible, amenable to a variety of training regimens, of which MT is only one. However, we are also becoming increasingly aware that training attention alone does not seem to translate directly into benefits in even closely related domains.
As we begin to realize that many societal and health problems cannot be solved through medication or attention-training alone, it becomes clear that techniques to increase emotional function and well-being are crucial for future development. I am reminded of a quote overheard at the Mind & Life Summer Research Institute and attributed to the Dalai Lama. Supposedly, when asked about the goal of developing meditation programs in the West, HHDL replied that what was truly needed in the West was not “cognitive training, as (those in the west) are already too clever. What is needed rather is emotion training, to cultivate a sense of responsibility and compassion”. When we consider falling rates of empathy in medical practitioners and their link to health outcomes, I think we do need to explore the role of emotional and embodied skills in supporting a wide array of functions in cognition and well-being. While emotional development is likely to depend upon executive function, given all the recent failures to show transfer from training these domains to even closely related ones, I suspect we need to begin including affective processes in our understanding of optimal learning. If these differences hold, then it may be important to reassess our interventions (mindful and otherwise), developing training programs that are customized in terms of the intensity, duration, and content appropriate for any given context.
Of course, rather than end on such an inspiring note, I should point out that like any study, ours is not without flaws (you’ll have to read the paper to find out how many ;) ) and is really just an initial step. We made significant progress in replicating common neural and behavioral effects of MT while controlling for important confounds, but in retrospect the study could have been strengthened by including measures that would better distinguish the precise mechanisms, for example a measure of body awareness or empathy. Another element that struck me was how much I wish we’d had a passive control group, which could have helped flesh out how much of our time effect was instrument reliability versus motivation. As far as I am concerned, the study was a success and I am happy to have done my part to push mindfulness research towards methodological clarity and rigor. In the future I know others will continue this trend and investigate exactly what sorts of practice are needed to alter brain and behavior, and just how these benefits are accomplished.
In the near-future, I plan to give mindfulness research a rest. Not that I don’t find it fascinating or worthwhile, but rather because during the course of my PhD I’ve become a bit obsessed with interoception and meta-cognition. At present, it looks like I’ll be spending my first post-doc applying predictive coding and dynamic causal modeling to these processes. With a little luck, I might be able to build a theoretical model that could one day provide novel targets for future intervention!
Going through my RSS backlog today, I was excited to see Kilpatrick et al.’s “Impact of Mindfulness-Based Stress Reduction Training on Intrinsic Brain Connectivity” appear in this week’s early view Neuroimage. Although I try to keep my own work focused on primary research in cognition and connectivity, mindfulness-training (MT) is a central part of my research. Additionally, there are few published findings on intrinsic connectivity in this area. Previous research has mainly focused on between-group differences in anatomical structure (gray and white matter for example) and task-related activity. A few more recent studies have gone as far as to randomize participants into wait-listed control and MT groups.
While these studies are interesting, they are of course limited in scope by several factors. My supervisor Antoine Lutz emphasizes that in addition to our active-controlled research here in Århus, his group at Wisconsin-Madison and others are actively preparing such datasets. Active controls are simply ‘mock’ interventions (or real ones) designed to control for every possible aspect of being involved in an intervention (placebo, community, motivation) in order to isolate the variables specific to that treatment (in this case meditation, but not sitting, breathing, or feeling special). Active controls are important because there is a great deal of research demonstrating that cognition itself is susceptible to placebo-like motivational effects. All in all, I’ve seen several active-controlled, cognitive-behavioral studies in review that suggest we should be strongly skeptical of any non-actively-controlled findings. While I can’t discuss these in detail, I will mention some of these issues in my review of the Neuroimage manuscript. It suffices to say, however, that if you are working on a passively-controlled study in this area, you had better get it out fast, as you can expect reviewers to greatly tighten their expectations in the coming months as more and more rigorous papers appear. As Sara Lazar put it during my visit to her lab last summer, “the low-hanging fruit of MBSR brain research are rapidly vanishing”. Overall this is a good thing for the community, and we’ll see why in a moment.
Now let us turn to the paper at hand. Kilpatrick et al start with a standard summary of MBSR and rsfMRI research, focusing on findings indicating that MBSR trains focused attention, sensory introspection/interoception, and perception. They briefly review now well-established findings indicating that rsfMRI is sensitive to training-related changes, including studies demonstrating the sensitivity of the resting state to conditions such as fatigue, eyes-open vs eyes-closed, and recent sleep. This is all well and good, but I think it’s a bit odd when we see just how they collected their data.
Briefly, they recruited 32 healthy adults for randomization to MBSR and waitlist control groups. Participants then complete the Mindful Attention Awareness Scale (MAAS), and the MBSR group receives 8 weeks of diary-logged standard MBSR training. After training, participants are recalled for the rsfMRI scan. An important detail here is that participants are not scanned before and after training, rendering the fMRI portion of the experiment closer to a cross-sectional than a true longitudinal design. At the time of scan, the researchers then administer two ‘task-free states’, with and without auditory white noise. The authors indicate that the noise condition is included “to enable new analysis methods not conducted here”, presumably to average out scanner-noise-related effects. They later indicate no differences between the two conditions, which causes me to ask how much here is meditation-specific vs focusing-on-scanner-noise-specific. Finally, they administer the ‘task-free’ states with a slight twist:
“During this baseline scan of about 5 min, we would like you to again stay as still as possible and be mindfully aware of your surroundings. Please keep your eyes closed during this procedure. Continue to be mindfully aware of whatever you notice in your surroundings and your own sensations. Mindful awareness means that you pay attention to your present moment experience, in this case the changing sounds of the scanner/changing background sounds played through the headphones, and to bring interest and curiosity to how you are responding to them.”
While the manipulation makes sense given the experimenters’ hypothesis concerning sensory processing, an ongoing controversy in resting-state research is just what it is that constitutes ‘rest’. Research here suggests that functional connectivity is sensitive to task instructions and variations in visual stimulation, and many complain about the lack of specificity within different rest conditions. Kilpatrick et al’s manipulation makes sense given that what they really want to see is meditation-related alterations, but it’s a dangerous leap without first establishing the relationship between ‘true rest’ and their ‘auditory meditation’ condition. Research on the impact of scanner noise indicates some degree of noise-related nuisance effects, and also some functionally significant effects. If you’ve never been in an MR experiment, the scanner is LOUD. During my first scan I actually started feeling claustrophobic due to the oppressive machine-gun-like noise of the gradient coil. Anyway, it’s really troubling that Kilpatrick et al don’t include a totally task-free condition for comparison, and I’m hesitant to call this a resting-state finding without further clarification.
The study is extremely interesting, but it’s important to note its limitations:
1. Lack of active control- groups are not controlled for motivation.
2. No pre/post scan.
3. Novel resting state without a comparison condition.
4. Findings are discussed as ‘training related’ without report of correlation with reported practice hours.
5. Anti-correlations reported with global-signal nuisance regression; no discussion of possible regression-related inducement (see edit).
6. Discussion of findings is unclear; reported as greater DMN x auditory correlation, but the independent component includes large portions of the salience network.
Ultimately they identify an “auditory/salience” independent component network (ICN) (primary auditory cortex, STG, posterior insula, ACC, and lateral frontal cortex) and then conduct seed-regression analyses of the network with areas of the DMN and dorsal attention network (DAN). I find it highly strange that they pick up a network that seems to conflate primary sensory and salience regions, as do the researchers, who state: “Therefore, the ICN was labeled as “auditory/salience”. It is unclear why the components split differently in our sample, perhaps the instructions that brought attention to auditory input altered the covariance structure somewhat.” Given the lack of motivational control in the study, these issues begin to pile onto one another and I am not sure what we can really conclude. They further find that the MBSR group demonstrates greater “auditory/salience x DMN connectivity” and “greater visual and auditory functional connectivity” (see image below). They also report several increased anti-correlations between the aud/sal network, dMPFC, and visual regions. I find this to be an extremely tantalizing finding, as it would reflect a decrease in processing automaticity amongst the SAL, CEN, and DMN networks. There are, however, some serious problems with these kinds of analyses that the authors don’t address, and so we again must reserve any strong conclusions. Here is what Kilpatrick et al conclude:
“The current findings extend the results of prior studies that showed meditation-related changes in specific brain regions active during attention and sensory processing by providing evidence that MBSR trained compared to untrained subjects, during a focused attention instruction, have increased connectivity within sensory networks and between regions associated with attentional processes and those in the attended sensory cortex. In addition they show greater differentiation between regions associated with attentional processes and the unattended sensory cortex as well as greater differentiation between attended and unattended sensory networks”
As is typical, the list of findings is quite long and I won’t bother re-stating it all here. Given the resting instructions it seems clear that the freshly post-MBSR participants are likely to have engaged a pretty dedicated set of cognitive operations during the scan. Yet it’s totally unclear what the control group would do given these contemplative instructions. Presumably they’d just lie in the scanner and try not to tune out the noise- but you can see here how it’s not clear that these conditions are really that comparable without having some idea of what’s going on. In essence what you (might) have here is one group actually doing something (meditation) and the other group not doing much at all. Ideally you want to see how training impacts the underlying process in a comparable way. Motivation has been repeatedly linked to BOLD signal intensity and in this case, it could very well be that these findings are simple artifacts of motivation to perform. If one group is actually practicing mindfulness and the other isn’t, you have not isolated the variable of interest. The authors could have somewhat alleviated this by including data from the additional pain task (“not reported here”) and/or at least giving us a correlation of the findings with the MAAS scale. I emphasize that I do find the findings of this paper interesting- they map extremely well onto my own hypotheses about how RSNs interact with mindfulness training, but that we must interpret them with caution.
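For readers unfamiliar with seed-based analyses of the sort used here, the core computation is just the correlation of a seed region's mean time series with every other voxel. A toy sketch on simulated data (not the authors' actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(3)
t, n_vox = 150, 500

data = rng.normal(0, 1, (t, n_vox))   # fake voxel time series (time x voxels)
seed = data[:, :50].mean(axis=1)      # mean series of a 'seed' region

# Correlate the seed with every voxel to get a connectivity map,
# then Fisher z-transform the r values for group-level statistics.
data_c = data - data.mean(axis=0)
seed_c = seed - seed.mean()
r = (data_c.T @ seed_c) / (np.linalg.norm(data_c, axis=0) * np.linalg.norm(seed_c))
z = np.arctanh(r)

# Voxels inside the seed region correlate more strongly with the seed
# than unrelated voxels outside it, as expected.
```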
Overall I think this was a project with a strong theoretical motivation and some very interesting ideas. One problem with looking at state mindfulness in the scanner is the cramped, noisy environment, and I think Kilpatrick et al had a great idea in their attempt to use the noise itself as a manipulation. Further, the findings make a good deal of sense. Still, given the above limitations, it’s important to be really careful with our conclusions. At best, this study warrants an extremely rigorous follow-up, and I wish Neuroimage had published it with a bit more information, such as the status of any rest-MAAS correlations. Anyway, this post has gotten quite long and I think I’d best get back to work- for my next post I think I’ll go into more detail about some of the issues confronting resting-state research (what is “rest”?) and mindfulness research (the role of active controls for community, motivation, and placebo effects).
edit: just realized I never explained limitation #5. See my “beautiful noise” slides (previous post) regarding the controversy of global signal regression and anti-correlation. Simply put, there is somewhat convincing evidence that this procedure (designed to eliminate low-frequency nuisance co-variates) may actually mathematically induce anti-correlations where none exist, probably due to regression to the mean. While it’s not a slam-dunk (see response by Fox et al), it’s an extremely controversial area and all anti-correlative findings should be interpreted in light of this possibility.
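The inducement problem is easy to demonstrate numerically. In the toy simulation below, two regions share a common driver, so their true correlation is strictly positive; regressing out their mean as a 'global signal' forces the residuals into a strong anti-correlation (simulated data, not real fMRI):

```python
import numpy as np

rng = np.random.default_rng(2)
t = 200  # time points

# Two regions driven by a shared signal plus independent noise:
# their raw time series are genuinely positively correlated.
shared = rng.normal(0, 1, t)
a = shared + 0.5 * rng.normal(0, 1, t)
b = shared + 0.5 * rng.normal(0, 1, t)
global_sig = (a + b) / 2  # crude 'global signal' over our two regions

def regress_out(x, g):
    """Remove the best linear fit of g from x (both mean-centered)."""
    x, g = x - x.mean(), g - g.mean()
    return x - (np.dot(x, g) / np.dot(g, g)) * g

a_clean = regress_out(a, global_sig)
b_clean = regress_out(b, global_sig)

r_raw = np.corrcoef(a, b)[0, 1]
r_gsr = np.corrcoef(a_clean, b_clean)[0, 1]
print(f"before GSR: r = {r_raw:.2f}, after GSR: r = {r_gsr:.2f}")
```

With only two regions the effect is extreme, but the same mechanism shifts the whole correlation distribution negative in real data, which is why anti-correlations observed after GSR deserve cautious interpretation.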
If you like this post please let me know in the comments! If I can get away with rambling about this kind of stuff, I’ll do so more frequently.