Neuroconscience

The latest thoughts, musings, and data in cognitive science and neuroscience.

Tag: methods

oh BOLD where art thou? Evidence for a “mm-scale” match between intracortical and fMRI measures.

A frequently discussed problem with functional magnetic resonance imaging (fMRI) is that we don’t really understand how the hemodynamic ‘activations’ measured by the technique relate to actual neuronal phenomena. This is because fMRI measures the Blood-Oxygenation-Level Dependent (BOLD) signal, a complex vascular response to neuronal activity. As such, neuroscientists can easily get worried about all sorts of non-neural contributions to the BOLD signal, such as subjects gasping for air, pulse-related motion artefacts, and other generally uninteresting effects. We can even start to worry that, out in the lab, the BOLD signal may not actually measure any particular aspect of neuronal activity, but rather some overly diluted, spatially unconstrained filter that simply lacks the key information for understanding brain processes.

Given that we generally use fMRI over neurophysiological methods (e.g. M/EEG) when we want to say something about the precise spatial generators of a cognitive process, addressing these ambiguities is of utmost importance. Accordingly, a variety of recent papers have used multi-modal techniques, for example combining optogenetics, direct recordings, and fMRI, to assess exactly which kinds of neural events contribute to alterations in the BOLD signal and its spatial (mis)localization. Now a paper published today in NeuroImage addresses this question by combining high-resolution 7-tesla fMRI with electrocorticography (ECoG) to determine the spatial overlap of finger-specific somatomotor representations captured by the two measures. Starting from the title’s claim that “BOLD matches neuronal activity at the mm-scale”, we can already be sure this paper will generate a great deal of interest.

From Siero et al (In Press)

As shown above, the authors managed to record high-resolution (1.5 mm) fMRI in 2 subjects implanted with 23 x 11 mm intracranial electrode arrays during a simple finger-tapping task. Motor responses from each finger were recorded and used to generate somatotopic maps of brain responses specific to each finger. This analysis was repeated in both ECoG and fMRI, which were then spatially co-registered so the authors could directly compare the spatial overlap between the two methods. What they found appears, at first glance, to be quite impressive:
From Siero et al (In Press)

Here you can see the color-coded t-maps for the BOLD activations to each finger (top panel, A), the differential contrast contour maps for the ECoG (middle panel, B), and the maximum activation foci for both measures with respect to the electrode grid (bottom panel, C), in two individual subjects. Comparing the spatial maps for the index finger and thumb suggests a rather strong consistency, both in terms of the topology of each effect and the location of their foci. Interestingly, the little-finger measurements seem somewhat more displaced, although similar topographic features can be seen in both. Siero and colleagues further compute the spatial correlation (Spearman’s R) across measures for each individual finger, finding an average correlation of .54, with a range of .31–.81, a moderately high degree of overlap between the measures. Finally, the optimal shift needed to reduce the spatial difference between the measures was computed and found to be between 1 and 3.1 millimetres, suggesting a slight systematic bias between ECoG and fMRI foci.
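For readers who want a feel for what this comparison involves, here is a minimal sketch, in Python with toy arrays rather than the authors’ actual data or pipeline, of computing a Spearman correlation between two co-registered activation maps and searching for the in-plane shift that best aligns them.

```python
import numpy as np
from scipy.stats import spearmanr

def spatial_correlation(map_a, map_b):
    """Spearman correlation between two co-registered 2D activation maps."""
    rho, _ = spearmanr(map_a.ravel(), map_b.ravel())
    return rho

def best_shift(map_a, map_b, max_shift=5):
    """Integer (row, col) shift of map_b that maximizes correlation with map_a.

    np.roll wraps values around the edges; a real analysis would handle borders
    and sub-voxel shifts more carefully. This is purely illustrative.
    """
    best = (0, 0, -np.inf)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(map_b, dy, axis=0), dx, axis=1)
            rho = spatial_correlation(map_a, shifted)
            if rho > best[2]:
                best = (dy, dx, rho)
    return best  # (shift_y, shift_x, correlation)

# Toy example: two noisy versions of the same "finger map", one shifted by 2 voxels
rng = np.random.default_rng(0)
truth = rng.normal(size=(32, 32))
fmri_map = truth + 0.5 * rng.normal(size=truth.shape)
ecog_map = np.roll(truth, 2, axis=0) + 0.5 * rng.normal(size=truth.shape)
print(best_shift(fmri_map, ecog_map))
```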

Are ‘We the BOLD’ ready to break out the champagne and get back to scanning in comfort, spatial anxieties at ease? While this is certainly a promising result, suggesting that the BOLD signal indeed captures functionally relevant neuronal parameters with reasonable spatial accuracy, it should be noted that the result is based on a very-best-case scenario, and that a considerable degree of unique spatial variance remains between the two methods. The data presented by Siero and colleagues have undergone a number of crucial pre-processing steps that are likely to influence their results: the high spatial resolution, the manual removal of draining veins, the restriction of the analysis to grey-matter voxels only, and the lack of spatial smoothing all make it difficult to generalize from these results to the standard 3-tesla whole-brain pipeline. Indeed, even under these best-case criteria, the results still indicate up to 3 mm of systematic bias in the fMRI results. Though we can be glad the bias was systematic and not random, 3 mm is still quite a lot in the brain. On this point, the authors note that the stability of the bias may point towards a systematic mis-registration of the ECoG and fMRI data and/or possible rigid-body deformations introduced by the implantation of the electrodes, issues that could be addressed in future studies. Ultimately it remains to be seen whether similar reliability can be obtained for less robust paradigms than finger wagging, obtained in standard, sub-optimal imaging scenarios. But for now I’m happy to let fMRI have its day in the sun, give or take a few millimetres.

Siero, J. C. W., Hermes, D., Hoogduin, H., Luijten, P. R., Ramsey, N. F., & Petridou, N. (2014). BOLD matches neuronal activity at the mm scale: A combined 7T fMRI and ECoG study in human sensorimotor cortex. NeuroImage. doi:10.1016/j.neuroimage.2014.07.002

 

When is expectation not a confound? On the necessity of active controls.

Learning and plasticity are hot topics in neuroscience. Whether exploring old-world wisdom or new-age science fiction, the possibility that playing videogames might turn us into attention superheroes, or that practicing esoteric meditation techniques might heal troubled minds, is an exciting avenue for research. Indeed, findings suggesting that exotic behaviors or novel therapeutic treatments might radically alter our brains (and behavior) are ripe for sensational science-fiction headlines purporting vast brain benefits. For those of you not totally bored of methodological crises, here we have one brewing anew. You see, the standard recommendation for those interested in intervention research is the active-controlled experimental design. Unfortunately, in both clinical research on psychotherapy (including meditation) and the more sci-fi areas of brain training and gaming, the use of active controls is rare at best compared to the more convenient (but causally uninformative) passive control group. Now a new article in Perspectives on Psychological Science suggests that even standard active controls may not be sufficient to rule out confounds in the treatment effect of interest.

Why is that? And why exactly do we need active controls in the first place? As the authors clearly point out, what you want to show with such a study is the causal efficacy of the treatment of interest. Quite simply, this means that the thing you think should have some interesting effect should actually be causally responsible for creating that effect. If you want to argue that standing upside down for twenty minutes a day will make me better at playing videogames in Australia, it must be shown that it is actually standing upside down that causes my improved performance down under. If my improved performance on Minecraft Australian Edition is simply a product of my belief in the power of standing upside down, or my expectation that standing upside down is a great way to best kangaroo-creepers, then we have no way of determining what actually produced that performance benefit. Research on placebos and the power of expectations shows that these kinds of subjective beliefs can have a big impact on everything from attentional performance to mortality rates.

Useful flowchart from Boot et al on whether or not a study can make causal claims for treatment.


Typically researchers attempt to control for such confounds through the use of a control group performing a task as similar as possible to the intervention of interest. But how do we know participants in the two groups don’t end up with different expectations about how they should improve as a result of the training? Boot et al point out that without actually measuring these variables, we have no way of knowing for sure that expectation biases don’t produce our observed improvements. They then provide a rather clever demonstration of their concern, in an experiment where participants viewed videos of various cognitive tests as well as videos of a training task they might later receive, in this case either the first-person shooter Unreal Tournament or the spatial puzzle game Tetris. Finally, they asked the participants in each group which tests they thought they’d do better on as a result of the training. Importantly, the authors show not only that UT and Tetris led to significantly different expectations, but also that those expected benefits were specific to the modality of the trained and tested tasks. Thus participants who watched the action-intensive Unreal Tournament videos expected greater improvements on tests of reaction time and visual performance, whereas participants viewing Tetris rated themselves as likely to do better on tests of spatial memory.

This is a critically important finding for intervention research. Many researchers, myself included, have often thought of expectation and demand-characteristic confounds in a rather general way. Until recently I wouldn’t have expected the expectation bias to go much beyond a general “I’m doing something effective” belief. Boot et al show that our participants are a good deal cleverer than that, forming expectations-for-improvement that map onto specific dimensions of training. This means that to the degree that an experimenter’s hypothesis can be discerned from either the training or the test, participants are likely to form unbalanced expectations.

The good news is that the authors provide several reasonable fixes for this dilemma. The first is simply to actually measure participants’ expectations, specifically in relation to the outcome measures of interest. Another useful suggestion is to run pilot studies ensuring that the two treatments do not evoke differential expectations, or similarly to check that your outcome measures are not subject to these biases. Boot and colleagues throw down the proverbial glove, daring readers to attempt experiments where the “control condition” actually elicits greater expectations yet the treatment effect is preserved. Further common concerns, such as worries about balancing false positives against false negatives, are addressed at length.
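As a concrete illustration of that first fix, here is a minimal sketch of the kind of check one could run on pilot expectation ratings. The numbers and group labels are entirely hypothetical; this is not Boot et al’s procedure or data.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
# Hypothetical 1-7 expectation-of-improvement ratings for a reaction-time outcome,
# collected after participants viewed a preview of their assigned training task
expect_action_game = rng.integers(4, 8, size=30)  # e.g. after an action-game preview
expect_puzzle_game = rng.integers(2, 6, size=30)  # e.g. after a puzzle-game preview

t, p = ttest_ind(expect_action_game, expect_puzzle_game)
print(f"t = {t:.2f}, p = {p:.4f}")  # a small p would flag unbalanced expectations
```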

The entire article is a great read, timely and full of excellent suggestions for caution in future research. It also brought something I’ve been chewing on for some time quite clearly into focus. From the general perspective of learning and plasticity, I have to ask: at what point is an expectation no longer a confound? Boot et al give an interesting discussion on this point, in which they suggest that even in the case of balanced expectations and positive treatment effects, an expectation-dependent response (in which outcome correlates with expectation) may still give cause for concern about the causal efficacy of the trained task. This is a difficult question that I believe ventures far into the territory of what exactly constitutes the minimal necessary features of learning. As the authors point out, placebo and expectation effects are “real” products of the brain, with serious consequences for behavior and treatment outcome. Yet even in the medical community there is a growing understanding that such effects may be essential parts of the causal machinery of healing.

Possible outcome of a training experiment, in which the control shows no dependence between expectation and outcome (top panel) and the treatment of interest shows dependence (bottom panel). Boot et al suggest that such a case may invalidate causal claims for treatment efficacy.


To what extent might this also be true of learning or cognitive training? For sure we can assume that expectations shape training outcomes; otherwise the whole point about active controls would be moot. But can one really have meaningful learning if there is no expectation to improve? I realize that from an experimental/clinical perspective, the question is not “is expectation important for this outcome” but “can we observe a treatment outcome when expectations are balanced”. Still, when we begin to argue that the observation of expectation-dependent responses in a balanced design might invalidate our outcome findings, I have to wonder if we are at risk of valuing methodology over phenomena. If expectation is a powerful, potentially central mechanism in the causal apparatus of learning and plasticity, we shouldn’t be surprised when even efficacious treatments are modulated by such beliefs. In the end I am left wondering if this is simply an inherent limitation of our attempt to apply the reductive apparatus of science to increasingly holistic domains.

Please do read the paper, as it is an excellent treatment of a critically ignored issue in the cognitive and clinical sciences. Anyone undertaking related work should expect this reference to appear in reviewers’ replies in the near future.

EDIT:
Professor Simons, a co-author of the paper, was nice enough to answer my question on Twitter. Simons pointed out that a study that balanced expectations, found group outcome differences, and further found correlations of those differences with expectation could conclude that the treatment was causally efficacious, but that it also depends on expectations (effect + expectation). This would obviously be superior to an unbalanced design or one without measurement of expectation, as it would actually tell us something about the importance of expectation in producing the causal outcome. Be sure to read through the very helpful FAQ they’ve posted as an addendum to the paper, which covers these questions and more in greater detail. Here is the answer to my specific question:

What if expectations are necessary for a treatment to work? Wouldn’t controlling for them eliminate the treatment effect?

No. We are not suggesting that expectations for improvement must be eliminated entirely. Rather, we are arguing for the need to equate such expectations across conditions. Expectations can still affect the treatment condition in a double-blind, placebo-controlled design. And, it is possible that some treatments will only have an effect when they interact with expectations. But, the key to that design is that the expectations are equated across the treatment and control conditions. If the treatment group outperforms the control group, and expectations are equated, then something about the treatment must have contributed to the improvement. The improvement could have resulted from the critical ingredients of the treatment alone or from some interaction between the treatment and expectations. It would be possible to isolate the treatment effect by eliminating expectations, but that is not essential in order to claim that the treatment had an effect.

In a typical psychology intervention, expectations are not equated between the treatment and control condition. If the treatment group improves more than the control group, we have no conclusive evidence that the ingredients of the treatment mattered. The improvement could have resulted from the treatment ingredients alone, from expectations alone, or from an interaction between the two. The results of any intervention that does not equate expectations across the treatment and control condition cannot provide conclusive evidence that the treatment was necessary for the improvement. It could be due to the difference in expectations alone. That is why double blind designs are ideal, and it is why psychology interventions must take steps to address the shortcomings that result from the impossibility of using a double blind design. It is possible to control for expectation differences without eliminating expectations altogether.

Correcting your naughty insula: modelling respiration, pulse, and motion artifacts in fMRI

Important update: thanks to commenter “DS”, I discovered that my respiration-related data were strongly contaminated due to mechanical error. The belt we used is very susceptible to becoming uncalibrated, for example if the subject moves or breathes very deeply. When looking at the raw timecourse of respiration I could see that many subjects, including the one displayed here, show a great deal of “clipping” in the timeseries. For the final analysis I will not use the respiration regressors, but rather just the pulse and motion. Thanks DS!

As I’m working my way through my latest fMRI analysis, I thought it might be fun to share a little bit of it here. Right now I’m coding up a batch pipeline for data from my Varela-award project, in which we compared “adept” meditation practitioners with motivation-, IQ-, age-, and gender-matched controls on a response-inhibition and error-monitoring task. One thing that came up in the project proposal meeting was a worry that, since meditation practitioners spend so much time working with the breath, they might breathe differently either at rest or during the task. As I’ve written about before, respiration and other related physiological variables, such as cardiac-pulsation-induced motion, can seriously impact your fMRI results (when your heart beats, the veins in your brain pulsate, creating slight but consistent and troublesome MR artifacts). As you might expect, these artifacts tend to be worse around the main draining veins of the brain, several of which cluster around the frontoinsular and medial-prefrontal/anterior cingulate cortices. As these regions are important for response inhibition and are frequently reported in the meditation literature (without physiological controls), we wanted to try to control for these variables in our study.

Disclaimer: I’m still learning about noise modelling, so apologies if I mess up the theory/explanation of the techniques used! I’ve left things a bit vague for that reason. See the bottom of the article for references and further reading. To encourage myself to post more of these “open-lab notes” posts, I’ve kept the style here very informal, so apologies for typos or snafus. :D

To measure these signals, we used the respiration belt and pulse monitor that come standard with most modern MRI machines. The belt is just a little elastic hose that you strap around the chest wall of the subject, where it records expansions and contractions of the chest to give a time series corresponding to respiration; the pulse monitor is a standard finger clip. Although I am not an expert on physiological noise modelling, I will do my best to explain the basic effects you want to model out of your data. These “non-white” noise signals include pulsation- and respiration-induced motion (when you breathe, you tend to nod your head just slightly along the z-axis), typical motion artifacts, and variability of pulsation and respiration. To do this I fed my physiological parameters into an in-house function written by Torben Lund, which incorporates a RETROICOR transformation of the pulsation and respiration timeseries. We don’t just use the raw timeseries, due to signal aliasing: the physio data need to be shifted so that each physiological event corresponds to a TR. The function also calculates the respiration volume per time (RVT), a measure developed by Rasmus Birn, to model the variability in physiological parameters [1]. Variability in respiration and pulse volume (if one group of subjects tends to inhale sharply for some conditions but not others, for example) is more likely to drive BOLD artifacts than absolute respiratory volume or frequency. Finally, as is standard, I included the realignment parameters to model subject motion-related artifacts. Here is a shot of my monster design matrix for one subject:

DM_NVR
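Before unpacking the columns, here is a minimal sketch of the general idea behind RETROICOR regressors (see reference 3 below): each scan is assigned a phase within the cardiac or respiratory cycle, and that phase is expanded in a low-order Fourier basis. This is illustrative Python with made-up peak times, not Torben Lund’s actual function.

```python
import numpy as np

def retroicor_regressors(peak_times, scan_times, order=2):
    """Low-order Fourier expansion of physiological phase at each scan (RETROICOR-style).

    peak_times : times (s) of detected cardiac R-peaks or respiratory peaks
    scan_times : acquisition time (s) of each volume (or slice)
    """
    phases = np.zeros(len(scan_times))
    for i, t in enumerate(scan_times):
        # phase = fraction of the current physiological cycle elapsed at time t
        prev = peak_times[peak_times <= t]
        nxt = peak_times[peak_times > t]
        if len(prev) == 0 or len(nxt) == 0:
            continue  # outside the recorded physio range; leave phase at 0
        phases[i] = 2 * np.pi * (t - prev[-1]) / (nxt[0] - prev[-1])
    regs = []
    for k in range(1, order + 1):
        regs.append(np.cos(k * phases))
        regs.append(np.sin(k * phases))
    return np.column_stack(regs)  # shape (n_scans, 2 * order)

# Hypothetical example: 300 scans at TR = 2 s, pulse peaks at ~60 bpm
scan_times = np.arange(300) * 2.0
pulse_peaks = np.arange(0, 600, 1.0)
cardiac_regs = retroicor_regressors(pulse_peaks, scan_times, order=2)
print(cardiac_regs.shape)  # (300, 4)
```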

You can see that the first 7 columns model my conditions (correct stops, unaware errors, aware errors, false alarms, and some self-report ratings), the next 20 model the RETROICOR-transformed pulse and respiration timeseries, 41 columns model RVT, 6 the realignment parameters, and finally the session offsets and constant. It’s a big design matrix, but since we have over 1000 degrees of freedom, I’m not too worried about all the extra regressors in terms of loss of power. What would be worrisome is if, for example, stop-related activity correlated strongly with any of the nuisance variables; we can see from the orthogonality plot that in this subject, at least, that is not the case. Now let’s see if we actually have anything interesting left over after we remove all that noise:

stop SPM

We can see that the Stop-related activity looks pretty reasonable, clustering around the motor and premotor cortices, bilateral insula, and DLPFC, all canonical motor-inhibition regions (FWE cluster-corrected, p = 0.05). This is a good sign! Now, what about all those physiological regressors? Are they doing anything of value, or just sucking up our power? Here is the F-contrast over the pulse regressors:

pulse

Here we can see that the peak signal is wrapped right around the pons/upper brainstem. This makes a lot of sense: the area is full of the primary vasculature that ferries blood into and out of the brain. If I were particularly interested in getting signal from the brainstem in this project, I could use a respiration x pulse interaction regressor to better model it [2]. Penny et al find similar results to our cardiac F-test when comparing AR(1) with higher-order AR models [7]. But since we’re really only interested in higher cortical areas, the pulse regressor should be sufficient. We can also see quite a bit of variance explained around the bilateral insula and rostral anterior cingulate. Interestingly, our stop-related activity still contained plenty of significant insula response, so we can feel better that some, but not all, of the signal from that region is actually functionally relevant. What about respiration?

resp

Here we see a ton of variance explained around the occipital lobe. This makes good sense: we tend to nod our heads just slightly back and forth along the z-axis as we breathe. What we are seeing is the motion-induced artifact of that rotation, which is most severe along the back of the head and the periphery of the brain. We see a similar result for the overall motion regressors, but flipped to the front:

Ignore the above: the respiration regressor is not viable due to “clipping”; see the note at the top of the post. Glad I warned everyone that this post was “in progress” :) Respiration effects should be a bit more global, restricted to the ventricles and blood vessels.

motion

Wow, look at all the significant activity! Someone call up Nature and let them know: motion lights up the whole brain! As we would expect, the motion regressors explain a ton of uninteresting variance, particularly around the prefrontal cortex and the periphery.

I still have a ways to go on this project; obviously this is just a single subject, and the results could vary wildly. But I do think even at this point we can start to see that it is quite easy, and desirable, to model these effects in your data (note: we had some technical failure due to the respiration belt being a POS…). I should mention that in SPM, these sources of “non-white” noise are typically modelled using an autoregressive AR(1) model, which is enabled in the default settings (we’ve turned it off here). However, as there is evidence that this model performs poorly at faster TRs (which are now the norm), and that a noise-modelling approach can greatly improve SNR while removing artifacts, we are likely to get better performance out of a nuisance-regression technique as demonstrated here [4]. The next step will be to take these regressors to a second-level analysis, to examine whether the meditation group has significantly more BOLD variance explained by physiological noise than controls do. Afterwards, I will re-run the analysis without any physio parameters to compare the results of both.
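For the curious, here is a minimal sketch of the logic behind those F-contrasts and the planned group comparison: quantify how much variance a block of physiological regressors explains at a voxel by comparing the full model against a reduced model without them. Everything here (array sizes, column indices) is hypothetical Python, not the SPM machinery itself.

```python
import numpy as np

def physio_variance_explained(X, y, physio_cols):
    """F-statistic and extra R^2 for a block of physiological nuisance regressors.

    Compares the full GLM against a reduced model with the physio columns removed,
    which is the logic behind an F-contrast over those columns.
    """
    keep = [c for c in range(X.shape[1]) if c not in set(physio_cols)]

    def rss(design):
        beta, *_ = np.linalg.lstsq(design, y, rcond=None)
        resid = y - design @ beta
        return float(resid @ resid)

    rss_full, rss_reduced = rss(X), rss(X[:, keep])
    df1 = len(physio_cols)
    df2 = X.shape[0] - X.shape[1]
    f_stat = ((rss_reduced - rss_full) / df1) / (rss_full / df2)
    tss = float(((y - y.mean()) ** 2).sum())
    extra_r2 = (rss_reduced - rss_full) / tss
    return f_stat, extra_r2

# Hypothetical single-voxel example: 1000 scans, 74 regressors,
# with some variance genuinely driven by a "pulse" column.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 74))
y = 2.0 * X[:, 10] + rng.normal(size=1000)
print(physio_variance_explained(X, y, physio_cols=list(range(7, 27))))
```

Per-subject values like the extra R² could then be carried to a second-level two-sample test comparing meditators and controls, as described above.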

References:


1. Birn RM, Diamond JB, Smith MA, Bandettini PA.
Separating respiratory-variation-related fluctuations from neuronal-activity-related fluctuations in fMRI.
Neuroimage. 2006 Jul 15;31(4):1536-48. Epub 2006 Apr 24.

2. Brooks J.C.W., Beckmann C.F., Miller K.L. , Wise R.G., Porro C.A., Tracey I., Jenkinson M.
Physiological noise modelling for spinal functional magnetic resonance imaging studies
NeuroImage, in press. doi:10.1016/j.neuroimage.2007.09.018

3. Glover GH, Li TQ, Ress D.
Image-based method for retrospective correction of physiological motion effects in fMRI: RETROICOR.
Magn Reson Med. 2000 Jul;44(1):162-7.

4. Lund TE, Madsen KH, Sidaros K, Luo WL, Nichols TE.
Non-white noise in fMRI: does modelling have an impact?
Neuroimage. 2006 Jan 1;29(1):54-66.

5. Wise RG, Ide K, Poulin MJ, Tracey I.
Resting fluctuations in arterial carbon dioxide induce significant low frequency variations in BOLD signal.
Neuroimage. 2004 Apr;21(4):1652-64.

7. Penny, W., Kiebel, S., & Friston, K. (2003). Variational Bayesian inference for fMRI time series. NeuroImage, 19(3), 727–741. doi:10.1016/S1053-8119(03)00071-5

New Meditation Study in Neuroimage: “Meditation training increases brain efficiency in an attention task”

Just a quick post to give my review of the latest addition to imaging and mindfulness research. A new article by Kozasa et al, slated to appear in NeuroImage, investigates the neural correlates of attention processing in a standard color-word Stroop task. A quick overview reveals that the design is all quite standard: two groups matched for age, gender, and years of education are administered a standard RT-based (i.e. speeded) fMRI paradigm. One group has an average of 9 years’ “meditation experience”, described as “a variety of OM (open monitoring) or FA (focused attention) practices such as ‘zazen’, mantra meditation, mindfulness of breathing, among others”. We’ll delve into why this description should give us pause in a moment; for now, let’s look at the results.

Amplitude of BOLD responses in the lentiform nucleus, medial frontal gyrus, middle temporal gyrus and precentral gyrus during the incongruent and congruent conditions in meditators and non-meditators.

Results from incon > con, non-meditators vs meditators

In a nutshell, the authors find that meditation practitioners show faster reaction times with reduced BOLD signal for the incongruent (compared to congruent and neutral) condition only. The regions found to be more active for non-meditators compared to meditators are the (right) “lentiform nucleus, medial frontal gyrus, and pre-central gyrus”. As this is not accompanied by any difference in accuracy, the authors interpret the finding as demonstrating that “meditators may have maintained the focus in naming the colour with less interference of reading the word and consequently have to exert less effort to monitor the conflict and less adjustment in the motor control of the impulses to choose the correct colour button.” In the conclusion, the authors review related findings and mention that differences in age could have contributed to the effect.

So, what are we to make of these findings? As is my usual style, I’ll give a bulleted review of the problems that immediately stand out, and then some explanation afterwards. I’ll preface my critique by thanking the authors for their hard work; my comments are intended only for the good of our research community.

The good:

  • Sensible findings: improvements in reaction time and decreases in BOLD are demonstrated in areas previously implicated in meditation research
  • Solid, easy to understand behavioral paradigm
  • Relatively strong main findings (p < .0001)
  • A simple replication. We like replications!
The bad:
  • Appears to report uncorrected p-values
  • Study claims to “match samples for age” yet no statistical test demonstrating the absence of a difference is shown. Qualitatively, the samples seem different enough to be cause for worry (e.g. 77.8% vs 65% college graduates); a one-line check is sketched after this list. Always be suspicious when a test is not given!
  • Extremely sparse description of style of practice, no estimate of daily practice hours given.
  • Reaction-time based task with no active control
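
On that point about missing tests: checking whether two groups really are “matched” on a demographic variable takes one line. The counts below are invented to roughly match the quoted graduate percentages with hypothetical group sizes; they are not taken from the paper.

```python
from scipy.stats import chi2_contingency

#                 graduates, non-graduates
counts = [[14, 4],   # "meditators"  (~77.8% graduates, hypothetical n = 18)
          [13, 7]]   # "controls"    (~65% graduates,  hypothetical n = 20)
chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
```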

I’ll preface my conclusion with something Sara Lazar, a meditation researcher and neuroimaging expert at Harvard/MGH, told me last summer: we need to stop going for the “low-hanging fruit of meditation research”. There are now over 20 published cross-sectional, reaction-time-based fMRI studies of “meditators” and “non-meditators”. Compare that to the incredibly sparse number of longitudinal, active-controlled studies, and it is clear that we need to stop replicating these findings and start determining what they actually tell us. Why do we need active controls in our meditation studies? For one thing, we know that reaction-time-based tests are heavily biased by the amount of effort one expends on the task. Effort is in turn influenced by task demands (e.g. how you treat your participants, the expectations surrounding the experiment). To give one in-press example, my colleague Christian Gaden Jensen at the Copenhagen Neurobiology Research Unit recently conducted a study demonstrating just how strong this confounding effect can be.

To briefly summarize, Christian recruited over 150 people for randomization to four experimental groups: mindfulness-based stress reduction (MBSR), non-mindfulness stress reduction (NMSR), wait-listed controls, and financially motivated wait-listed controls. This last group is the truly interesting one; they were told that if they turned in top performance on the experimental tasks (a battery of classical reaction-time-based and unspeeded perceptual-threshold tasks) they’d receive a reward of approximately $100. When Christian analyzed the data, he found that the financial incentive eliminated all reaction-time-based differences between the MBSR, NMSR, and financially motivated groups! It’s important to note that this study, fully randomized and longitudinal, showed something not reflected in the bulk of published studies: that meditation may actually train more basic perceptual sensitivities rather than top-down control. This is exactly why we need to stop pursuing the low-hanging fruit of uncontrolled experimental designs; it’s not telling us anything new. Meditation research is no longer exploratory.

In addition to these issues, there is another problem more specific to meditation research: the extremely sparse description of the practice itself, less than one sentence total, with no quantitative data! In this study we are not even told what the daily practice actually consists of, or its quality or length. These practitioners report an average of 8 years’ practice, yet that could be 1 hour per week of mantra meditation or 12 hours a week of non-dual zazen! These are not identical processes, and our lack of knowledge about this sample severely limits our ability to assess the meaning of these findings. For the past two years (and probably longer) at the Mind & Life Summer Research Institute, Richard Davidson and others have repeatedly stated that we must move beyond studying meditation as “a loose practice of FA and OM practices including x, y, z, and other things”. Willoughby Britton suggested at a panel discussion that all meditation papers need at least one contemplative scholar on them or risk rejection. It’s clear that this study was most likely not reviewed by anyone with a serious academic background in meditation research.

My supervisor Antoine Lutz and his colleague John Dunne, authors of the paper that launched the “FA/OM” distinction, have since stated emphatically that we must go beyond these general labels and start investigating effects of specific meditation practices. To quote John, we need to stop treating meditation like a “black box” if we ever want to understand the actual mechanisms behind it. While I thank the authors of this paper for their earnest contribution, we need to take this moment to be seriously skeptical. We can only start to understand processes like meditation from a scientific point of view if we are willing to hold them to the highest of scientific standards. It’s time for us to start opening the black box and looking inside.

Switching between executive and default mode networks in posttraumatic stress disorder [excerpts and notes]

From: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2895156/?tool=pubmed

Daniels et al, 2010

We decided to use global scaling because we were not analyzing anticorrelations in this paradigm and because data presented by Fox and colleagues [66] and Weissenbacher and coworkers [65] indicate that global scaling enhances the detection of system-specific correlations and doubles connection specificity. Weissenbacher and colleagues [65] compared different preprocessing approaches in human and simulated data sets and recommend applying global scaling to maximize the specificity of positive resting-state correlations. We used high-pass filtering with a cut-off at 128 seconds to minimize the impact of serial autocorrelations in the fMRI time series that can result from scanner drift.

Very useful methodological clipping!
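
As a side note, here is a minimal sketch of what a 128-second high-pass cut-off amounts to in practice, using a discrete cosine drift basis of the kind SPM constructs. The time series is synthetic and the details (normalization, exact number of basis functions) are simplified.

```python
import numpy as np

def dct_highpass(y, tr, cutoff=128.0):
    """Remove slow drifts (periods longer than `cutoff` seconds) with a DCT drift basis."""
    n = len(y)
    # number of cosine drift components with period longer than the cutoff
    k = int(np.floor(2 * n * tr / cutoff)) + 1
    t = np.arange(n)
    basis = np.column_stack(
        [np.cos(np.pi * (2 * t + 1) * j / (2 * n)) for j in range(1, k)]
    )
    beta, *_ = np.linalg.lstsq(basis, y, rcond=None)
    return y - basis @ beta  # residuals = high-pass filtered signal

# Synthetic voxel time series: 512 volumes at TR = 2 s with a slow (300 s period) drift
rng = np.random.default_rng(0)
tr = 2.0
t = np.arange(512) * tr
y = np.sin(2 * np.pi * t / 300) + 0.1 * rng.normal(size=512)
print(np.std(y), np.std(dct_highpass(y, tr)))  # drift variance largely removed
```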

The control condition was a simple fixation task, requiring attention either to the response instruction or to a line of 5 asterisks in the centre of the screen. We chose this control task to resemble the activation task as closely as possible; it therefore differed considerably from previous resting state analyses because it was relatively short in duration and thus necessitated fast switches between the control condition and the activation task. It also prompted the participants to keep their eyes open and fixated on the stimulus, which has been shown to result in stronger default mode network activations than the closed-eyes condition [60].

Good to remember: closed-eyes resting states result in weaker default mode activity.

To ensure frequent switching between an idling state and task-induced activation, we used a block design, presenting the activation task (8 volumes) twice interspersed with the fixation task (4 volumes) within each of 16 imaging runs. Each task was preceded by an instruction block (4 volumes duration), amounting to a total acquisition of 512 volumes per participant. The order of the working memory tasks was counterbalanced between runs and across participants. Full details of this working memory paradigm are provided in the study by Moores and colleagues [6]. There were 2 variations of this task in each run concerning the elicited button press response; however, because we were interested in the effects of cognitive effort on default network connectivity, rather than specific effects associated with a particular variation of the task, we combined the response variations to model a single “task” condition for this study. The control condition consisted of periods of viewing either 5 asterisks in the centre of the screen or a notice of which variation of the task would be performed next.

Psychophysiological interaction analyses are designed to measure context-sensitive changes in effective connectivity between one or more brain regions [67] by comparing connectivity in one context (in the current study, a working memory updating task) with connectivity during another context (in this case, a fixation condition). We used seed regions in the mPFC and PCC because both these nodes of the default mode network act independently across different cognitive tasks, might subserve different subsystems within the default mode network and have both been associated with alterations in PTSD [8].

This paradigm is very interesting. The authors have basically administered a battery of working memory tasks with interspersed rest periods, and carried out an ROI inter-correlation, or seed, analysis. Using this simple approach, a wide variety of experimenters could investigate task-rest interactions using their existing data sets.
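For anyone wanting to try something similar on their own data, here is a minimal sketch of the core of a PPI design matrix: a seed timecourse, a psychological context regressor, and their product. The block structure and arrays here are hypothetical, and a full SPM-style PPI would also deconvolve the seed signal to the neural level before forming the interaction, which this sketch skips.

```python
import numpy as np

def ppi_design(seed_ts, context, confounds=None):
    """Build a simple PPI design: seed timecourse, psychological context, interaction.

    seed_ts : timecourse extracted from a seed ROI (e.g. mPFC or PCC)
    context : +1 during the working-memory task, -1 during fixation
    """
    seed = seed_ts - seed_ts.mean()
    ppi = seed * context                  # the context-dependent (interaction) term
    cols = [ppi, seed, context]
    if confounds is not None:
        cols.append(confounds)
    return np.column_stack(cols)

# Hypothetical run: 512 volumes, alternating task (+1, 8 vols) and fixation (-1, 4 vols)
rng = np.random.default_rng(0)
context = np.tile(np.r_[np.ones(8), -np.ones(4)], 43)[:512]
seed_ts = rng.normal(size=512)
X = ppi_design(seed_ts, context)
print(X.shape)  # (512, 3): interaction, seed, and context regressors
```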

Limitations

The limitations of our results predominantly relate to the PTSD sample studied. To investigate the long-lasting symptoms that accompany a significant reduction of the general level of functioning, we studied alterations in severe, chronic PTSD, which did not allow us to exclude patients taking medications. In addition, the small sample size might have limited the power of our analyses. To avoid multiple testing in a small sample, we only used 2 seed regions for our analyses. Future studies should add a resting state scan without any visual input to allow for comparison of default mode network connectivity during the short control condition and a longer resting state.

The different patterns of connectivity imply significant group differences with task-induced switches (i.e., engaging and disengaging the default mode network and the central-executive network).