Correcting your naughty insula: modelling respiration, pulse, and motion artifacts in fMRI

Important update: Thanks to commenter “DS”, I discovered that my respiration-related data was strongly contaminated due to mechanical error. The belt we used is very susceptible to becoming uncalibrated if, for example, the subject moves or breathes very deeply. When looking at the raw timecourse of respiration I could see that many subjects, including the one displayed here, show a great deal of “clipping” in the timeseries. For the final analysis I will not use the respiration regressors, but rather just the pulse and motion. Thanks DS!
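If you want to run the same sanity check on your own belt data, here is a minimal sketch of what I mean by looking for clipping. This is just an illustration, not the code I actually used; the file name, threshold, and function name are all hypothetical.

```python
import numpy as np

def clipping_fraction(resp, tol=1e-6):
    """Fraction of samples pinned at the trace's minimum or maximum value,
    a rough indicator that the belt saturated ("clipped")."""
    resp = np.asarray(resp, dtype=float)
    lo, hi = resp.min(), resp.max()
    pinned = (np.abs(resp - lo) < tol) | (np.abs(resp - hi) < tol)
    return pinned.mean()

# resp = np.loadtxt("sub01_resp.txt")   # hypothetical belt recording
# if clipping_fraction(resp) > 0.05:    # arbitrary 5% cut-off
#     print("respiration trace looks clipped - plot it before trusting it")
```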

As I’m working my way through my latest fMRI analysis, I thought it might be fun to share a little bit of that here. Right now I’m coding up a batch pipeline for data from my Varela-award project, in which we compared “adept” meditation practitioners with motivation-, IQ-, age-, and gender-matched controls on a response-inhibition and error-monitoring task. One thing that came up in the project proposal meeting was a worry that, since meditation practitioners spend so much time working with the breath, they might breathe differently either at rest or during the task. As I’ve written about before, respiration and other related physiological variables, such as cardiac-pulsation-induced motion, can seriously impact your fMRI results (when your heart beats, the veins in your brain pulsate, creating slight but consistent and troublesome MR artifacts). As you might expect, these artifacts tend to be worse around the main draining veins of the brain, several of which cluster around the frontoinsular and medial-prefrontal/anterior cingulate cortices. As these regions are important for response inhibition and are frequently reported in the meditation literature (without physiological controls), we wanted to try to control for these variables in our study.

Disclaimer: I’m still learning about noise modelling, so apologies if I mess up the theory/explanation of the techniques used! I’ve left things a bit vague for that reason. See the bottom of the article for references and further reading. To encourage myself to post more of these “open lab notes”, I’ve kept the style here very informal, so apologies for typos or snafus. :D

To measure these signals, we used the respiration belt and pulse monitor that come standard with most modern MRI machines. The belt is just a little elastic hose that you strap around the chest wall of the subject, where it records expansions and contractions of the chest to give a time series corresponding to respiration; the pulse monitor is a standard finger clip. Although I am not an expert on physiological noise modelling, I will do my best to explain the basic effects you want to model out of your data. These “non-white” noise signals include pulsation- and respiration-induced motion (when you breathe, you tend to nod your head just slightly along the z-axis), typical motion artifacts, and variability in pulsation and respiration.

To do this I fed my physiological recordings into an in-house function written by Torben Lund, which incorporates a RETROICOR transformation of the pulsation and respiration timeseries [3]. We don’t just use the raw timeseries, due to signal aliasing: the physio data is sampled much faster than the scanner, so it has to be aligned so that each physiological measure corresponds to a TR. The function also calculates respiration volume per time (RVT), a measure developed by Rasmus Birn, to model the variability in physiological parameters [1]. Variability in respiration and pulse volume (if one group of subjects tends to inhale sharply for some conditions but not others, for example) is more likely to drive BOLD artifacts than absolute respiratory volume or frequency. Finally, as is standard, I included the realignment parameters to model subject motion-related artifacts. Below is a very stripped-down sketch of the RETROICOR/RVT idea, followed by a shot of my monster design matrix for one subject:
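I won’t reproduce Torben’s function here, but the core RETROICOR idea (Glover et al., 2000) is simple: assign each scan a phase within the cardiac and respiratory cycles, then expand those phases into sine/cosine regressors. Here is a rough sketch of the cardiac part, assuming you already have pulse-ox peak times and the acquisition time of each volume; the function names are mine and this is not Torben’s actual code.

```python
import numpy as np

def cardiac_phase(peak_times, scan_times):
    """Phase (0 to 2*pi) within the cardiac cycle at each scan time.
    Both inputs are 1-D arrays of times in seconds; assumes the detected
    pulse-ox peaks bracket every scan time (cf. Glover et al., 2000)."""
    peak_times = np.asarray(peak_times, dtype=float)
    phase = np.zeros(len(scan_times))
    for i, t in enumerate(scan_times):
        prev = peak_times[peak_times <= t].max()  # last peak before this scan
        nxt = peak_times[peak_times > t].min()    # first peak after this scan
        phase[i] = 2 * np.pi * (t - prev) / (nxt - prev)
    return phase

def retroicor_regressors(phase, order=2):
    """Fourier expansion of a phase time series: sin/cos pairs up to `order`
    harmonics, i.e. 2 * order nuisance columns for the design matrix."""
    return np.column_stack(
        [f(k * phase) for k in range(1, order + 1) for f in (np.sin, np.cos)]
    )

# Respiratory phase is built analogously from the belt trace (Glover et al. use
# a histogram-equalised amplitude), and RVT is roughly the breath-by-breath
# peak-to-trough belt amplitude divided by the breath period (Birn et al., 2006).
```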


You can see that the first 7 columns model my conditions (correct stops, unaware errors, aware errors, false alarms, and some self-report ratings), the next 20 model the RETROICOR transformed pulse and respiration timeseries, 41 columns for RVT, 6 for realignment pars, and finally my session offsets and constant. It’s a big DM, but since we have over 1000 degrees of freedom, I’m not too worried about all the extra regressors in terms of loss of power. What would be worrisome is if, for example, stop activity correlated strongly with any of the nuisance variables; we can see from the orthogonality plot that, in this subject at least, that is not the case. (The same check can be done by hand, as in the sketch below.)
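SPM’s design review gives you the orthogonality plot; the equivalent by-hand check is just the correlation between each task column and each nuisance column. A minimal sketch, assuming the design matrix has been loaded as a time-by-regressor numpy array; the column split is hypothetical and just follows my ordering above, it is not read out of SPM automatically.

```python
import numpy as np

def max_abs_correlation(X_task, X_nuis):
    """Largest absolute Pearson correlation between any task regressor and
    any nuisance regressor (columns are assumed to be non-constant)."""
    A = np.asarray(X_task, dtype=float)
    B = np.asarray(X_nuis, dtype=float)
    A = A - A.mean(axis=0)
    B = B - B.mean(axis=0)
    A = A / np.linalg.norm(A, axis=0)
    B = B / np.linalg.norm(B, axis=0)
    return np.abs(A.T @ B).max()

# X = design matrix (time x regressors); hypothetical split of my columns:
# task, nuisance = X[:, :7], X[:, 7:-3]
# print(max_abs_correlation(task, nuisance))
```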

Now let’s see if we actually have anything interesting left over after we remove all that noise:

[Figure: SPM for the Stop contrast]

We can see that the Stop-related activity seems pretty reasonable, clustering around the motor and premotor cortex, bilateral insula, and DLPFC, all canonical motor-inhibition regions (FWE cluster-corrected, p = 0.05). This is a good sign! Now what about all those physiological regressors? Are they doing anything of value, or just sucking up our power? Here is the F-contrast over the pulse regressors:


Here we can see that the peak signal is wrapped right around the pons/upper brainstem. This makes a lot of sense: the area is full of the primary vasculature that ferries blood into and out of the brain. Penny et al. find similar results to our cardiac F-test when comparing AR(1) with higher-order AR models [6]. We can also see quite a bit of variance explained around the bilateral insula and rostral anterior cingulate. Interestingly, our Stop-related activity still contained plenty of significant insula response, so we can feel better that some, but not all, of the signal from that region is actually functionally relevant. Since we’re really only interested in higher cortical areas, the pulse regressors should be sufficient here; but if I were particularly interested in getting signal from the brainstem in this project, I could add a respiration x pulse interaction regressor to better model it [2]. A rough sketch of how such interaction terms can be built is below.
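This isn’t something I’ve implemented for this dataset, and it is not necessarily how Brooks et al. or Torben’s function build the terms, but one common construction is simply to multiply the cardiac and respiratory phase expansions:

```python
import numpy as np

def interaction_regressors(card_phase, resp_phase, order=1):
    """Cardiac-by-respiratory interaction columns: products of the sin/cos
    expansions of the two phase time series (one possible construction)."""
    cols = []
    for k in range(1, order + 1):
        for f in (np.sin, np.cos):
            for g in (np.sin, np.cos):
                cols.append(f(k * card_phase) * g(k * resp_phase))
    return np.column_stack(cols)

# These columns would simply be appended to the design matrix alongside the
# RETROICOR, RVT, and realignment regressors if brainstem signal were the goal.
```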


What about respiration? Here we see a ton of variance explained around the occipital lobe. This makes good sense: we tend to nod our heads just slightly back and forth along the z-axis as we breathe. What we are seeing is the motion-induced artifact of that rotation, which is most severe along the back of the head and periphery of the brain. We see a similar result for the overall motion regressors, but flipped to the front:

Ignore the above: the respiration regressor is not viable due to the “clipping” issue, see the note at the top of the post. Glad I warned everyone that this post was “in progress” :) A proper respiration effect should look a bit more global, concentrated around the ventricles and blood vessels.


Wow, look at all the significant activity! Someone call up Nature and let them know, motion lights up the whole brain! As we would expect, the motion regressors explain a ton of uninteresting variance, particularly around the prefrontal cortex and periphery.

I still have a ways to go on this project: obviously this is just a single subject, and the results could vary wildly. But I do think even at this point we can start to see that it is quite easy, and desirable, to model these effects in your data (note: we had some technical failure due to the respiration belt being a POS…). I should note that in SPM, these sources of “non-white” noise are typically modelled using an autoregressive AR(1) model, which is enabled in the default settings (we’ve turned it off here). However, as there is evidence that this model performs poorly at faster TRs (which are the norm now), and that a noise-modelling approach can greatly improve SNR while removing artifacts, we are likely to get better performance out of a nuisance-regression technique like the one demonstrated here [4]. The next step will be to take these regressors to a second-level analysis, to examine whether the meditation group has significantly more BOLD variance explained by physiological noise than the controls do. Afterwards, I will re-run the analysis without any physio parameters, to compare the results of both.
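As a side note, one crude way to compare the with- and without-physio models (and to see how much temporal structure would otherwise be left for AR(1) to absorb) is to look at the lag-1 autocorrelation of the GLM residuals under each model. A rough sketch, where Y is a time-by-voxel data matrix and the design matrices are whatever you fit; none of this is SPM code, and the variable names are hypothetical.

```python
import numpy as np

def glm_residuals(Y, X):
    """Ordinary least-squares residuals of the GLM Y = X @ beta + error,
    with Y a time-by-voxel matrix and X a time-by-regressor design matrix."""
    beta = np.linalg.pinv(X) @ Y
    return Y - X @ beta

def mean_lag1_autocorr(resid):
    """Mean lag-1 autocorrelation of the residuals across voxels; values
    closer to zero suggest less unmodelled temporal structure is left over."""
    r = resid - resid.mean(axis=0)
    num = (r[1:] * r[:-1]).sum(axis=0)
    den = (r ** 2).sum(axis=0)
    return float(np.mean(num / den))

# print(mean_lag1_autocorr(glm_residuals(Y, X_task_only)))
# print(mean_lag1_autocorr(glm_residuals(Y, X_task_plus_physio)))
```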


1. Birn RM, Diamond JB, Smith MA, Bandettini PA.
Separating respiratory-variation-related fluctuations from neuronal-activity-related fluctuations in fMRI.
Neuroimage. 2006 Jul 15;31(4):1536-48. Epub 2006 Apr 24.

2. Brooks JCW, Beckmann CF, Miller KL, Wise RG, Porro CA, Tracey I, Jenkinson M.
Physiological noise modelling for spinal functional magnetic resonance imaging studies.
NeuroImage, in press. doi:10.1016/j.neuroimage.2007.09.018

3. Glover GH, Li TQ, Ress D.
Image-based method for retrospective correction of physiological motion effects in fMRI: RETROICOR.
Magn Reson Med. 2000 Jul;44(1):162-7.

4. Lund TE, Madsen KH, Sidaros K, Luo WL, Nichols TE.
Non-white noise in fMRI: does modelling have an impact?
Neuroimage. 2006 Jan 1;29(1):54-66.

5. Wise RG, Ide K, Poulin MJ, Tracey I.
Resting fluctuations in arterial carbon dioxide induce significant low frequency variations in BOLD signal.
Neuroimage. 2004 Apr;21(4):1652-64.

6. Penny W, Kiebel S, Friston K.
Variational Bayesian inference for fMRI time series.
NeuroImage. 2003;19(3):727-41. doi:10.1016/S1053-8119(03)00071-5

41 thoughts on “Correcting your naughty insula: modelling respiration, pulse, and motion artifacts in fMRI”

  1. Nice post Micah. Just to play devil’s advocate (not really): might it be worthwhile comparing a higher-order AR noise process, or the sum of higher-order AR processes, to the modelling approach? I know Torben has already done something very similar in the 2006 paper with AR(1) whitening, but 1. it would be nice to see the difference, 2. if the difference was interesting enough there could be a paper in it (unless this has been done before), and 3. my feeling is that the sum of higher-order AR noises should be quite a bit better at modelling these effects, especially if you can include a decent range of coefficient values.

    • Yes this could be interesting. I’m not familiar with the AR methods beyond a topical understanding, but I understand that the AR(1) model is extremely old and that higher order correlations can now be modeled.

  2. Micah

    What sort of “preprocessing” did you do: motion correction? Temporal interpolation of all the slice data to a single time point in each TR? Etc.

    • Motion correction (re-alignment). I did not perform slice-time correction, as it’s a bit controversial whether it works properly. But RETROICOR calculates a respiratory phase per volume, so I don’t think slice-time correction is needed here with such a short TR (2 s).

      • Was the figure appearing just above the sentence “Wow, look at all the significant activity!” derived from data which underwent realignment?

        • Yes, all the figures are from the same preprocessing and analysis pipeline. That figure in particular is the F-contrast over the 6 re-alignment parameters.

    • DS,

      I want to say thanks for your insight. I talked about it with Torben and he agreed that the respiration signal did not really look correct: it should be more global, focused around blood vessels and the ventricles. I went through and plotted the respiration time course for each subject, and wouldn’t you know it, for this particular subject the timecourse showed clear “clipping”. Sadly many of my subjects show this problem, so we will not be able to use the respiration regressors. The pulse should be fine, so at least there is that. These belts seem to be very finicky, and if the subject takes a deep breath, it can throw off the calibration. In the future we will use a more advanced belt. You were right on the money, thanks for your helpful tip!


  3. Hi all

    If respiration is causing its significant effects through motion then why do a separate motion (due to all sources) regression?

    Do you believe that the time series of motion parameters that you get from the realignment routine are sufficiently accurate to warrant their use as a regressor?

    • Micah did the separate analysis looking only at the motion parameters for demonstration purposes. It can be a useful diagnostic just to see what kind of artefacts you have.

      The 6 motion parameters (and often their derivatives) that the realignment procedure in SPM produces are sufficient for most purposes. In other words, they are accurate enough for most cases, but there are more accurate methods. Most of those methods involve collecting positional data separately by some other independent means. The benefit of the derived motion parameters is that you do not need to do anything more than just collect your EPI images.

      • Why do you think they are “accurate enough”? All the literature that I have read gives me no reason to think that the image-based realignment routines have accuracy any better than the physical resolution of the image. I don’t see how it could be otherwise.

  4. Also Micah stated the following in this blog post:

    “You can see that the first 7 columns model my conditions (correct stops, unaware errors, aware errors, false alarms, and some self-report ratings), the next 20 model the RETROICOR transformed pulse and respiration timeseries, 41 columns for RVT, 6 for realignment pars,…”

    So it looks to me like he is not looking at the motion regression due to respiration alone, and then that due to all sources of motion, for “demonstration purposes” only. It looks like both are incorporated in his actual analysis. Am I wrong, Micah?

    • Hi,

      If I understand you: here I am modelling the six motion parameters, as well as separate respiration and pulse phase regressors and RVT. It is true that the respiration and pulse regressors will explain some motion (that’s what they are there for), but they should pick up slightly different variance than just using the realignment pars.

      Hope that clarifies.

      • I would expect motion effects that are correlated with respiration, but not due to coupled rigid body motion of the head, to be less spatially localized than that seen in your figure that appears above the sentence “Here we see a ton of variance explained around the occipital lobe.”

        Respiration can potentially modulate the main B field and lead to what I would expect to be more spatially global effects than those seen in the above mentioned figure.

        • Ah, OK, now this is clearer to me. Couldn’t this be because we include respiration phase, RVT, and motion regressors, so the more global effects are being caught by the RVT and motion regressors? I guess motion is last, so respiration should carry the principal component for both and motion should carry less. Does it matter that the resp regressor is phase?

      • Micah

        But it does not look like the RVT regressor is drawing out global signal changes. They look localized compared to what I would expect.

        • Hey DS,

          First, thanks for all your comments, I really appreciate the interest in my work. Please excuse me if I can’t answer all your questions adequately; I primarily work with cognitive control and training, and this noise regression stuff is just something I am learning from Torben.

          That being said, it is worth noting that these are FWE-corrected subject-level results, so all of the maps look a good bit more global at an uncorrected p-value. I sent this post to Torben and he said it looked pretty good. I’m not saying you are wrong; it certainly merits further looking into.

        • Also, I should add that I’m not really sure if it is a good idea to include resp, pulse, RVT, and motion all in one model. Torben’s original suggestion was just resp, pulse, and motion. I am pretty sure that the rigid-body motion and resp regressors model similar but not entirely collinear variance. We added the RVT regressor as I was worried about one group being more variable in their inhalation than the other. I am actually re-running everything today and would be open to your suggestions for what to include or not. Finally, I elected to model only the 6 re-alignment parameters and not the motion history effects as well, which I assume are very collinear with respiration.


    • About it being accurate enough, there are three things: 1. the typical extent of movement in the scanner, 2. the voxel sizes used, and 3. the relative contribution of movement to spurious activation.

      In regards to point 1, there are quite a few papers now that use non-image-based correction techniques, e.g. EEG, eye trackers, and infrared-based head tracking.

      In regards to point 2, for an average study in a 3T scanner the voxel sizes would usually be around 3 mm isotropic, perhaps smaller, perhaps bigger depending on your desired coverage and TR. At first glance 3 mm might seem to be the maximum resolution you can achieve, but there are ways around this. By exploiting multiple images you can obtain resolution greater than your voxel size. Freesurfer is able to exploit this when it does full-brain image segmentation: the more T1 scans you can feed it the better (down to some lower limit).

      In regards to point 3, exactly how much movement contributes to spurious activation is going to depend quite heavily on your paradigm, and to a lesser extent on the individual brain and how it fits in the head coil.

      All three sets of regressors, what used to be called “nuisance variables”, are included: the RETROICOR ones, the respiration ones, and the 3 translation and 3 rotation motion parameters. They are all included to help soak up variance that would otherwise go unexplained. The SPM he created, i.e. a contrast that only included those regressors in his design matrix, was to show what the motion looks like in terms of BOLD-like activity. As Micah says, if in doubt, model it.
