Neuroconscience

The latest thoughts, musings, and data in cognitive science and neuroscience.

Top 200 terms in cognitive neuroscience according to neurosynth

Tonight I was playing around with some of the top features in neurosynth (the searchable terms with the highest number of studies containing that term). You can find the list here; just sort by the number of studies. I excluded the top 3 terms (“image”, “response”, and “time”), which are boring and whose extremely high weights would mess up the wordle. I then created a word cloud weighted so that the size of each term reflects its number of studies.

Here are the top 200 terms, sized according to the number of times each is reported in neurosynth’s 5809 indexed fMRI studies:

wordle

Pretty neat! These are the 200 terms the neurosynth database has the most information on, and together they make a pretty good overview of key concepts and topics in our field! I am sure there is something useful for everyone in there :D
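For anyone who wants to reproduce the weighting step, here is a minimal Python sketch. It assumes the neurosynth feature list has been exported to a CSV with `term` and `num_studies` columns (file and column names are my own, not neurosynth’s):

```python
from operator import itemgetter

def top_terms(rows, n=200, skip=3):
    """Rank (term, num_studies) pairs by study count, drop the top
    `skip` uninformative terms, and return the next `n` as a dict."""
    ranked = sorted(rows, key=itemgetter(1), reverse=True)
    return dict(ranked[skip:skip + n])

# Hypothetical usage, given a features.csv exported from neurosynth:
#   import csv
#   with open("features.csv") as f:
#       rows = [(r["term"], int(r["num_studies"])) for r in csv.DictReader(f)]
#   weights = top_terms(rows)
# A word-cloud generator then sizes each term by weights[term].
```

The returned dictionary plugs straight into any frequency-weighted word-cloud tool.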

Direct link to the wordle:

Wordle: neurosynth

Neurovault: a must-use tool for every neuroimaging paper!

Something that has long irked me about cognitive neuroscience is the way we share our data. I still remember the very first time I opened a brain imaging paper and was struck dumbfounded by the practice of listing activation results in endless p-value tables and selective 2D snapshots. How could anyone make sense of data this way? Now, having several years’ experience creating such papers, I am only more dumbfounded that we continue to present our data in this way. What purpose can be served by taking a beautiful 3-dimensional result and filtering it through an awkward foci ‘photoshoot’? While there are some standards you can use to improve the 2D presentation of 3D brain maps, for example showing only peak activations and including glass-brains, this is an imperfect solution – ultimately the best way to assess the topology of a result is by directly examining the full 3D map.

Just imagine how improved every fMRI paper would be if, instead of a 20+ row table and selective snapshots, results were displayed in a simple 3D viewing widget right in the paper. Readers could assess the underlying effects at whatever statistical threshold they feel is most appropriate, and PDF versions could be printed at a particular coordinate and threshold specified by the author. Reviewers and readers alike could get a much fuller idea of the data, and meta-analysis would be vastly improved by the extensive uploading of well-categorized contrast images. Moreover, all this can easily be achieved without worries about privacy or intellectual property, using only group-level contrast images, which are inherently without identifying features and contain only those effects included in the published manuscript!

Now imagine my surprise when I learned that, thanks to Chris Gorgolewski and colleagues, all of this is already possible! Chris pioneered the development of neurovault.org, an extremely easy-to-use data-sharing site backed by the International Neuroinformatics Coordinating Facility. To use it, researchers simply need to create a new ‘collection’ for their study and then upload whatever images they like. Within about 15 minutes I was able to upload both the T- and contrast-images from my group-level analysis, complete with as little or as much meta-data as I felt like including. Collections can be easily linked to paper DOIs and marked as in-review, published, etc. Collections and entries can be edited or added to at any time, and the facilities allow quick documentation of imaging data at any desired level, from entire raw imaging datasets to condition-specific group contrast images. Better still, neurovault seamlessly displays these images on a 3D MNI standard brain with flexible options for thresholding, and through a hookup to neurosynth.org can even seamlessly find meta-analytic feature loadings for your images! Check out the t-map display and feature loadings for the stimulus-intensity contrast from my upcoming somatosensory oddball paper, which correctly identified the modality of stimulation!

T-map in the neurovault viewer.


Decoded features for my contrast image, with accurate detection of stimulation modality!

Neurovault.org doesn’t yet support embedding the viewer, but it is easy to imagine that with collaboration from publishers, future versions could be embedded directly within HTML full-text for imaging papers. For now, the site provides the perfect solution for researchers looking to make their data available to others and to more fully present their results, simply by providing supplementary links either to the neurovault collection or directly to individual viewer results. This is a tool that everyone in cognitive neuroscience should be using – I fully intend to do so in all future papers!

Is there a ‘basement’ for quirky psychological research?

Beware the Basement!

Beware the Basement!

One thing I will never forget from my undergraduate training in psychology was the first lecture of my personality theory class. The professor started the lecture by informing us that he was quite sure that, of the 200+ students in the lecture hall, the majority of us were probably majoring in psychology because we thought it would be neat to study sex, consciousness, psychedelics, paranormal experience, meditation, or the like. He then informed us that this was a trap that befell almost all new psychology students, as we were all drawn to the study of the mind by the same siren call of the weird and wonderful human psyche. However, he warned, we should be very, very careful not to reveal these suppressed interests until we were well-established (I’m assuming he meant tenured) researchers – otherwise we’d risk being thrown into the infamous ‘basement of psychology’, never to be heard from again.

This colorful lecture really stuck with me through the years; I still jokingly refer to the basement whenever a more quirky research topic comes up. Of course I did a pretty poor job of following this advice, seeing as my first project as a PhD student involved meditation, but I have nonetheless repressed an academic interest in more risqué topics throughout my career. And I’m not really actively avoiding them for fear of being placed in the basement – I’m more just following my own pragmatic research interests, and waiting for the day when I have more time and freedom to follow ideas that don’t directly tie into the core research line I’m developing.

But still. That basement. Does it really exist? In a world where papers claiming that a full bladder renders us more politically conservative can make it into prestigious journals, where scientists scan people having sex inside a scanner just to see what happens, and where psychologists seriously debate the possibility of precognition – can anything really be taboo? Or can we still distinguish from these flightier topics a more serious avenue of research? And what should be said about those who choose such topics?

Personally I think the idea of a ‘basement’ is largely a hold-over from the heyday of behaviorism, when psychologists were seriously concerned with positioning psychology as a hard science. Cognitivism has given rise to an endless bevy of serious topics that would once have been taboo: consciousness, embodiment, and emotion, to name a few. Still, in the always-snarky twittersphere, one can’t help but feel that there is still a certain amount of nose-thumbing at certain topics.

I think really, in the end, it’s not the topic so much as the method. Chris Frith once told me something to the tune of ‘in [cognitive neuroscience] all the truly interesting phenomena are beyond proper study’. We know the limitations of brain scans and reaction times, and so tend to cringe a bit when someone trots out the latest silly super-human special-interest infotainment paper.

What do you think? Is there a ‘basement’ for silly research? And if so, what defines what sorts of topics should inhabit its dank confines?

We the Kardashians are Democratizing Science

I had a good laugh this weekend at a paper published in Genome Biology. Neil Hall, the author of the paper and a well-established Liverpool biologist, writes that in the brave new era of social media, there “is a danger that this form of communication is gaining too high a value and that we are losing sight of key metrics of scientific value, such as citation indices.” Wow, what a punchline! According to Neil, we’re in danger of forgetting that tweets and blogposts are the worthless gossip of academia. After all, who reads Nature and Science these days?? I know so many colleagues getting big grants and tenure-track jobs just over their tweets! Never mind that Neil himself has about 11 papers published in Nature journals – or perhaps we are left to sympathize with the poor, untweeted author? Bitter sarcasm aside, the article is a fun bit of satire, and I’d like to think charitably that it was aimed not only at ‘altmetrics’ but at the metric enterprise in general. Still, I agree totally with Kathryn Clancy that the joke fails insofar as it seems to be ‘punching down’ at those of us with less established CVs than Neil, who take to social media in order to network and advance our own fledgling research profiles. I think it also betrays a critical misapprehension, common among established scholars, of how social media fits into the research ecosystem. This sentiment is expressed rather precisely by Neil when discussing his Kardashian Index:

The Kardashian Index


“In an age dominated by the cult of celebrity we, as scientists, need to protect ourselves from mindlessly lauding shallow popularity and take an informed and critical view of the value we place on the opinion of our peers. Social media makes it very easy for people to build a seemingly impressive persona by essentially ‘shouting louder’ than others. Having an opinion on something does not make one an expert.”

So there you have it. Twitter equals shallow popularity. Never mind the endless possibilities of seamless networked interactions with peers from around the world. Never mind sharing the latest results, discussing them, and branching these interactions into blog posts that themselves evolve into papers. Forget entirely that without this infosphere of interaction, we’d be left totally at the whims of Impact Factor to find interesting papers among the thousands published daily. What it’s really all about is building a “seemingly impressive persona” by “shouting louder” than others. What then does constitute effective scientific output, Neil? The answer, it seems, is more high-impact papers:

“I propose that all scientists calculate their own K-index on an annual basis and include it in their Twitter profile. Not only does this help others decide how much weight they should give to someone’s 140 character wisdom, it can also be an incentive – if your K-index gets above 5, then it’s time to get off Twitter and write those papers.”

Well then, I’m glad we covered that. I’m sure there were many scientists and scholars out there who, amid the endless cycle of insane job pressure, publish-or-perish horse-racing, and blood-feuding for grants, thought, ‘gee, I’d better just stop this publishing thing entirely and tweet instead’. And likewise, I’m sure every young scientist looks at ‘Kardashians’ and thinks, ‘hey, I’d better suspend all critical thinking, forget all my training, and believe everything this person says’. I hope you can feel me rolling my eyes. Seriously though – this represents a fundamental and common misunderstanding of the point of all this faffing about on the internet. Followers, impact, and notoriety are all poorly understood side-effects of this process; they are neither the means nor the goal. And never mind those less concrete (and misleading) contributions like freely shared code, data, or thoughts – the point here is to blather and gossip!

While a (sorta) funny joke, it is this point that is done the most disservice by Neil’s article. We (the Kardashians) are democratizing science. We are filtering the literally unending deluge of papers to try and find the most outrageous, the most interesting, and the most forgotten, so that they can see the light of day beyond wherever they were published and forgotten. We seek these papers to generate discussion and to garner attention where it is needed most. We are the academy’s newest, first line of defense, contextualizing results when the media runs wild with them. We tweet often because there is a lot to tweet, and we gain followers because the things we tweet are interesting. And we do all of this without the comfort of a lofty CV or high-impact track record, with little concrete assurance that it will even benefit us, all while still trying to produce the standard signs of success. And it may not seem like it now – but in time it will be clear that what we do is just as much a part of the scientific process as those lofty Nature papers. Are we perfect? No. Do we sometimes fall victim to sensationalism or crowd mentality? Of course – we are only fallible human beings, trying to find and create utility within a new frontier. We may not be the filter science deserves – but we are the one it needs. Wear your Kardashian index with pride.
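And for the record, the index itself is trivial to compute. As I read Neil’s paper, it is simply your actual Twitter followers divided by the follower count ‘expected’ from your citation record, F_c = 43.3 × C^0.32 – treat the constants here as my reading of the paper rather than gospel:

```python
def kardashian_index(followers: float, citations: float) -> float:
    """Hall's K-index: actual Twitter followers divided by the follower
    count 'expected' from citations, taken as F_c = 43.3 * C**0.32."""
    expected = 43.3 * citations ** 0.32
    return followers / expected

# e.g. 5000 followers on 100 citations lands well above Neil's K > 5
# cutoff - time to get off Twitter and write those papers, apparently.
```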

oh BOLD where art thou? Evidence for a “mm-scale” match between intracortical and fMRI measures.

A frequently discussed problem with functional magnetic resonance imaging is that we don’t really understand how the hemodynamic ‘activations’ measured by the technique relate to actual neuronal phenomena. This is because fMRI measures the Blood-Oxygenation-Level-Dependent (BOLD) signal, a complex vascular response to neuronal activity. As such, neuroscientists can easily get worried about all sorts of non-neural contributions to the BOLD signal, such as subjects gasping for air, pulse-related motion artefacts, and other generally uninteresting effects. We can even start to worry that the BOLD signal may not actually measure any particular aspect of neuronal activity, but rather some overly diluted, spatially unconstrained filter of it that simply lacks the key information for understanding brain processes.

Given that we generally use fMRI over neurophysiological methods (e.g. M/EEG) when we want to say something about the precise spatial generators of a cognitive process, addressing these ambiguities is of utmost importance. Accordingly, a variety of recent papers have used multi-modal techniques, for example combining optogenetics, direct recordings, and fMRI, to assess which kinds of neural events contribute to alterations in the BOLD signal and its spatial (mis)localization. Now a paper published today in NeuroImage addresses this question by combining high-resolution 7-tesla fMRI with electrocorticography (ECoG) to determine the spatial overlap of finger-specific somatomotor representations captured by the two measures. Starting from the title’s claim that BOLD “matches neuronal activity at the mm-scale”, we can already be sure this paper will generate a great deal of interest.

From Siero et al (In Press)

As shown above, the authors managed to record high-resolution (1.5mm) fMRI in 2 subjects implanted with 23 x 11mm intracranial electrode arrays during a simple finger-tapping task. Motor responses from each finger were recorded and used to generate somatotopic maps of the brain responses specific to each finger. This analysis was repeated in both ECoG and fMRI, which were then spatially co-registered to one another so the authors could directly compare the spatial overlap between the two methods. What they found appears, at first glance, to be quite impressive:
From Siero et al (In Press)

Here you can see the color-coded t-maps for the BOLD activations to each finger (top panel, A), the differential contrast contour maps for the ECoG (middle panel, B), and the maximum activation foci for both measures with respect to the electrode grid (bottom panel, C), in two individual subjects. Comparing the spatial maps for both the index finger and thumb suggests a rather strong consistency, both in the topology of each effect and in the location of their foci. Interestingly, the little-finger measurements seem somewhat more displaced, although similar topographic features can be seen in both. Siero and colleagues further compute the spatial correlation (Spearman’s R) across measures for each individual finger, finding an average correlation of .54 with a range of .31–.81, a moderately high degree of overlap between the measures. Finally, the optimal shift needed to minimize the spatial difference between the measures was computed and found to be between 1 and 3.1 millimetres, suggesting a slight systematic bias between ECoG and fMRI foci.
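Once the two modalities are co-registered onto a common grid, this kind of spatial correlation is easy to compute yourself. A minimal sketch (synthetic data only, nothing from the actual paper):

```python
import numpy as np
from scipy.stats import spearmanr

def spatial_correlation(map_a, map_b):
    """Spearman rank correlation between two co-registered spatial maps,
    computed over locations where both measures are defined."""
    a, b = np.ravel(map_a), np.ravel(map_b)
    valid = ~(np.isnan(a) | np.isnan(b))     # ignore voxels missing in either map
    rho, _ = spearmanr(a[valid], b[valid])
    return rho

# A map compared against any monotone transform of itself gives rho = 1,
# which is exactly why rank correlation suits comparing two measures on
# very different scales (fMRI t-values vs. ECoG contrast power).
```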

Are ‘We the BOLD’ ready to break out the champagne and get back to scanning in comfort, spatial anxieties at ease? While this is certainly a promising result, suggesting that the BOLD signal indeed captures functionally relevant neuronal parameters with reasonable spatial accuracy, it should be noted that it is based on a very-best-case scenario, and that a considerable degree of unique spatial variance remains between the two methods. The data presented by Siero and colleagues underwent a number of crucial pre-processing steps that are likely to influence the results: the high spatial resolution, the manual removal of draining veins, the restriction of the analysis to grey-matter voxels only, and the lack of spatial smoothing all make generalizing from these results to the standard 3-tesla whole-brain pipeline difficult. Indeed, even under these best-case criteria, the results still indicate up to 3mm of systematic bias in the fMRI maps. Though we can be glad the bias was systematic and not random – 3mm is still quite a lot in the brain. On this point, the authors note that the stability of the bias may point towards a systematic mis-registration of the ECoG and fMRI data and/or possible rigid-body deformations introduced by the implantation of the electrodes, issues that could be addressed in future studies. Ultimately it remains to be seen whether similar reliability can be obtained for less robust paradigms than finger wagging, in standard, sub-optimal imaging scenarios. But for now I’m happy to let fMRI have its day in the sun, give or take a few millimeters.

Siero, J. C. W., Hermes, D., Hoogduin, H., Luijten, P. R., Ramsey, N. F., & Petridou, N. (2014). BOLD matches neuronal activity at the mm scale: A combined 7T fMRI and ECoG study in human sensorimotor cortex. NeuroImage. doi:10.1016/j.neuroimage.2014.07.002

 

#MethodsWeDontReport – brief thought on Jason Mitchell versus the replicators

This morning Jason Mitchell self-published an interesting essay espousing his views on why replication attempts are essentially worthless. At first I was merely interested by the fact that what would obviously become a topic of heated debate was self-published, rather than going through the long slog of a traditional academic medium. Score one for self-publication, I suppose. Jason’s argument is essentially that null results don’t yield anything of value and that we should be improving the way science is conducted and reported rather than publicising our nulls. I found particularly interesting his short list of things that he sees as critical to experimental results but which nevertheless go unreported:

These experimental events, and countless more like them, go unreported in our method section for the simple fact that they are part of the shared, tacit know-how of competent researchers in my field; we also fail to report that the experimenters wore clothes and refrained from smoking throughout the session.  Someone without full possession of such know-how—perhaps because he is globally incompetent, or new to science, or even just new to neuroimaging specifically—could well be expected to bungle one or more of these important, yet unstated, experimental details.

While I don’t agree with the overall logic or conclusion of Jason’s argument (I particularly like Chris Said’s Bayesian response), I do think it raises some important, or at least interesting, points for discussion. For example, I agree that there is loads of potentially important stuff that goes on in the lab, particularly with human subjects and large scanners, that isn’t reported. I’m not sure to what extent that stuff can or should be reported, and I think that’s one of the interesting and under-examined topics in the larger debate. I tend to lean towards the stance that we should report just about everything we can – but of course publication pressures and tacit norms mean most of it won’t be published. And probably at least some of it doesn’t need to be? But which things exactly? And how do we go about reporting stuff like how we respond to random participant questions regarding our hypothesis?

To find out, I’d love to see a list of things you can’t or don’t regularly report, using the #methodswedontreport hashtag. Quite a few are starting to show up – most are funny or outright snarky (as seems to be the general mood of the response to Jason’s post), but I think a few are pretty common lab occurrences and are even thought-provoking in terms of their potentially serious experimental side-effects. Surely we don’t want to report all of these ‘tacit’ skills in our burgeoning method sections; the question is which ones need to be reported, and why are they important in the first place?

The return of neuroconscience

Hello everyone! After an amazing visit back home to Tampa, Florida for VSS and a little R&R in Denmark, I’m back and feeling better than ever. Some of you may have noticed that I’ve been on an almost 6-month blogging hiatus. I’m just going to come right out and admit that after moving from Denmark to London, I really wasn’t sure what direction I wanted to take my blog. Changing institutions is always a bit of a bewildering experience, and a wise friend once advised me that it’s sometimes best to quietly observe new surroundings before diving right in. I think I needed some time to get used to being a part of the awesomeness that is the Queen Square neuroimaging hub. I also needed some time to reflect on the big picture of my research, this blog, and my overall social media presence.

But fear not! After the horrors of settling into London, I’m finally comfortable in my skin again, with a new flat, a home office almost ready, and lots and lots of new ideas to share with you. I think part of my overall hesitancy came from pondering just what I should be sharing. But I didn’t get this far by bottling up my research, so there isn’t much point in shuttering myself in now! I expect to be back to blogging in full form in the next week, as new projects here get underway. But where is my research going?

The big picture will largely remain the same. I am interested as always in human consciousness, thought, self-awareness, and our capacity for growth along these dimensions. One thing I really love about my post-doc is that I’ve finally found a thread weaving through my research all the way back to the days when I collected funny self-narratives in a broom closet at UCF. I think you could say I’m trying to connect the dots between how dynamic bodies shape and interact with our reflective minds, using the tools of perceptual decision making, predictive coding, and neuroimaging. Currently I’m developing a variety of novel experimental paradigms examining embodied self-awareness (i.e. our somatosensory, interoceptive, and affective sense of self), perceptual decision making and metacognition, and the interrelations between these. You can expect to hear more about these topics soon.

Indeed, a principal reason I chose to join the FIL/ICN team was the unique emphasis and expertise here on predictive coding. My research has always been united by an interest in growth, plasticity, and change. During my PhD I came to see predictive coding/free energy schemes as a unifying framework under which to understand embodied and neural computation in terms of our ability to learn from new experiences. As such, I’m very happy to be in a place where I can not only be on the cutting edge of theoretical development, but also receive first-hand training in applying the latest computational modelling, connectivity, and multi-modal imaging techniques to my research questions. As always, given my obvious level of topical ADHD, you can be sure to expect coverage of a wide range of cogneuro and cogsci topics.

So in general, you can expect posts covering these topics, my upcoming results, and general musings along these lines. As always, I’m sure there will be plenty of methodsy nitpicking and philosophical navel-gazing. In particular, my recent experience with a reviewer insisting that ‘embodiment’ = interoception has me itching to fire off a theoretical barrage – but I guess I should wait to publish that paper before taking to the streets. In the near future I have planned a series of short posts covering some of the cool posters and general themes I observed at the Vision Sciences Society conference this fall.

Finally, for my colleagues working on mindfulness and meditation research, a brief note. As you can probably gather, I don’t intend to return to that domain of study in the near future. My personal opinion is that the topic has become incredibly overhyped and incestuous – the best research simply isn’t rising to the top. I know that many of the leaders in that community are well aware of the problem and are working to correct it, but for me it was time to part ways and return to more general research. I do believe that mindfulness has an important role to play in both self-awareness and well-being, and hope that the models I am currently developing might one day further refine our understanding of these practices. However, it’s worth noting that for me, meditation was always more a kind of Varellian way to manipulate plasticity and consciousness than an end in itself; as I no longer buy into the enactive/neurophenomenological paradigm, I guess it’s self-explanatory that I would move on to other things (like actual consciousness studies! :P). I do hope to see that field continue to grow and mature, and look forward to fruitful collaborations along those lines.

 

That’s it, folks! Prepare yourself for a new era of neuroconscience :) Cheers to an all-new year, all-new research, and new directions! Viva la awareness!


Effective connectivity or just plumbing? Granger Causality estimates highly reliable maps of venous drainage.

update: for an excellent response to this post, see the comment by Anil Seth at the bottom of this article. Also don’t miss the extended debate regarding the general validity of causal methods for fMRI at Russ Poldrack’s blog that followed this post. 

While the BOLD signal can be a useful measurement of brain function when used properly, the fact that it indexes blood flow rather than neural activity raises more than a few significant concerns. That is to say, when we make inferences on BOLD, we want to be sure the observed effects are causally downstream of actual neural activity, rather than the product of physiological noise such as fluctuations in breathing or heart rate. This is a problem for all fMRI analyses, but it is particularly tricky for resting-state fMRI, where we are interested in signal fluctuations that fall in the same frequency range as respiration and pulse. Now a new study has extended these troubles to Granger causality modelling (GCM), a lag-based method for estimating causal interactions between time series that is popular in the resting-state literature. Just how bad is the damage?

In an article published this week in PLOS ONE, Webb and colleagues analysed over a thousand scans from the Human Connectome database, examining the reliability of GCM estimates and the proximity of the major ‘hubs’ identified by GCM to known major arteries and veins. The authors first found that GCM estimates were highly robust across participants:

Plot showing robustness of GCM estimates across 620 participants. The majority of estimated causes did not show significant differences within or between participants (black datapoints).


They further report that “the largest [most robust] lags are for BOLD Granger causality differences for regions close to large veins and dural venous sinuses”. In other words, although the major ‘upstream’ and ‘downstream’ nodes estimated by GCM are highly robust across participants, regions primarily influencing other regions (i.e. causal outflow) map onto major arteries, whereas regions primarily receiving ‘inputs’ (i.e. causal inflow) map onto veins. This pattern of ‘causation’ is very difficult to explain as anything other than a non-neural artifact: the regions mostly ‘causing’ activity in others are exactly where fresh blood enters the brain, and the regions primarily being influenced by others are areas of major blood drainage. Check out the arteriogram and venogram provided by the authors:

Depiction of major arteries (top image) and veins (bottom). Note overlap with areas of greatest G-cause (below).

Compare the above to their thresholded z-statistic map for significant Granger causality; white areas are significant G-causation overlapping with an arteriogram mask, and green areas are significant G-causation overlapping with a venogram mask:


From paper:
“Figure 5. Mean Z-statistic for significant Granger causality differences to seed ROIs. Z-statistics were averaged for a given target ROI with the 264 seed ROIs to which it exhibited significantly asymmetric Granger causality relationship. Masks are overlaid for MRI arteriograms (white) and MRI venograms (green) for voxels with greater than 2 standard deviations signal intensity of in-brain voxels in averaged images from 33 (arteriogram) and 34 (venogram) subjects. Major arterial inflow and venous outflow distributions are labeled.”

It’s fairly obvious from the above that a significant proportion of the areas typically G-causing other areas overlap with arteries, whereas areas typically being G-caused by others overlap with veins. This is a serious problem for GCM of resting-state fMRI; worse, these effects were also observed for a comprehensive range of task-based fMRI data. The authors come to the grim conclusion that “Such arterial inflow and venous drainage has a highly reproducible pattern across individuals where major arterial and venous distributions are largely invariant across subjects, giving the illusion of reliable timing differences between brain regions that may be completely unrelated to actual differences in effective connectivity”. Importantly, this isn’t the first time GCM has been called into question. A related concern is the impact of spatial variation in the lag between neural activation and the BOLD response (the ‘hemodynamic response function’, HRF) across the brain. Previous work using simultaneous intracranial and BOLD recordings has shown that, due to these lags, GCM can estimate a causal pattern of A then B where the actual neural activity was B then A.

This is because GCM acts in a relatively simple way: given two time series (A & B), if the future state of B is better predicted by the past fluctuations of both A and B than by B’s past alone, then A is said to G-cause B. However, as we’ve already established, BOLD is a messy and complex signal in which neural activity is filtered through slow blood fluctuations that must be carefully mapped back onto neural activity using deconvolution methods. Thus, what looks like A then B in BOLD can actually be due to differences in HRF lag between regions – GCM is blind to this, as it does not consider the underlying process producing the time series. Worse, while this problem can be resolved by combining GCM (which is naïve to the underlying cause of the analysed time series) with an approach that deconvolves each voxel-wise time series with a canonical HRF, the authors point out that such an approach would not resolve the concern raised here, that Granger causality largely picks up macroscopic temporal patterns of blood in- and out-flow:

“But even if an HRF were perfectly estimated at each voxel in the brain, the mechanism implied in our data is that similarly oxygenated blood arrives at variable time points in the brain independently of any neural activation and will affect lag-based directed functional connectivity measurements. Moreover, blood from one region may then propagate to other regions along the venous drainage pathways also independent of neural to vascular transduction. It is possible that the consistent asymmetries in Granger causality measured in our data may be related to differences in HRF latency in different brain regions, but we consider this less likely given the simpler explanation of blood moving from arteries to veins given the spatial distribution of our results.”

As for correcting for these effects, the authors suggest that a nuisance-variable approach estimating vascular effects related to pulse, respiration, and breath-holding may be effective. However, they caution that the effects observed here (large-scale blood inflow and drainage) take place over a timescale an order of magnitude slower than actual neural differences, and that this approach would need extremely precise estimates of the associated nuisance waveforms to prevent confounded connectivity estimates. For now, I’d advise readers to be critical of what can actually be inferred from GCM until further research is done, preferably using multi-modal methods capable of directly assessing the impact of vascular confounds on GCM estimates. Indeed, although I suppose I am a bit biased, I have to ask whether it wouldn’t be simpler to just use Dynamic Causal Modelling, a technique explicitly designed for estimating causal effects between BOLD time series, rather than a method originally designed to estimate influences between financial stocks.
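To make the nuisance-variable idea concrete, here is a minimal sketch (my own, assuming ordinary least-squares residualization; real physiological-noise pipelines such as RETROICOR derive the regressors from measured pulse and respiration traces, and the waveform below is purely hypothetical) of regressing a slow vascular signal out of a voxel time series before any connectivity analysis:

```python
import numpy as np

def regress_out_nuisance(ts, nuisance):
    """Residualize a voxel time series against nuisance waveforms.

    ts       : (n_timepoints,) voxel time series
    nuisance : (n_timepoints, n_regressors) estimated vascular/physiological
               waveforms (illustrative; real pipelines measure these)
    Returns the residual time series, orthogonal to the regressors.
    """
    X = np.column_stack([np.ones(len(ts)), nuisance])  # intercept + regressors
    beta, *_ = np.linalg.lstsq(X, ts, rcond=None)
    return ts - X @ beta

# Demo: a slow sine wave standing in for a vascular inflow signal
rng = np.random.default_rng(1)
t = np.arange(200)
vascular = np.sin(2 * np.pi * t / 50)
voxel = rng.standard_normal(200) + 2.0 * vascular   # "contaminated" voxel
cleaned = regress_out_nuisance(voxel, vascular[:, None])
print(abs(np.corrcoef(cleaned, vascular)[0, 1]) < 1e-8)  # True
```

The catch the authors raise is visible even in this toy: the cleanup is only as good as the nuisance estimate itself — if the true vascular waveform is mis-specified or lagged, its residue survives and can still drive spurious lag-based connectivity.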

References for further reading:

Friston, K. (2009). Causal modelling and brain connectivity in functional magnetic resonance imaging. PLoS Biology, 7(2), e33. doi:10.1371/journal.pbio.1000033

Friston, K. (2011). Dynamic causal modeling and Granger causality. Comments on: The identification of interacting networks in the brain using fMRI: model selection, causality and deconvolution. NeuroImage, 58(2), 303–305. doi:10.1016/j.neuroimage.2009.09.031

Friston, K., Moran, R., & Seth, A. K. (2013). Analysing connectivity with Granger causality and dynamic causal modelling. Current Opinion in Neurobiology, 23(2), 172–178. doi:10.1016/j.conb.2012.11.010

Webb, J. T., Ferguson, M. A., Nielsen, J. A., & Anderson, J. S. (2013). BOLD Granger causality reflects vascular anatomy. PLoS ONE, 8(12), e84279. doi:10.1371/journal.pone.0084279

Chang, C., Cunningham, J. P., & Glover, G. H. (2009). Influence of heart rate on the BOLD signal: the cardiac response function. NeuroImage, 44(3), 857–869. doi:10.1016/j.neuroimage.2008.09.029

Chang, C., & Glover, G. H. (2009). Relationship between respiration, end-tidal CO2, and BOLD signals in resting-state fMRI. NeuroImage, 47(4), 1381–1393. doi:10.1016/j.neuroimage.2009.04.048

Lund, T. E., Madsen, K. H., Sidaros, K., Luo, W.-L., & Nichols, T. E. (2006). Non-white noise in fMRI: does modelling have an impact? NeuroImage, 29(1), 54–66.

David, O., Guillemain, I., Saillet, S., Reyt, S., Deransart, C., Segebarth, C., & Depaulis, A. (2008). Identifying neural drivers with functional MRI: an electrophysiological validation. PLoS Biology, 6(12), 2683–2697. doi:10.1371/journal.pbio.0060315

Update: This post continued into an extended debate on Russ Poldrack’s blog, where Anil Seth made the following (important) comment:

Hi, this is Anil Seth. What an excellent debate, and I hope I can add a few quick thoughts of my own, since this is an issue close to my heart (no pun intended re vascular confounds).

First, back to the Webb et al. paper. They indeed show that a vascular confound may affect GC-fMRI, but only in the resting state and given suboptimal TR and averaging over diverse datasets. Indeed, I suspect that their autoregressive models may be poorly fit, so that the results rather reflect a sort of mental chronometry à la Menon, rather than GC per se.
In any case, the more successful applications of GC-fMRI are those that compare experimental conditions or correlate GC with some behavioural variable (see e.g. Wen et al., http://www.ncbi.nlm.nih.gov/pubmed/22279213). In these cases hemodynamic and vascular confounds may subtract out.
Interpreting findings like these means remembering that GC is a description of the data (i.e. DIRECTED FUNCTIONAL connectivity) and is not a direct claim about the underlying causal mechanism (e.g. like DCM, which is a measure of EFFECTIVE connectivity).  Therefore (model light) GC and (model heavy) DCM are to a large extent asking and answering different questions, and to set them in direct opposition is to misunderstand this basic point.  Karl, Ros Moran, and I make these points in a recent review (http://www.ncbi.nlm.nih.gov/pubmed/23265964).
Of course both methods are complex and ‘garbage in garbage out’ applies: naive application of either is likely to be misleading or worse.  Indeed the indirect nature of fMRI BOLD means that causal inference will be very hard.  But this doesn’t mean we shouldn’t try.  We need to move to network descriptions in order to get beyond the neo-phrenology of functional localization.  And so I am pleased to see recent developments in both DCM and GC for fMRI.  For the latter, with Barnett and Chorley I have shown that GC-FMRI is INVARIANT to hemodynamic convolution given fast sampling and low noise (http://www.ncbi.nlm.nih.gov/pubmed/23036449).  This counterintuitive finding defuses a major objection to GC-fMRI and has been established both in theory, and in a range of simulations of increasing biophysical detail.  With the development of low-TR multiband sequences, this means there is renewed hope for GC-fMRI in practice, especially when executed in an appropriate experimental design.  Barnett and I have also just released a major new GC software which avoids separate estimation of full and reduced AR models, avoiding a serious source of bias afflicting previous approaches (http://www.ncbi.nlm.nih.gov/pubmed/24200508).
Overall I am hopeful that we can move beyond premature rejection of promising methods on the grounds they fail when applied without appropriate data or sufficient care.  This applies to both GC and fMRI. These are hard problems but we will get there.

Birth of a New School: PDF version and Scribus Template!

As promised, today we are releasing a copy-edited PDF of my “Birth of a New School” essay, as well as a Scribus template that anyone can use to quickly create their own professional-quality PDF manuscripts. Apologies for the lengthy delay, as I’ve been in the middle of a move to the UK. We hope folks will iterate and optimize these templates for a variety of purposes, especially post-publication peer review, commentary, pre-registration, and more. Special thanks to collaborator Kate Mills, who used Scribus to create the initial layout. You might notice we deliberately styled the manuscript around the format of one of those Big Sexy Journals (see if you can guess which one). I’ve heard this elaborate process is supposed to cost somewhere in the tens of thousands of dollars per article, so I guess I owe Kate a few lunches! Seriously though, the entire copy-editing and formatting process took only about 3 or 4 hours total (most of which was just getting used to the Scribus interface), less than the time you would spend formatting and reformatting your article for a traditional publisher. With a little practice, Scribus or similar tools can be used to quickly turn out a variety of high-quality article types.

Here is the article on Figshare, and the direct download link:


The formatted manuscript. Easy!

What do you think? Personally, I’m really pleased with it! We’ve also gone ahead and uploaded the Scribus template to Figshare. You can use this to easily publish your own post-publication peer reviews, commentaries, and whatever else you like. Just copy-paste your own text into the text fields, replace the images, upload to Figshare or a similar service, and you are good to go! In general Scribus is a really awesome open-source tool for publishing, both easy to learn and cross-platform. Another great alternative is Fidus. For now we’re still not exactly sure how to generate citations – in theory, if you format your manuscripts according to these guidelines, Google Scholar will pick them up anywhere on the net and generate alerts. In the meantime, we recommend everyone upload their self-publications to Figshare or a similar service, who are already working on a streamlined citation-generation scheme. We hope you find these useful; now go out and publish some research!

The template:

An easy-to-use Scribus template for self-publishing

Our Scribus template, for quick creation of research proofs.

Storify: twitter tears apart “the neuroscientist who was a psychopath” story
