Neuroconscience

The latest thoughts, musings, and data in cognitive science and neuroscience.

Tag: Neuroscience

When is expectation not a confound? On the necessity of active controls.

Learning and plasticity are hot topics in neuroscience. Whether exploring old-world wisdom or new-age science fiction, the possibility that playing videogames might turn us into attention superheroes, or that practicing esoteric meditation techniques might heal troubled minds, is an exciting avenue for research. Indeed, findings suggesting that exotic behaviors or novel therapeutic treatments might radically alter our brains (and behavior) are ripe for sensational science-fiction headlines purporting vast brain benefits. For those of you not totally bored of methodological crises, here we have one brewing anew. The standard recommendation for those interested in intervention research is the active-controlled experimental design. Unfortunately, in both clinical research on psychotherapy (including meditation) and the more sci-fi areas of brain training and gaming, the use of active controls is rare at best compared to the more convenient (but causally uninformative) passive control group. Now a new article in Perspectives on Psychological Science suggests that even standard active controls may not be sufficient to rule out confounds in the treatment effect of interest.

Why is that? And why exactly do we need active controls in the first place? As the authors clearly point out, what you want to show with such a study is the causal efficacy of the treatment of interest. Quite simply, this means that the thing you believe should have some interesting effect must actually be causally responsible for creating that effect. If you want to argue that standing upside down for twenty minutes a day will make me better at playing videogames in Australia, it must be shown that it is actually standing upside down that causes my increased performance down under. If my improved performance on Minecraft Australian Edition is simply a product of my belief in the power of standing upside down, or my expectation that standing upside down is a great way to best kangaroo-creepers, then we have no way of determining what actually produced that performance benefit. Research on placebos and the power of expectations shows that these kinds of subjective beliefs can have a big impact on everything from attentional performance to mortality rates.

Useful flowchart from Boot et al on whether or not a study can make causal claims for treatment.

Typically researchers attempt to control for such confounds through the use of a control group performing a task as similar as possible to the intervention of interest. But how do we know participants in the two groups don't end up with different expectations about how they should improve as a result of the training? Boot et al point out that without actually measuring these variables, we have no way of knowing for sure that expectation biases don't produce our observed improvements. They then provide a rather clever demonstration of their concern, in an experiment where participants viewed videos of various cognitive tests as well as videos of a training task they might later receive, in this case either the first-person shooter Unreal Tournament or the spatial puzzle game Tetris. Finally, they asked the participants in each group which tests they thought they'd do better on as a result of the training. Importantly, the authors show not only that UT and Tetris led to significantly different expectations, but also that those expectations were specific to the modality of the trained and tested tasks. Thus participants who watched the action-intensive Unreal Tournament videos expected greater improvements on tests of reaction time and visual performance, whereas participants viewing Tetris rated themselves as likely to do better on tests of spatial memory.

This is a critically important finding for intervention research. Many researchers, myself included, have often thought of expectation and demand-characteristic confounds in a rather general way. Until recently, I wouldn't have expected the expectation bias to go much beyond a generic "I'm doing something effective" belief. Boot et al show that our participants are a good deal cleverer than that, forming expectations for improvement that map onto specific dimensions of training. This means that to the degree that an experimenter's hypothesis can be discerned from either the training or the test, participants are likely to form unbalanced expectations.

The good news is that the authors provide several reasonable fixes for this dilemma. The first is simply to measure participants' expectations, specifically in relation to the measures of interest. Another useful suggestion is to run pilot studies ensuring that the two treatments do not evoke differential expectations, or similarly to check that your outcome measures are not subject to these biases. Boot and colleagues throw down the proverbial glove, daring readers to attempt experiments where the "control condition" actually elicits greater expectations yet the treatment effect is preserved. Further common concerns, such as worries about balancing false positives against false negatives, are addressed at length.
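
To make the pilot-check suggestion concrete, here is a minimal sketch of how one might test for differential expectations. Everything here (Python, the rating scale, the group sizes, the numbers themselves) is my own hypothetical illustration, not anything from Boot et al:

```python
import numpy as np
from scipy import stats

# Hypothetical pilot data: 1-7 ratings of expected improvement on a
# reaction-time test, collected after previewing each training task.
rng = np.random.default_rng(42)
ut_ratings = rng.integers(4, 8, size=30)      # previewed Unreal Tournament
tetris_ratings = rng.integers(2, 6, size=30)  # previewed Tetris

# A significant difference would flag this outcome measure as vulnerable
# to a differential-expectation confound for this pair of treatments.
t, p = stats.ttest_ind(ut_ratings, tetris_ratings)
print(f"t = {t:.2f}, p = {p:.4f}")
```

In a real pilot you would of course run this (or a non-parametric equivalent) for every outcome measure in the battery, and treat a null result with appropriate caution given the sample size.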

The entire article is a great read, timely and full of excellent suggestions for caution in future research. It also brought something I've been chewing on for some time quite clearly into focus. From the general perspective of learning and plasticity, I have to ask: at what point is an expectation no longer a confound? Boot et al give an interesting discussion on this point, in which they suggest that even in the case of balanced expectations and positive treatment effects, an expectation-dependent response (in which outcome correlates with expectation) may still give cause for concern as to the causal efficacy of the trained task. This is a difficult question that I believe ventures far into the territory of what exactly constitutes the minimal necessary features for learning. As the authors point out, placebo and expectation effects are "real" products of the brain, with serious consequences for behavior and treatment outcome. Yet even in the medical community there is a growing understanding that such effects may be essential parts of the causal machinery of healing.

Possible outcome of a training experiment, in which the control shows no dependence between expectation and outcome (top panel) and the treatment of interest shows dependence (bottom panel). Boot et al suggest that such a case may invalidate causal claims for treatment efficacy.

To what extent might this also be true of learning or cognitive training? Surely we can assume that expectations shape training outcomes; otherwise the whole point about active controls would be moot. But can one really have meaningful learning if there is no expectation of improvement? I realize that from an experimental/clinical perspective, the question is not "is expectation important for this outcome" but "can we observe a treatment outcome when expectations are balanced". Still, when we begin to argue that the observation of expectation-dependent responses in a balanced design might invalidate our outcome findings, I have to wonder if we are at risk of valuing methodology over phenomena. If expectation is a powerful, potentially central mechanism in the causal apparatus of learning and plasticity, we shouldn't be surprised when even efficacious treatments are modulated by such beliefs. In the end I am left wondering if this is simply an inherent limitation in our attempt to apply the reductive apparatus of science to increasingly holistic domains.

Please do read the paper, as it is an excellent treatment of a critically ignored issue in the cognitive and clinical sciences. Anyone undertaking related work should expect this reference to appear in reviewers' replies in the near future.

EDIT:
Professor Simons, a co-author of the paper, was nice enough to answer my question on Twitter. Simons pointed out that a study that balanced expectations, found group outcome differences, and further found correlations of those differences with expectation could conclude that the treatment was causally efficacious, but that it also depends on expectations (effect + expectation). This would obviously be superior to an unbalanced design, or one without measurement of expectation, as it would actually tell us something about the importance of expectation in producing the causal outcome. Be sure to read through the very helpful FAQ they've posted as an addendum to the paper, which covers these questions and more in greater detail. Here is the answer to my specific question:

What if expectations are necessary for a treatment to work? Wouldn’t controlling for them eliminate the treatment effect?

No. We are not suggesting that expectations for improvement must be eliminated entirely. Rather, we are arguing for the need to equate such expectations across conditions. Expectations can still affect the treatment condition in a double-blind, placebo-controlled design. And, it is possible that some treatments will only have an effect when they interact with expectations. But, the key to that design is that the expectations are equated across the treatment and control conditions. If the treatment group outperforms the control group, and expectations are equated, then something about the treatment must have contributed to the improvement. The improvement could have resulted from the critical ingredients of the treatment alone or from some interaction between the treatment and expectations. It would be possible to isolate the treatment effect by eliminating expectations, but that is not essential in order to claim that the treatment had an effect.

In a typical psychology intervention, expectations are not equated between the treatment and control condition. If the treatment group improves more than the control group, we have no conclusive evidence that the ingredients of the treatment mattered. The improvement could have resulted from the treatment ingredients alone, from expectations alone, or from an interaction between the two. The results of any intervention that does not equate expectations across the treatment and control condition cannot provide conclusive evidence that the treatment was necessary for the improvement. It could be due to the difference in expectations alone. That is why double blind designs are ideal, and it is why psychology interventions must take steps to address the shortcomings that result from the impossibility of using a double blind design. It is possible to control for expectation differences without eliminating expectations altogether.
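
To make the "effect + expectation" scenario concrete, here is a minimal sketch of the kind of analysis that could separate a group effect from its dependence on expectation. The data, effect sizes, and variable names are all invented for illustration; this is not an analysis from the paper:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic data: expectations are balanced across groups by design, but the
# treatment's benefit scales with each participant's expectation.
rng = np.random.default_rng(7)
n = 100
group = np.repeat([0, 1], n)                        # 0 = control, 1 = treatment
expectation = rng.normal(5, 1, size=2 * n)          # same distribution in both groups
outcome = 0.5 * group + 0.3 * group * expectation + rng.normal(0, 1, size=2 * n)

df = pd.DataFrame({"group": group, "expectation": expectation, "outcome": outcome})
fit = smf.ols("outcome ~ group * expectation", data=df).fit()
print(fit.summary().tables[1])
```

With expectations equated, a reliable group term supports causal efficacy of the treatment, while a reliable group:expectation interaction indicates that the size of the effect depends on expectation; that is exactly the "effect + expectation" conclusion Simons describes.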

Active-controlled, brief body-scan meditation improves somatic signal discrimination.

Here in the science blogosphere we often like to run to the presses whenever a laughably bad study comes along, pointing out all its incredible feats of ignorance and sloth. However, this can lead to science-sucks cynicism syndrome (a common ailment amongst graduate students), where one begins to feel that all the literature is rubbish and it just isn't worth your time to try to do something truly proper and interesting. If you are lucky, it is at this moment that a truly excellent paper will come along at just the right time to pick up your spirits and re-invigorate your work. Today I found myself at one such low point, struggling to figure out why my data suck, when just such a beauty of a paper appeared in my RSS reader.

The paper, "Brief body-scan meditation practice improves somatosensory perceptual decision making", appeared in this month's issue of Consciousness and Cognition. Laura Mirams et al set out to answer a very simple question regarding the impact of meditation training (MT) on a somatic signal detection task (SSDT). The study is well designed; after randomization, each group received audio CDs containing either 15 minutes of daily body-scan meditation or excerpts from The Lord of the Rings. For the SSDT, participants simply report when they feel a vibration stimulus on the finger, with the baseline vibration intensity first individually calibrated to a 50% detection rate. The authors then apply a signal-detection analysis framework to discern the sensitivity (d') and decision criterion (c).
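
For readers who are new to signal detection theory, here is a minimal sketch of the standard equal-variance computation of d' and c from hit and false-alarm rates. The code and the example rates are my own illustration (in Python), not Mirams et al's data:

```python
from scipy.stats import norm

def sdt_measures(hit_rate, fa_rate):
    """Equal-variance SDT: d' = z(H) - z(F); criterion c = -(z(H) + z(F)) / 2."""
    z_h, z_f = norm.ppf(hit_rate), norm.ppf(fa_rate)
    return z_h - z_f, -(z_h + z_f) / 2

# Invented numbers: holding the hit rate constant, reducing false alarms
# raises sensitivity (and shifts the criterion toward more conservative).
print(sdt_measures(0.60, 0.40))  # d' ~ 0.51, c = 0.00
print(sdt_measures(0.60, 0.25))  # d' ~ 0.93, c ~ 0.21
```

This is exactly the pattern reported below: an increase in d' driven by fewer false alarms rather than more hits.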

Mirams et al found that, even when controlling for a host of baseline factors including trait mindfulness and baseline somatic attention, MT led to a greater increase in d', driven by significantly reduced false alarms. Although many theorists and practitioners of MT suggest a key role for interoceptive and somatic attention in related alterations of health, brain, and behavior, there exist almost no data addressing this prediction, making these findings extremely interesting. The idea that MT should impact interoception and somatosensation is very sensible: in most (novice) meditation practices it is common to focus attention on bodily sensations of, for example, the breath entering the nostril. Further, MT involves a particular kind of open, non-judgemental awareness of bodily sensations, and in general is often described to novice students as strengthening the relationship between the mind and sensations of the body. However, most existing studies on MT investigate traditional exteroceptive, top-down elements of attention, such as conflict resolution and the ability to maintain attentional fixation for long periods of time.

While MT certainly does involve these features, it is arguable that the interoceptive elements are more specific to the precise mechanisms of interest (they are what you actually train), whereas the attentional benefits may be more of a side effect, reflecting an early emphasis in MT on establishing attention. Thus in a traditional meditation class, you might first learn some techniques to fixate your attention, and then later learn to deploy your attention to specific bodily targets (i.e. the breath) in a particular way (non-judgmentally). The goal is not necessarily to develop a super-human ability to filter distractions, but rather to change the way in which interoceptive responses to the world (i.e. emotional reactions) are perceived and responded to. This hypothesis is well reflected in the elegant study by Mirams et al; they postulate specifically that MT will lead to greater sensitivity (d'), driven by reduced false alarms rather than an increased hit rate, reflecting a greater ability to discriminate the nature of an interoceptive signal from noise (note: see comments for clarification on this point by Steve Fleming – there is some ambiguity in interpreting the informational role of HR and FA in d'). This not only reflects the theoretically specific contribution of MT (beyond attention training, which might be better trained by video games, for example), but also makes a mechanistically specific prediction, namely that MT leads to a shift specifically in the quality of interoceptive signal processing, rather than in raw attentional control.

At this point, you might ask: if everyone is so sure that MT involves training interoception, why is there so little data on the topic? The authors do a great job reviewing findings (even including currently in-press papers) on interoception and MT. Currently there is one major null finding using the canonical heartbeat detection task, where advanced practitioners self-reported improved heartbeat detection but in reality performed at chance. Those authors speculated that the heartbeat task might not accurately reflect the modality of interoception engaged in by practitioners. In addition, a recent study investigated somatic discrimination thresholds in a cross-section of advanced practitioners and found that the ability to make meta-cognitive assessments of one's threshold sensitivity correlated with years of practice. A third recent study showed greater tactile sensation acuity in practitioners of Tai Chi. One longitudinal study [PDF], a wait-list controlled fMRI investigation by Farb et al, found that a mindfulness-based stress reduction course altered BOLD responses during an attention-to-breath paradigm. Collectively these studies do suggest a role for MT in training interoception. However, as I have complained of endlessly, cross-sections cannot tell us anything about the underlying causality of the observed effects, and longitudinal studies must be active-controlled (not waitlisted) to discern mechanisms of action. Thus active-controlled longitudinal designs are desperately needed, both to determine the causality of a treatment on some observed effect, and to rule out confounds associated with motivation, demand characteristics, and expectation. Without such a design, it is very difficult to conclude anything about the mechanisms of interest in an MT intervention.

In this regard, Mirams et al went above and beyond the call of duty as defined by the average paper. The choice of delivering the intervention via CD is excellent, as we can rule out instructor enthusiasm/ability confounds. Further, the intervention chosen is extremely simple and well described; it is just a basic body-scan meditation without additional fluff or fanfare, lending mechanistic specificity. Both groups were even instructed to close their eyes and sit when listening, balancing these often-overlooked structural factors. In this sense, Mirams et al have controlled for instruction, motivation, intervention context, and baseline trait mindfulness, and even isolated the variable of interest: only the MT group worked with interoception, though both exerted a prolonged period of sustained attention. Armed with these controls, we can actually say that MT led to an alteration in interoceptive d', through a mechanism dependent upon the specific kind of interoceptive awareness trained in the intervention.

It is here that I have one minor nitpick with the paper. Although the use of Lord of the Rings audiotapes has precedent, and is likely a great control for attention and motivation, you could be slightly worried that listening to stories about Elves and Orcs is not an ideal control for listening to hours of tapes instructing you to focus on your bodily sensations, if the measure of interest involves fixating on the body. A purer active control might have been a book describing anatomy or body parts; then we could more exhaustively conclude that not only is it interoception driving the findings, but the particular form of interoceptive attention deployed by meditation training. As it is, a conservative person might speculate that the observed differences reflect demand characteristics: MT participants deploy more attention to the body due to a kind of priming mechanism in the teaching. However, this is an extreme nitpick and does not detract from the fact that Mirams and co-authors have made an extremely useful contribution to the literature. In the future it would be interesting to repeat the paradigm with a more body-oriented control, and perhaps also in advanced practitioners before and after an intensive retreat, to see if the effect holds at later stages of training. Of course, given my interest in applying signal-detection theory to interoceptive meta-cognition, I also cannot help but wonder what the authors might have found if they'd applied a Fleming-style meta-d' analysis to this study.

All in all, a clear study with tight methods, addressing a desperately under-developed research question, in an elegant fashion. The perfect motivation to return to my own mangled data ☺

Quick post – Dan Dennett’s Brain talk on Free Will vs Moral Responsibility

As a few people have asked me to give some impression of Dan's talk at the FIL Brain meeting today, I'm just going to jot down my quickest impressions before I run off to the pub to celebrate finishing my dissertation today. Please excuse any typos, as what follows is unedited! Dan gave a talk very similar to his previous one several months ago at the UCL philosophy department. As always, it was a lively talk with lots of funny moments and appeals to common sense. Here the focus was more on the media activities of neuroscientists, with some particularly funny finger-wagging at Patrick Haggard and Chris Frith. Good bits included his discussion of evidence that priming subjects against free will seems to make them more likely to commit immoral acts (cheating, stealing), and a very firm statement that neuroscience is being irresponsible, complete with bombastic anti-free-will quotes by the usual suspects.

Although I am a bit rusty on the mechanics of the free will debate, Dennett essentially argued for a compatibilist view of free will and determinism. The argument goes something like this: the basic idea that free will is incompatible with determinism comes from a mythology that says that, in order to have free will, an agent must be wholly unpredictable. Dennett argues that this is absurd; we only need to be somewhat unpredictable. Rather than being perfectly random free agents, what really matters on Dennett's view is moral responsibility pragmatically construed. He lists a "spec sheet" for constructing a morally responsible agent, including "could have done otherwise, is somewhat unpredictable, acts for reasons, is subject to punishment…". In essence, Dan seems to be claiming that neuroscientists don't really care about "free will"; rather, we care about the pragmatic limits within which we feel comfortable entering into legal agreements with an agent. Thus the job of the neuroscientist is not to try to reconcile the folk and scientific views of "free will", which isn't interesting (on Dennett's account) anyway, but rather to describe the conditions under which an agent can be considered morally responsible. The take-home message seemed to be that moral responsibility is essentially a political rather than metaphysical construct.

I'm afraid I can't go into terrible detail about the supporting arguments; to be honest, Dan's talk was extremely short on argumentation. The version he gave to the philosophy department was much heavier on technical argumentation, particularly centered around proving that compatibilism doesn't contradict "it could have been otherwise". In all, the talk was very pragmatic, and I do agree with the conclusions to some degree: that we ought to be more concerned with the conditions and function of "will" and not argue so much about the metaphysics of "free". Still, my inner philosopher felt that Dan is embracing some kind of basic logical contradiction and hand-waving it away with funny intuition pumps, which for me are typically unsatisfying.

For reference, here is the abstract of the talk:

Nothing—yet—in neuroscience shows we don’t have free will

Contrary to the recent chorus of neuroscientists and psychologists declaring that free will is an illusion, I’ll be arguing (not for the first time, but with some new arguments and considerations) that this familiar claim is so far from having been demonstrated by neuroscience that those who advance it are professionally negligent, especially given the substantial social consequences of their being believed by lay people. None of the Libet-inspired work has the drastic implications typically adduced, and in fact the Soon et al (2008) work, and its descendants, can be seen to demonstrate an evolved adaptation to enhance our free will, not threaten it. Neuroscientists are not asking the right questions about free will—or what we might better call moral competence—and once they start asking and answering the right questions we may discover that the standard presumption that all “normal” adults are roughly equal in moral competence and hence in accountability is in for some serious erosion. It is this discoverable difference between superficially similar human beings that may oblige us to make major revisions in our laws and customs. Do we human beings have free will? Some of us do, but we must be careful about imposing the obligations of our good fortune on our fellow citizens wholesale.

Uta Frith – The Curious Brain in the Museum

It's not every day that collaborations between the humanities and sciences lead to tangible fruits, but I'm excited to share with you one case in which they did, with surprisingly cute results! Leading developmental psychologist and Interacting Minds Research Foundation Professor Uta Frith recently gave the Victoria and Albert Museum's 2010 Henry Cole lecture. Below you will find the PowerPoint slides from this talk, in which she discussed the relationship between her recent work on social learning and the experience of a museum. A filmmaker was then inspired to put together the following short film, "The Curious Brain in the Museum". It's a very well done film and a fascinating look at the museum through Uta's eyes.

Here are the slides from the talk:

And the resulting video:

In this short film, specially commissioned as part of the Royal Society’s 350th anniversary celebrations in 2010, Professor Uta Frith FRS and her young companion, Amalie Heath-Born, find out just what goes on inside our brains when we view the treasures on display at London’s world-famous Victoria and Albert Museum.

“The human mind/brain is exquisitely social and automatically responds to signals sent by other people. These signals can be artfully designed objects, and these can come from people long in the past. The art and design that is embodied in the object can evoke in the brain different streams of imagination: how it was made, the value it represents, and the meaning it conveys. The human mind/brain has ancient reward systems, which respond to, say, stimuli signaling food to the hungry, but also respond to social stimuli signaling relevance to the curious. This makes for a never ending well spring of spontaneous teaching and learning. Education in the museum environment is perfectly attuned to the curious mind.”  Uta Frith (2010)

You can read more about the event and the film on the Royal Society page.

Intrinsic correlations between Salience, Primary Sensory, and Default Mode Networks following MBSR

Going through my RSS backlog today, I was excited to see Kilpatrick et al.'s "Impact of Mindfulness-Based Stress Reduction Training on Intrinsic Brain Connectivity" appear in this week's early view of NeuroImage. Although I try to keep my own work focused on primary research in cognition and connectivity, mindfulness training (MT) is a central part of my research. Additionally, there are few published findings on intrinsic connectivity in this area. Previous research has mainly focused on between-group differences in anatomical structure (gray and white matter, for example) and task-related activity. A few more recent studies have gone as far as to randomize participants into wait-listed control and MT groups.

While these studies are interesting, they are of course limited in scope by several factors, chief among them the lack of active control groups. My supervisor Antoine Lutz emphasizes this point; in addition to our active-controlled research here in Århus, his group at Wisconsin-Madison and others are actively preparing such datasets. Active controls are simply "mock" interventions (or real ones) designed to control for every possible aspect of being involved in an intervention (placebo, community, motivation) in order to isolate the variables specific to that treatment (in this case meditation, but not sitting, breathing, or feeling special). Active controls are important, as there is a great deal of research demonstrating that cognition itself is susceptible to placebo-like motivational effects. All in all, I've seen several active-controlled, cognitive-behavioral studies in review that suggest we should be strongly skeptical of any non-active-controlled findings. While I can't discuss these in detail, I will mention some of these issues in my review of the NeuroImage manuscript. It suffices to say, however, that if you are working on a passive-controlled study in this area, you had better get it out fast, as you can expect reviewers to be greatly tightening their expectations in the coming months as more and more rigorous papers appear. As Sara Lazar put it during my visit to her lab last summer, "the low-hanging fruit of MBSR brain research are rapidly vanishing". Overall this is a good thing for the community, and we'll see why in a moment.

Now let us turn to the paper at hand. Kilpatrick et al start with a standard summary of MBSR and rsfMRI research, focusing on findings indicating that MBSR trains focused attention, sensory introspection/interoception, and perception. They briefly review the now well-established findings indicating that rsfMRI is sensitive to training-related changes, including studies demonstrating the sensitivity of the resting state to conditions such as fatigue, eyes-open vs. eyes-closed, and recent sleep. This is all well and good, but it becomes a bit odd when we see just how they collected their data.

Briefly, they recruited 32 healthy adults for randomization to MBSR or a waitlist control group. Participants then complete the Mindfulness Attention Awareness Scale (MAAS), and the MBSR group receives 8 weeks of diary-logged standard MBSR training. After training, participants are recalled for the rsfMRI scan. An important detail here is that participants are not scanned before and after training, rendering the fMRI portion of the experiment closer to a cross-sectional than a true longitudinal design. At the time of the scan, the researchers then administer two "task-free states", with and without auditory white noise. The authors indicate that the noise condition is included "to enable new analysis methods not conducted here", presumably to average out scanner-noise-related effects. They later indicate no differences between the two conditions, which causes me to ask how much here is meditation-specific vs. focusing-on-scanner-noise-specific. Finally, they administer the "task-free" states with a slight twist:

"During this baseline scan of about 5 min, we would like you to again stay as still as possible and be mindfully aware of your surroundings. Please keep your eyes closed during this procedure. Continue to be mindfully aware of whatever you notice in your surroundings and your own sensations. Mindful awareness means that you pay attention to your present moment experience, in this case the changing sounds of the scanner/changing background sounds played through the headphones, and to bring interest and curiosity to how you are responding to them."

While the manipulation makes sense given the experimenters' hypothesis concerning sensory processing, an ongoing controversy in resting-state research is just what it is that constitutes "rest". Research here suggests that functional connectivity is sensitive to task instructions and variations in visual stimulation, and many complain about the lack of specificity within different rest conditions. Kilpatrick et al's manipulation makes sense given that what they really want to see is meditation-related alterations, but it's a dangerous leap without first establishing the relationship between "true rest" and their "auditory meditation" condition. Research on the impact of scanner noise indicates some degree of noise-related nuisance effects, and also some functionally significant effects. If you've never been in an MR experiment, the scanner is LOUD. During my first scan I actually started feeling claustrophobic due to the oppressive, machine-gun-like noise of the gradient coil. Anyway, it's really troubling that Kilpatrick et al don't include a totally task-free state for comparison, and I'm hesitant to call this a resting-state finding without further clarification.

The study is extremely interesting, but it's important to note its limitations:

  1. Lack of active control: groups are not controlled for motivation.
  2. No pre/post scan.
  3. Novel resting state without comparison condition.
  4. Findings are discussed as ‘training related’ without report of correlation with reported practice hours.
  5. Anti-correlations reported with global-signal nuisance regression; no discussion of possible regression-related inducement (see edit).
  6. Discussion of findings is unclear; reported as greater DMN x Auditory correlation, but the independent component includes large portions of the salience network.

Ultimately they identify an "auditory/salience" independent component network (ICN) (primary auditory cortex, STG, posterior insula, ACC, and lateral frontal cortex) and then conduct seed-regression analyses of that network with areas of the default mode network (DMN) and dorsal attention network (DAN). I find it quite strange that they pick up a network that seems to conflate primary sensory and salience regions, as do the researchers, who state: "Therefore, the ICN was labeled as 'auditory/salience'. It is unclear why the components split differently in our sample, perhaps the instructions that brought attention to auditory input altered the covariance structure somewhat." Given the lack of motivational control in the study, the issues begin to pile onto one another and I am not sure what we can really conclude. They further find that the MBSR group demonstrates greater "auditory/salience x DMN connectivity" and "greater visual and auditory functional connectivity". They also report several increased anti-correlations between the auditory/salience network, dMPFC, and visual regions. I find this to be an extremely tantalizing finding, as it would reflect a decrease in processing automaticity amongst the salience, central executive, and default mode networks. There are, however, some serious problems with these kinds of analyses that the authors don't address, so we again must reserve any strong conclusions. Here is what Kilpatrick et al conclude:

“The current findings extend the results of prior studies that showed meditation-related changes in specific brain regions active during attention and sensory processing by providing evidence that MBSR trained compared to untrained subjects, during a focused attention instruction, have increased connectivity within sensory networks and between regions associated with attentional processes and those in the attended sensory cortex. In addition they show greater differentiation between regions associated with attentional processes and the unattended sensory cortex as well as greater differentiation between attended and unattended sensory networks”

As is typical, the list of findings is quite long and I won't bother restating it all here. Given the resting instructions, it seems clear that the freshly post-MBSR participants are likely to have engaged a pretty dedicated set of cognitive operations during the scan. Yet it's totally unclear what the control group would do given these contemplative instructions. Presumably they'd just lie in the scanner and try to tune out the noise; but you can see here how it's not clear that these conditions are really comparable without having some idea of what's going on. In essence, what you (might) have here is one group actually doing something (meditation) and the other group not doing much at all. Ideally you want to see how training impacts the underlying process in a comparable way. Motivation has been repeatedly linked to BOLD signal intensity, and in this case it could very well be that these findings are simple artifacts of motivation to perform. If one group is actually practicing mindfulness and the other isn't, you have not isolated the variable of interest. The authors could have somewhat alleviated this by including data from the additional pain task ("not reported here") and/or at least giving us a correlation of the findings with the MAAS scale. I emphasize that I do find the findings of this paper interesting; they map extremely well onto my own hypotheses about how RSNs interact with mindfulness training. But we must interpret them with caution.

Overall I think this was a project with a strong theoretical motivation and some very interesting ideas. One problem with looking at state mindfulness in the scanner is the cramped, noisy environment, and I think Kilpatrick et al had a great idea in their attempt to use the noise itself as a manipulation. Further, the findings make a good deal of sense. Still, given the above limitations, it's important to be really careful with our conclusions. At best, this study warrants an extremely rigorous follow-up, and I wish NeuroImage had published it with a bit more information, such as the status of any rest-MAAS correlations. Anyway, this post has gotten quite long and I think I'd best get back to work. For my next post I'll go into more detail about some of the issues confronting resting-state research (what is "rest"?) and mindfulness research (the role of active controls for community, motivation, and placebo effects), and what they mean for the field.

edit: I just realized I never explained limitation #5. See my "beautiful noise" slides (previous post) regarding the controversy over global signal regression and anti-correlations. Simply put, there is fairly convincing evidence that this procedure (designed to eliminate low-frequency nuisance covariates) may actually mathematically induce anti-correlations where none exist, because regressing out the global signal constrains the residuals in a way that forces some correlations to become negative. While it's not a slam dunk (see the response by Fox et al), it's an extremely controversial area, and all anti-correlation findings should be interpreted in light of this possibility.
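
The inducement argument is easy to demonstrate numerically. Here is a minimal sketch (my own synthetic example in Python, not anyone's real data or pipeline): three "regions" that share a common fluctuation are all positively correlated, yet after regressing out their mean ("global") signal, the residual time series become anti-correlated even though no true anti-correlation was ever present:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 500
shared = rng.standard_normal(T)
# Three synthetic regions driven by one shared fluctuation plus independent
# noise: every pairwise correlation is positive.
x = np.stack([shared + 0.8 * rng.standard_normal(T) for _ in range(3)])
x -= x.mean(axis=1, keepdims=True)      # center each region's time series
print(np.corrcoef(x).round(2))          # positive off-diagonal values

# "Global signal" regression: remove the mean time series from every region.
g = x.mean(axis=0)
beta = (x @ g) / (g @ g)                # per-region regression weights
resid = x - np.outer(beta, g)
print(np.corrcoef(resid).round(2))      # off-diagonals are now negative
```

The intuition is that once the global signal is removed, the residuals cannot all correlate positively with one another; some pairs are pushed below zero by construction.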

If you like this post please let me know in the comments! If I can get away with rambling about this kind of stuff, I’ll do so more frequently.

My response to Carr and Pinker on Media Plasticity

Our ongoing discussion of the moral panic surrounding Nicholas Carr's book The Shallows continues over at Carr's blog today, with his recent response to Pinker's slamming of the book. I maintain that there are good and bad (frightening!!) things in both accounts: namely, Pinker's stolid refusal to acknowledge the research I've based my entire PhD on, and Carr's endless fanning of a one-sided moral panic.

Excerpt from Carr’s Blog:

Steven Pinker and the Internet

And then there’s this: “It’s not as if habits of deep reflection, thorough research and rigorous reasoning ever came naturally to people.” Exactly. And that’s another cause for concern. Our most valuable mental habits – the habits of deep and focused thought – must be learned, and the way we learn them is by practicing them, regularly and attentively. And that’s what our continuously connected, constantly distracted lives are stealing from us: the encouragement and the opportunity to practice reflection, introspection, and other contemplative modes of thought. Even formal research is increasingly taking the form of “power browsing,” according to a 2008 University College London study, rather than attentive and thorough study. Patricia Greenfield, a professor of developmental psychology at UCLA, warned in a Science article last year that our growing use of screen-based media appears to be weakening our “higher-order cognitive processes,” including “abstract vocabulary, mindfulness, reflection, inductive problem solving, critical thinking, and imagination.”

As someone who has enjoyed and learned a lot from Steven Pinker’s books about language and cognition, I was disappointed to see the Harvard psychologist write, in Friday’s New York Times, a cursory op-ed column about people’s very real concerns over the Internet’s influence on their minds and their intellectual lives. Pinker seems to dismiss out of hand the evidence indicating that our intensifying use of the Net and related digital media may be reducing the depth and rigor of our thoughts. He goes so far as to assert that such media “are the only things that will keep us smart.” And yet the evidence he offers to support his sweeping claim consists largely of opinions and anecdotes, along with one very good Woody Allen joke.

Right here I would like to point out the kind of leap Carr is making. I'd really like a closer look at the supposed evidence demonstrating that "our intensifying use of the Net and related digital media may be reducing the depth and rigor of our thoughts." This is a huge claim! How does one define the "depth" and "rigor" of our thoughts? I know of exactly one peer-reviewed, high-impact paper demonstrating a loss of specifically executive function in heavy-media multitaskers. While there is evidence that, generally speaking, multitasking can interfere with some forms of goal-directed activity, I am aware of no papers directly linking specific forms of internet behavior to a drop in executive function. Furthermore, the HMM paper included in its measure of multitasking "watching TV", "viewing funny videos", and "playing videogames". I don't know about you, but for me there is definitely a difference between "work" multitasking, in which I focus and work through multiple streams, and "play" multitasking, in which I might casually surf the net while watching TV. The second claim is worse: what exactly is "depth"? And how do we link it to executive functioning?

Is Carr claiming that people with executive function deficits are incapable of, or impaired in, thinking creatively? If it takes me 10 years to publish a magnum opus, have I thought less deeply than the author who cranks out a feature-length popular novel every 2 years? Depth involves a normative judgment of what separates "good" thinking from "bad" thinking, and to imply there is some kind of peer-reviewed consensus here is patently false. In fact, here is a recent review paper on fMRI creativity research (is this depth?) indicating that the existing research is so incredibly disparate and poorly defined as to be untenable. That's the problem with Carr's claims: he oversimplifies both the diversity of internet usage and the existing research on executive and creative function. To be fair to Carr, he does go on to do a fair job of dismantling Pinker's frighteningly dogmatic rejection of generalizable brain plasticity research:

One thing that didn’t surprise me was Pinker’s attempt to downplay the importance of neuroplasticity. While he acknowledges that our brains adapt to shifts in the environment, including (one infers) our use of media and other tools, he implies that we need not concern ourselves with the effects of those adaptations. Because all sorts of things influence the brain, he oddly argues, we don’t have to care about how any one thing influences the brain. Pinker, it’s important to point out, has an axe to grind here. The growing body of research on the adult brain’s remarkable ability to adapt, even at the cellular level, to changing circumstances and new experiences poses a challenge to Pinker’s faith in evolutionary psychology and behavioral genetics. The more adaptable the brain is, the less we’re merely playing out ancient patterns of behavior imposed on us by our genetic heritage.

Here is my response, posted on Nick’s blog:

Hi Nick,

As you know from our discussion at my blog, I'm not really a fan of the extreme views given by either you or Pinker. However, I applaud the thorough rebuttal you've given here to Steven's poorly researched response. As someone doing my PhD in neuroplasticity and cognitive technology, it absolutely infuriated me to see Steven completely handwave away a decade of solid research showing generalizable cognitive gains from various forms of media practice. To simply ignore findings, for example from the Bavelier lab, that demonstrate reliable and highly generalizable cognitive and visual gains and plasticity is to border on the unethically dogmatic.

Pinker isn't well known for being flexible within cognitive science, however; he's probably the only person even more dogmatic about nativist modularism than Fodor. Unfortunately, Steven enjoys a large public following and his work has really been embraced by the anti-religion "brights" movement. While on some levels I appreciate this movement's desire to promote rationality, I cringe at how great scholars like Dennett and Pinker seem totally unwilling to engage with the expanding body of research that casts a great deal of doubt on the 1980s-era cogsci they built their careers on.

So I give you kudos there. I close, as usual, by saying that you're presenting a "sexy" and somewhat sensationalistic account that, while sure to sell books and generate controversy, is probably based more in moral panic than sound theory. I have no doubt that the evidence you've marshaled demonstrates the cognitive potency of new media. Further, I'm sure you are aware of the heavy-media multitasking paper demonstrating a drop in executive functioning in HMMs.

However, in the posts I've seen, you neglect to emphasize what those authors clearly did: that these findings are not likely to represent a true loss of function, but rather indicate a shift in cognitive style. Your unwillingness to declare the normative element in your thesis regarding "deep thought" is almost as chilling as Pinker's total refusal to acknowledge the growing body of plasticity research. Simply put, I think you are aware that you've conflated executive processing with "deep thinking", and so are not really making a case that we know to be true.

Media is a tool like any other. Its outcome measures are completely dependent on how we use it and on our individual differences. You could make this case quite well with your evidence, but you seem to embrace the moral panic surrounding your work. It's obvious that certain patterns, including the ones probably driving your collected research, will play on our plasticity to create cognitive differences. Plasticity is limited, however, and you really don't engage with the most common theme in the mental training literature: balance and trade-off. Your failure to acknowledge the economical and often conservative nature of the brain forces me to lump your work in with the decade that preceded your book, in which it was proclaimed that violent video games and heavy metal music would rot our collective minds. These things didn't happen, except in those who were already at high risk, and furthermore they produced unanticipated cognitive gains. I think if you want to be on the "not wrong" side of history, you may want to introduce a little flexibility into your argument. I guess if it makes you feel better, for many in the next generation of cognition researchers, it's already too late for a dogmatic thinker like Pinker.

Final thoughts?

A defense of vegetarian fMRI (1/2)

Recently there's been much ado about a newly published fMRI study of empathetic responding in vegetarians, vegans, and omnivores. The study isn't perfect, which the authors admit, but I find it interesting and relatively informative for an fMRI paper. The Neurocritic doesn't; rather, he raises some seemingly serious issues with the study. I promised on Twitter that I'd defend my claim that the study is good (and that Neurocritic could do better). But first, a motivated ramble to distract and confuse you.

As many of you might realize, neuroscience could be said to be going through something like puberty. While the public remains infatuated with every poorly worded research report, researchers within the neurosciences have to view brain mapping through an increasingly skeptical lens. This is a good thing: science progresses through the introduction and use of new technologies and the eventual skeptical refinement of their products.

And certainly there are plenty of examples of shoddy neuroscience out there, whether it's reports of voodoo correlations or inconsistencies between standard fMRI analysis packages. Properly executed, attention to these issues and a healthy skepticism of the methods will ultimately result in a refined science. Yet we must also be careful to apply the balm of skepticism in a refined manner: neuroscientists are people too, and we work in an increasingly competitive field where there are few well-defined standards and even less clarity.

Take an example from my lab that happened just today. We're currently analyzing some results from a social cognition experiment my colleague Kristian Tylen and I conducted last year. Like many fMRI results, our hypotheses (which were admittedly a bit vague when we made them) were not exactly supported by our findings. Rather, we ended up with a scattered series of blobs that appeared to mostly center on early visual areas. This is obviously boring and unpublishable, so after some time we decided to do a small-volume correction on some areas we'd discussed in a published paper. This finally revealed some interesting findings somewhere around the TPJ, which brings me to the point of this story.

My research has thus far mostly focused on motor and prefrontal regions. We in neuroimaging can often fall victim to what I call "blob blindsight", where we focus so greatly on a single area or handful of areas that we forget there's a wide world of cortex out there. Imagine my surprise when I tried to get clear about whether our finding was situated in exactly the pSTS, the TPJ, or the nearby inferior parietal lobule (IPL), only to discover that these three areas are nearly indistinguishable from one another anatomically.

All of these regions are involved in different aspects of social cognition, and across the literature there is no clear anatomical differentiation among them. In many cases, researchers will just lump them together as pSTS/TPJ, despite the fact that a great deal of research has gone into explicitly differentiating them. Now what does one do with a blob that lands somewhere in the middle, overlapping all three? More specifically, imagine the case where your activation focus lands smack dab in the middle, or a few voxels to the left. Is it TPJ? Or IPL? Or is it really the conjunction of all three? And if so, how does one make sense of that, given the wide array of functions and connectivity patterns attributed to these areas? The IPL is part of the default mode network, whereas the TPJ and pSTS are not. It's really quite a mess, and the answer you choose will likely depend upon the interpretation you give, given the vast variety of functions allocated to these three regions.
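
To see how underdetermined the labeling problem is, here is a toy sketch. The coordinates below are hypothetical stand-ins I made up for illustration, not canonical peaks from any atlas or paper:

```python
import numpy as np

# Hypothetical MNI coordinates (mm) for three candidate labels; illustration only.
candidates = {"pSTS": (-52, -48, 16), "TPJ": (-54, -60, 20), "IPL": (-44, -62, 36)}
peak = np.array([-50, -55, 25])  # a blob landing "somewhere in the middle"

for name, xyz in candidates.items():
    dist = np.linalg.norm(peak - np.array(xyz))
    print(f"{name}: {dist:.1f} mm from peak")
```

All three distances come out on the order of a typical smoothing kernel (6-8 mm FWHM), so picking a label by proximity alone is closer to a coin flip than a conclusion.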

The point of all this, which begins to lead into my critique of TNC's critique, is that it is not a simple matter of putting one's foot down and claiming that the lack of an expected activation, or the presence of an unexpected one, is damning or indicative of bad science. It's an inherent problem in a field where hundreds of papers are published monthly, each with massive tables of activation foci. To say that a study has gone awry because it doesn't report your favorite area misses the point. What's more important is to evaluate the methods and explain the totality of the findings reported.

So that's one huge issue confronting most researchers. Although there are some open-source foci databases out there, they are underused and hard to rely on. One can of course try to pinpoint the exact area, but in reality the chance that you'll have such a focused blob is pretty unlikely. Rather, researchers have to rely on extra-scanner measures and common sense to make any kind of interesting theoretical inference from fMRI. This post was meant to be a response to The Neurocritic, who took issue with my taking issue with his taking issue with a certain vegetarian fMRI study… but I'm already an hour late coming home from work and I'm afraid I've failed to deliver. I did take the time this afternoon to go thoroughly through both the paper and TNC's response, however, and I think I've got a pretty compelling argument. Next time: why the Neurocritic is plain wrong ;)

Slides for my Zombies or Cyborgs Talk



My MA Thesis: The Body in Action: Intention, Action-Consciousness, & Compulsion

Presenting my master's thesis. I hope someone out there enjoys it.

Synaptic Adaptation to Environmental Alteration

From Quartz & Sejnowski: Neural Basis of Cognitive Development (1997)

Quartz, S. R., & Sejnowski, T. J. (1997). The neural basis of cognitive development: A constructivist manifesto. Behavioral and Brain Sciences, 20(4), 537–556; discussion 556–596.

Above you see an excellent summary table from a seminal work by Quartz and Sejnowski. I'm reading this paper now, and aside from the die-hard representationalist instincts of the authors, it is an excellent overview of the development of neuroplasticity research and the relation of various forms of plasticity to learning and cognition. I find the table fascinating simply because it demonstrates in one tidy arena the scope and temporal shape of brain development. You see, for example, infamous studies in which the eyes of rats are sutured shut at birth, alongside equally high-impact studies in which alterations in environmental complexity alter synaptic densities.

Overall, this is a list of studies in which the alteration of sensorimotor input alters synaptic density and complexity in a dynamical fashion. I find it particularly interesting that the overall direction appears to be one in which increased complexity equals increased density. One standout result is Valverde (1971), where a 20-day period of darkness is synaptically overcome when the mice are returned to a normal environment. Overall, this table is a historically stunning account of the resilience of neural systems.

One big question, though: why has it taken so long for plasticity to make its way into mainstream neuroscientific acceptance? Clearly the data were there… I guess we needed fancy magnets to believe in it!
