Neuroconscience

The latest thoughts, musings, and data in cognitive science and neuroscience.


MOOC on non-linear approaches to social and cognitive sciences. Votes needed!

My colleagues at Aarhus University have put together a fascinating proposal for a Massive Open Online Course (MOOC) on “Analyzing Behavioral Dynamics: non-linear approaches to social and cognitive sciences”. I’ve worked with Riccardo and Kristian since my master’s, and I can promise you the course will be excellent. They’ve spent the past 5 years exhaustively pursuing methodology in non-linear dynamics, graph-theoretical, and semantic/semiotic analyses, and I think they will have a lot of interesting practical insights to offer. Best of all, the course is free to all, as long as it gets enough votes on the MPF website. I’ve been a bit on the fence regarding my feelings about MOOCs, but in this case I think it’s really a great opportunity to give novel methodologies more exposure. Check it out- if you like it, give them a vote and consider joining the course!

https://moocfellowship.org/submissions/analyzing-behavioral-dynamics-non-linear-approaches-to-social-and-cognitive-sciences

Course Description

In the last decades, the social sciences have come to confront the temporal nature of human behavior and cognition: How do changes in heartbeat underlie emotions? How do we regulate our voices in a conversation? How do groups develop coordinative strategies to solve complex problems together?
This course enables you to tackle these sorts of questions: it addresses methods of analysis from nonlinear dynamics and complexity theory, which are designed to find and characterize patterns in this kind of complicated data. Traditionally developed in fields like physics and biology, non-linear methods are often neglected in the social and cognitive sciences.

The course consists of two parts:

  1. The dynamics of behavior and cognition
    In this part of the course you are introduced to some examples of human behavior that challenge the assumptions of linear statistics: reading time, voice dynamics in clinical populations, etc. You are then shown step-by-step how to characterize and quantify patterns and temporal dynamics in these behaviors using non-linear methods, such as recurrence quantification analysis.
  2. The dynamics of interpersonal coordination
    In this second part of the course we focus on interpersonal coordination: how do people manage to coordinate action, emotion, and cognition? We consider several real-world cases: heartbeat synchronization during firewalking rituals, voice adaptation during conversations, and joint problem solving in creative tasks – such as building Lego models together. You are then shown ways to analyze how two or more behaviors are coordinated and how to characterize their coupling – or lack thereof.

This course provides a theoretical and practical introduction to non-linear techniques for social and cognitive sciences. It presents concrete case studies from actual research projects on human behavior and cognition. It encourages you to put all this to practice via practical exercises and quizzes. By the end of this course you will be fully equipped to go out and do your own research projects applying non-linear methods on human behavior and coordination.
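For a flavor of the methods the course covers, here is a minimal sketch of recurrence quantification in plain NumPy. The embedding parameters, threshold radius, and test signals are all illustrative assumptions of mine, not the course materials:

```python
import numpy as np

def recurrence_matrix(x, dim=3, delay=2, radius=0.5):
    """Binary recurrence plot from a time-delay embedding of a 1-D series."""
    n = len(x) - (dim - 1) * delay
    emb = np.column_stack([x[i * delay : i * delay + n] for i in range(dim)])
    # Pairwise distances between embedded states; recurrent if within radius
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    return (d < radius).astype(int)

def recurrence_rate(rp):
    """Fraction of recurrent points, excluding the trivial main diagonal."""
    n = rp.shape[0]
    return (rp.sum() - n) / (n * n - n)

# A noisy periodic signal revisits its own states far more often than noise
t = np.linspace(0, 8 * np.pi, 400)
rng = np.random.default_rng(0)
periodic = np.sin(t) + 0.1 * rng.standard_normal(t.size)
noise = rng.standard_normal(t.size)
print(recurrence_rate(recurrence_matrix(periodic)))
print(recurrence_rate(recurrence_matrix(noise)))
```

Measures like determinism and laminarity build on this same matrix by counting diagonal and vertical line structures in it.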

Learning objectives

  • Given a timeseries (e.g. a speech recording, or a sequence of reaction times), characterize its patterns: does it contain repetitions? How stable are they? How complex is it?
  • Given a timeseries (e.g. a speech recording, or a sequence of reaction times), characterize how it changes over time.
  • Given two timeseries (e.g. the movements of two dancers) characterize their coupling: how do they coordinate? Do they become more similar over time? Can you identify who is leading and who is following?
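The third objective- identifying who is leading and who is following- can be roughly approximated with a lagged cross-correlation. This is a toy sketch on simulated signals of my own invention, not the specific method the course will teach:

```python
import numpy as np

def lagged_xcorr(a, b, max_lag=20):
    """Normalized cross-correlation of b against a at each lag.

    A peak at a positive lag means a leads b by that many samples."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    lags = np.arange(-max_lag, max_lag + 1)
    r = [np.corrcoef(a[max(0, -k) : len(a) - max(0, k)],
                     b[max(0, k) : len(b) - max(0, -k)])[0, 1] for k in lags]
    return lags, np.array(r)

# Simulate a "leader" movement signal and a noisy "follower" delayed by 5
rng = np.random.default_rng(1)
leader = np.convolve(rng.standard_normal(500), np.ones(10) / 10, mode="same")
follower = np.roll(leader, 5) + 0.05 * rng.standard_normal(500)
lags, r = lagged_xcorr(leader, follower)
print(lags[np.argmax(r)])  # peak should sit near the imposed 5-sample delay
```

Real coordination data are rarely this clean, which is exactly why the recurrence-based methods above exist.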

MOOC relevance

Social and cognitive research is increasingly investigating phenomena that are temporally unfolding and non-linear. However, most educational institutions only offer courses in linear statistics for social scientists. Hence, there is a need for an easy-to-understand introduction to non-linear analytical tools, specifically aimed at the social and cognitive sciences. The combination of actual cases and concrete tools to analyze them will give the course a wide appeal.
Additionally, methods-oriented courses on MOOC platforms such as Coursera have generally proved very attractive to students.

Please spread the word about this interesting course!

Quick post – Dan Dennett’s Brain talk on Free Will vs Moral Responsibility

As a few people have asked me to give some impression of Dan’s talk at the FIL Brain meeting today, I’m just going to jot my quickest impressions before I run off to the pub to celebrate finishing my dissertation today. Please excuse any typos as what follows is unedited! Dan gave a talk very similar to his previous one several months ago at the UCL philosophy department. As always Dan gave a lively talk with lots of funny moments and appeals to common sense. Here the focus was more on the media activities of neuroscientists, with some particularly funny finger wagging at Patrick Haggard and Chris Frith. Some good bits were his discussion of evidence that priming subjects against free will seems to make them more likely to commit immoral acts (cheating, stealing) and a very firm statement that neuroscience is being irresponsible, complete with bombastic anti-free-will quotes by the usual suspects.

Although I am a bit rusty on the mechanics of the free will debate, Dennett essentially argued for a compatibilist view of free will and determinism. The argument goes something like this: the basic idea that free will is incompatible with determinism comes from a mythology that says that in order to have free will, an agent must be wholly unpredictable. Dennett argues that this is absurd; we only need to be somewhat unpredictable. Rather than being perfectly random free agents, Dennett argues that what really matters is moral responsibility pragmatically construed. Dennett lists a “spec sheet” for constructing a morally responsible agent including “could have done otherwise, is somewhat unpredictable, acts for reasons, is subject to punishment…”. In essence Dan seems to be claiming that neuroscientists don’t really care about “free will”; rather we care about the pragmatic limits within which we feel comfortable entering into legal agreements with an agent.
Thus the job of the neuroscientist is not to try to reconcile the folk and scientific views of “free will”, which isn’t interesting (on Dennett’s account) anyway, but rather to describe the conditions under which an agent can be considered morally responsible. The take-home message seemed to be that moral responsibility is essentially a political rather than metaphysical construct. I’m afraid I can’t go into terrible detail about the supporting arguments- to be honest, Dan’s talk was extremely short on argumentation. The version he gave to the philosophy department was much heavier on technical argumentation, particularly centered around proving that compatibilism doesn’t contradict “it could have been otherwise”. In all, the talk was very pragmatic, and I do agree with the conclusions to some degree- that we ought to be more concerned with the conditions and function of “will” and not argue so much about the metaphysics of “free”. Still, my inner philosopher felt that Dan is embracing some kind of basic logical contradiction and hand-waving it away with funny intuition pumps, which for me are typically unsatisfying.

For reference, here is the abstract of the talk:

Nothing—yet—in neuroscience shows we don’t have free will

Contrary to the recent chorus of neuroscientists and psychologists declaring that free will is an illusion, I’ll be arguing (not for the first time, but with some new arguments and considerations) that this familiar claim is so far from having been demonstrated by neuroscience that those who advance it are professionally negligent, especially given the substantial social consequences of their being believed by lay people. None of the Libet-inspired work has the drastic implications typically adduced, and in fact the Soon et al (2008) work, and its descendants, can be seen to demonstrate an evolved adaptation to enhance our free will, not threaten it. Neuroscientists are not asking the right questions about free will—or what we might better call moral competence—and once they start asking and answering the right questions we may discover that the standard presumption that all “normal” adults are roughly equal in moral competence and hence in accountability is in for some serious erosion. It is this discoverable difference between superficially similar human beings that may oblige us to make major revisions in our laws and customs. Do we human beings have free will? Some of us do, but we must be careful about imposing the obligations of our good fortune on our fellow citizens wholesale.

Enactive Bayesians? Response to “the brain as an enactive system” by Gallagher et al

Shaun Gallagher has a short new piece out with Hutto, Slaby, and Cole, and I felt compelled to comment on it. Shaun was my first mentor and is to thank for my understanding of what is at stake in a phenomenological cognitive science. I jumped on this piece when it came out because, as I’ve said before, enactivists often leave a lot to be desired when talking about the brain. That is to say, they more often than not leave it out entirely and focus instead on bodies, cultural practices, and other parts of our extra-neural milieu. As a neuroscientist who is enthusiastically sympathetic to the embodied, enactive approach to cognition, I find this worrisome. Which is to say that when I’ve tried to conduct “neurophenomenological” experiments, I often feel a bit left in the rain when it comes time to construct, analyze, and interpret the data.

As an “enactive” neuroscientist, I often find the de-emphasis of brains a bit troubling. For one thing, the radically phenomenological crew tends to make a lot of claims to altering the foundations of neuroscience. Things like information processing and mental representation are said to be stale, Cartesian constructs that lack ontological validity and ought to be replaced. This is fine- I’m totally open to the limitations of our current explanatory framework. However, as I’ve argued here, I believe neuroscience still has great need of these tools and that dynamical systems theory is not ready for prime-time neuroscience. We need a strong positive account of what we should replace them with, and that account needs to act as a practical and theoretical guide to discovery.

One worry I have is that enactivism quickly begins to look like a constructivist version of behaviorism, focusing exclusively on behavior to the exclusion of the brain. Of course I understand that this is a bit unfair; enactivism is about taking a dynamical, encultured, phenomenological view of the human being seriously. Yet I believe to accomplish this we must also understand the function of the nervous system. While enactivists will often give token credit to the brain- affirming that it is indeed an ‘important part’ of the cognitive apparatus- they seem quick to value things like clothing and social status over gray matter. Call me old-fashioned, but you could strip me of my job, titles, and clothing tomorrow and I’d still be capable of 80% of whatever I was before. Granted, my cognitive system would undergo a good deal of strain, but I’d still be fully capable of vision, memory, speech, and even consciousness. The same can’t be said of me if you start magnetically stimulating my brain in interesting and devious ways.

I don’t want to get derailed arguing about the explanatory locus of cognition, as I think one’s stance on the matter largely comes down to whatever your intuition pump tells you is important. We could argue about it all day; what matters more than where in the explanatory hierarchy we place the brain is how that framework lets us predict and explain neural function and behavior. This is where I think enactivism often fails; it’s all fire and bluster (and rightfully so!) when it comes to the philosophical weaknesses of empirical cognitive science, yet mumbles and missteps when it comes to giving positive advice to scientists. I’m all for throwing out the dogma and getting phenomenological, but only if there’s something useful ready to replace the methodological bathwater.

Gallagher et al’s piece starts:

 “… we see an unresolved tension in their account. Specifically, their questions about how the brain functions during interaction continue to reflect the conservative nature of ‘normal science’ (in the Kuhnian sense), invoking classical computational models, representationalism, localization of function, etc.”

This is quite true and an important tension throughout much of the empirical work done under the heading of enactivism. In my own group we’ve struggled to go from the inspiring war cries of anti-representationalism and interaction theory to the hard constraints of neuroscience. It often happens that while the story or theoretical grounding is suitably phenomenological and enactive, the methodology and its interpretation are necessarily cognitivist in nature.

Yet I think this difficulty points to the more difficult task ahead if enactivism is to succeed. Science is fundamentally about methodology, and methodology reflects and is constrained by one’s ontological/explanatory framework. We measure reaction times and neural signal lags precisely because we buy into a cognitivist framework, which holds that more complex computations take longer to process and recruit greater neural resources. The catch is, without these things it’s not at all clear how we are to construct, analyze, and interpret our data. As Gallagher et al correctly point out, when you set out to explain behavior with these tools (reaction times and brain scanners), you can’t really claim to be doing some kind of radical enactivism:

 “Yet, in proposing an enactive interpretation of the MNS Schilbach et al. point beyond this orthodox framework to the possibility of rethinking, not just the neural correlates of social cognition, but the very notion of neural correlate, and how the brain itself works.”

We’re all in agreement there: I want nothing more than to understand exactly how it is our cerebral organ accomplishes the impressive feats of locomotion, perception, homeostasis, and so on right up to consciousness and social cognition. Yet I’m a scientist and no matter what I write in my introduction I must measure something- and what I measure largely defines my explanatory scope. So what do Gallagher et al offer me?

 “The enactive interpretation is not simply a reinterpretation of what happens extra-neurally, out in the intersubjective world of action where we anticipate and respond to social affordances. More than this, it suggests a different way of conceiving brain function, specifically in non-representational, integrative and dynamical terms (see e.g., Hutto and Myin, in press).”

Ok, so I can’t talk about representations. Presumably we’ll call them “processes” or something like that. Whatever we call them, neurons are still doing something, and that something is important in producing behavior. Integrative- I’m not sure what that means, but I presume it means that whatever neurons do, they do it across sensory and cognitive modalities. Finally we come to dynamical- here is where it gets really tricky. Dynamical systems theory (DST) is an incredibly complex mathematical framework dealing with topology, fluid dynamics, and chaos theory. Can DST guide neuroscientific discovery?

This is a tough question. My own limited exposure to DST prevents me from making hard conclusions here. For now let’s set it aside- we’ll come back to it in a moment. First I want to get a better idea of how Gallagher et al characterize contemporary neuroscience, the source of this tension in Schilbach et al:

Functional MRI technology goes hand in hand with orthodox computational models. Standard use of fMRI provides an excellent tool to answer precisely the kinds of questions that can be asked within this approach. Yet at the limits of this science, a variety of studies challenge accepted views about anatomical and functional segregation (e.g., Shackman et al. 2011; Shuler and Bear 2006), the adequacy of short-term task-based fMRI experiments to provide an adequate conception of brain function (Gonzalez-Castillo et al. 2012), and individual differences in BOLD signal activation in subjects performing the same cognitive task (Miller et al. 2012). Such studies point to embodied phenomena (e.g., pain, emotion, hedonic aspects) that are not appropriately characterized in representational terms but are dynamically integrated with their central elaboration.

Claim one is what I’ve just argued above, that fMRI and similar tools presuppose computational cognitivism. What follows I feel is a mischaracterization of cognitive neuroscience. First we have the typical bit about functional segregation being extremely limited. It surely is and I think most neuroscientists today would agree that segregation is far from the whole story of the brain. Which is precisely why the field is undeniably and swiftly moving towards connectivity and functional integration, rather than segregation. I’d wager that for a few years now the majority of published cogneuro papers focus on connectivity rather than blobology.

Next we have a sort of critique of the use of focal cognitive tasks. This almost seems like a critique of science itself; while certainly not without limits, neuroscientists rely on such tasks in order to make controlled assessments of phenomena. There is nothing a priori that says a controlled experiment is necessarily cognitivist any more than a controlled physics experiment must necessarily be Newtonian rather than relativistic. And again, I’d characterize contemporary neuroscience as being positively in love with “task-free” resting-state fMRI. So I’m not sure what this criticism is aimed at.

Finally there is this bit about individual differences in BOLD activation. This one I think is really a red herring; there is nothing in fMRI methodology that prevents scientists from assessing individual differences in neural function and architecture. The group I’m working with in London specializes in exactly this kind of analysis, which is essentially just creating regression models with neural and behavioral independent and dependent variables. There certainly is a lot of variability in brains, and neuroscience is working hard and making strides towards understanding those phenomena.
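The analysis described above- a regression model with neural and behavioral variables- is easy to sketch. Here is a toy version on simulated data; the effect size, noise level, and sample size are all invented for the illustration:

```python
import numpy as np

# Toy individual-differences analysis: regress a behavioral score on a
# per-subject neural measure (e.g. a mean BOLD beta in a region of interest).
# All data are simulated; the 0.6 effect size is an arbitrary assumption.
rng = np.random.default_rng(42)
n_subjects = 60
bold_beta = rng.standard_normal(n_subjects)                  # neural predictor
behavior = 0.6 * bold_beta + 0.8 * rng.standard_normal(n_subjects)

# Ordinary least squares with an intercept term
X = np.column_stack([np.ones(n_subjects), bold_beta])
coef, *_ = np.linalg.lstsq(X, behavior, rcond=None)
intercept, slope = coef
print(f"estimated slope: {slope:.2f}")  # should land near the true 0.6
```

Nothing in this pipeline cares whether the predictor is a blob, a connectivity value, or a dynamical measure- which is the point: fMRI methodology does not preclude individual differences.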

 “Consider also recent challenges to the idea that so-called “mentalizing” areas (“cortical midline structures”) are dedicated to any one function. Are such areas activated for mindreading (Frith and Frith 2008; Vogeley et al. 2001), or folk psychological narrative (Perner et al. 2006; Saxe & Kanwisher 2003); a default mode (e.g., Raichle et al. 2001), or other functions such as autobiographical memory, navigation, and future planning (see Buckner and Carroll 2006; 2007; Spreng, Mar and Kim 2008); or self-related tasks (Northoff & Bermpohl 2004); or, more general reflective problem solving (Legrand and Ruby 2010); or are they trained up for joint attention in social interaction, as Schilbach et al. suggest; or all of the above and others yet to be discovered?”

I guess this paragraph is supposed to get us thinking that these seem really different, so clearly the localizationist account of the MPFC fails. But as I’ve just said, this is for one a bit of a red herring- most neuroscientists no longer believe exclusively in a localizationist account. In fact more and more I hear top neuroscientists disparaging overly blobological accounts and referring to prefrontal cortex as a whole. Functional integration is here to stay. Further, I’m not sure I buy their argument that these functions are so disparate- it seems clear to me that they all share a social, self-related core probably related to the default mode network.

Finally, Gallagher and company set out to define what we should be explaining- behavior as “a dynamic relation between organisms, which include brains, but also their own structural features that enable specific perception-action loops involving social and physical environments, which in turn effect statistical regularities that shape the structure of the nervous system.” So we do want to explain brains, but we want to understand that their setting configures both neural structure and function. Fair enough, I think you would be hard pressed to find a neuroscientist who doesn’t agree that factors like environment and physiology shape the brain. [edit: thanks to Bryan Patton for pointing out in the comments that Gallagher's description of behavior here is strikingly similar to accounts given by Friston's Free Energy Principle predictive coding account of biological organisms]

Gallagher asks then, “what do brains do in the complex and dynamic mix of interactions that involve full-out moving bodies, with eyes and faces and hands and voices; bodies that are gendered and raced, and dressed to attract, or to work or play…?” I am glad to see that my former mentor and I agree at least on the question at stake, which seems to be, what exactly is it brains do? And we’re lucky in that we’re given an answer by Gallagher et al:

“The answer is that brains are part of a system, along with eyes and face and hands and voice, and so on, that enactively anticipates and responds to its environment.”

Me reading this bit: “yep, ok, brains, eyeballs, face, hands, all the good bits. Wait- what?” The answer is “… a system that … anticipates and responds to its environment.” Did Karl Friston just enter the room? Because it seems to me like Gallagher et al are advocating a predictive coding account of the brain [note: see clarifying comment by Gallagher, and my response below]! If brains anticipate their environment, then that means they are constructing a forward model of their inputs. A forward model is a Bayesian statistical model that estimates posterior probabilities of a stimulus from prior predictions about its nature. We could argue all day about what to call that model, but clearly what we’ve got here is a brain using strong internal models to make predictions about the world. What exactly is “enactive” about these forward models remains extremely ambiguous.

On this point, Gallagher includes “How an agent responds will depend to some degree on the overall dynamical state of the brain and the various, specific and relevant neuronal processes that have been attuned by evolutionary pressures, but also by personal experiences” as a description of how a prediction can be enactive. But none of this is precluded by the predictive coding account of the brain. The overall dynamical state (intrinsic connectivity?) of the brain amounts to noise that must be controlled through increasing neural gain and precision. That is, a Bayesian model presupposes that the brain is undergoing exactly these kinds of fluctuations and takes steps to produce optimal behavior in the face of such noise.
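The precision-weighting logic is easy to make concrete. Below is a minimal sketch of a single Gaussian belief update- the simplest possible “forward model”- in which the relative precisions of prior and input determine how far the posterior moves. This is a textbook toy of my own, not Friston’s full scheme:

```python
def gaussian_update(prior_mean, prior_prec, obs, obs_prec):
    """Posterior of a Gaussian prior after one noisy Gaussian observation.

    Precisions (inverse variances) weight prediction against sensory
    evidence: the noisier the input, the more the posterior stays with
    the prior prediction."""
    post_prec = prior_prec + obs_prec
    post_mean = (prior_prec * prior_mean + obs_prec * obs) / post_prec
    return post_mean, post_prec

# Same observation, different sensory precision
trust_senses = gaussian_update(0.0, 1.0, obs=2.0, obs_prec=4.0)
trust_prior = gaussian_update(0.0, 1.0, obs=2.0, obs_prec=0.25)
print(trust_senses[0])  # 1.6 - posterior pulled toward the data
print(trust_prior[0])   # 0.4 - posterior stays near the prior
```

Increasing neural gain, on this account, is just turning up `obs_prec`: the same fluctuating input comes to dominate the agent’s estimate.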

Likewise, the Bayesian model is fully hierarchical- at all levels of the system, local neural function is constrained and configured by predictions and error signals from the levels above and below it. In this sense, global dynamical phenomena like neuromodulation structure prediction in ways that constrain local dynamics. These relationships can be fully non-linear and dynamical in nature (see Friston 2009 for review). Of the other bits- evolution and individual differences- Karl would surely say that the former leads to variation in first priors and the latter is the product of agents optimizing their behavior in a variable world.

So there you have it- enactivist cognitive neuroscience is essentially Bayesian neuroscience. If I want to fulfill Gallagher et al’s prescriptions, I need merely use resting-state, connectivity, and predictive coding analysis schemes. Yet somehow I think this isn’t quite what they meant- and there, for me, lies the true tension in ‘enactive’ cognitive neuroscience. But maybe it is- Andy Clark recently went Bayesian, claiming that extended cognition and predictive coding are totally compatible. Maybe it’s time to put away the knives and stop arguing about representations. Yet I think an important tension remains: can we explain all the things Gallagher et al list as important using prior and posterior probabilities? I’m not totally sure, but I do know one thing- these concepts make it a hell of a lot easier to actually analyze and interpret my data.

fake edit:

I said I’d discuss DST, but ran out of space and time. My problem with DST boils down to this: it’s descriptive, not predictive. As a scientist it is not clear to me how one actually applies DST to a given experiment. I don’t see any kind of functional ontology emerging by which to apply the myriad of DST measures in a principled way. Mental chronometry may be hokey and old-fashioned, but it’s easy to understand and can be applied to data and interpreted readily. This is a huge limitation for a field as complex as neuroscience, and as rife with bad data. A leading dynamicist once told me that in his entire career “not one prediction he’d made about (a DST measure/experiment) had come true,” and that to apply DST one just needed to “collect tons of data and then apply every measure possible until one seemed interesting.” To me this is a data-fishing nightmare and does not represent a reliable guide to empirical discovery.

What are the critical assumptions of neuroscience?

In light of all the celebration surrounding the discovery of a Higgs-like particle, I found it amusing that nearly 30 years ago Higgs’ theory was rejected by CERN as ‘outlandish’. This got me to wondering, just how often is scientific consensus a bar to discovery? Scientists are only human, and as such can be just as prone to blind spots, biases, and herding behaviors as other humans. Clearly the scientific method and scientific consensus (e.g. peer review) are the tools we rely on to surmount these biases. Yet every tool has its misuse, and sometimes the wisdom of the crowd is just the aggregate of all these biases.

At this point, David Zhou pointed out that when scientific consensus leads to rejection of correct viewpoints, it’s often due to the strong implicit assumptions that the dominant paradigm rests upon. Sometimes there are assumptions that support our theories which, due to a lack of either conceptual or methodological sophistication, are not amenable to investigation. Other times we simply don’t see them; when Chomsky famously wrote his review of Skinner’s verbal behavior, he simply put together all the pieces of the puzzle that were floating around, and in doing so destroyed a 20-year scientific consensus.

Of course, as a cognitive scientist studying the brain, I often puzzle over what assumptions I critically depend upon to do my work. In an earlier stage of my training, I was heavily steeped in ideas from the “embodied, enactive, extended” framework, where it is common to claim that the essential bias is an uncritical belief in the representational theory of mind. While I do still see problems in mainstream information theory, I’m no longer convinced that an essentially internalist, predictive-coding account of the brain is without merit. It seems to me that the “revolution” of externalist viewpoints turned out to be more of an exercise in housekeeping, moving us beyond overly simplistic “just-so” evolutionary diatribes and empty connectionism, to introducing concepts from dynamical systems into information-theoretic accounts of cognition.

So, really I’d like to open this up: what do you think are the assumptions neuroscientists cannot live without? I don’t want to shape the discussion too much, but here are a few starters off the top of my head:

  • Nativism: informational constraints are heritable and innate, learning occurs within these bounds
  • Representation: Physical information is transduced by the senses into abstract representations for cognition to manipulate
  • Frequentism: While many alternatives currently abound, for the most part I think many mainstream neuroscientists are crucially dependent on assessing differences in mean and slope. A related issue is a tendency to view variability as “noise”
  • Mental Chronometry: related to the representational theory of mind is the idea that more complex representations take longer to process and require more resources. Thus greater (BOLD/ERP/RT) equals a more complex process.
  • Evolution: for a function to exist, it must have been selected for by natural selection

That’s all off the top of my head. What do you think? Are these essential for neuroscience? What might a cognitive theory look like without them, and how could it motivate empirical research? For me, each of these is in some way quite helpful in providing a framework to interpret reaction-time, BOLD, or other cognition-related data. Have I missed any?

Top tips for new experimenters

I set out to write my top five tips for new experimenters today and found there were really only two universal suggestions I felt I could and should make:

  1. Simplify your design. There is no complex question that can’t be better asked with a simple one. A simple design means stronger statistics, a clearer interpretation, and fewer variables to control. If you cannot phrase your core question in a sentence, you need to drastically reduce the scope of your experiment.
  2. Know your design. Before collecting the data, you should know exactly what kind of data it will be, how many variables there are, and what kind of statistical test you will use to analyze it. Then you need to collect 4-5 “throw-away” participants and run them through this analysis. This ensures that the data can be readily analyzed in a rigorous way, from start to finish. You will know you are ready when you’re looking at the successful results of a pilot, which will uncover study-killing bugs (of which there are MANY).
Those are honestly the two most important guidelines I can think of! Everything else is secondary to achieving those goals. If you pull those off, you’ll have beaten 80% of the crap that can destroy your data. In my experience the biggest mistake most people make when starting out is telling themselves that simple questions are not worth their time. You’ll build a more stable career by doing something less innovative but more solid, and knowing it more thoroughly. A lot of people don’t do this and end up with total shitpiles of worthless data- myself included. Don’t let bad data happen to you- simplify and know your design! I’d love to hear about your number-one tips in the comments!
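Tip #2 can be made concrete with a dry run: simulate data matching your design and push it through the exact planned analysis before a single participant is collected. Here is a minimal sketch; all the design numbers (subjects, trials, effect size) are placeholder assumptions for the example:

```python
import numpy as np

# Dry run: simulate the design, then run the exact planned analysis
# (a paired t-test, hand-rolled here) before collecting any real data.
rng = np.random.default_rng(7)
n_subjects, n_trials = 20, 40

# Per-subject mean reaction times in two conditions (assumed 30 ms effect)
cond_a = rng.normal(500, 50, (n_subjects, n_trials)).mean(axis=1)
cond_b = rng.normal(530, 50, (n_subjects, n_trials)).mean(axis=1)

# Paired t statistic, exactly as it will be computed on the real data
diff = cond_b - cond_a
t = diff.mean() / (diff.std(ddof=1) / np.sqrt(n_subjects))
print(f"simulated paired t({n_subjects - 1}) = {t:.2f}")
```

If this script cannot be written yet- because you don’t know the shape of your data or your test- you are not ready to collect.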
Edit: Tip #3 comes from a great comment by Neuroskeptic:

Good post. I would add a #3, it’s kind of an aspect of #2 although important enough to stand alone -

Make sure you are one of the pilot subjects. It’s amazing what kind of things you notice when you’re actually in the scanner that you never otherwise would – anything from the fact that the stimuli aren’t very visible, to the fact that the sequence you’re using makes the bed shake, to the fact that the task is just so long & boring that you fall asleep by the end (which is so much easier in the scanner than when you’re sitting up at a computer, which is when you probably piloted the task!)

If you’re not MRI safe, get a trusted fellow researcher to do it. But never assume that non-scientist volunteers will tell you these things because they don’t (I think because they don’t want to look stupid by questioning your authority.)

Neuroscientists: What’s the most interesting question right now?

After 20 years of cognitive neuroscience, I sometimes feel frustrated by how little progress we’ve made. We still struggle with basic issues, like how to ask a subject whether he’s in pain, or what exactly our multi-million dollar scanners measure. We lack a unifying theory linking information, psychological function, and neuroscientific measurement. We still publish all kinds of voodoo correlations, uncorrected p-values, and poorly operationalized blobfests. Yet we’ve also laid some of the most important foundational research of our time. In twenty years we’ve mapped a mind-boggling array of cognitive functions. Some of these attempts at localization may not hold; others may be built on shaky functional definitions or downright poor methods. Even in the face of this uncertainty, the sheer number and variety of functions that have been mapped is inspiring. Further, we’ve developed analytic tools that pave the way for an exciting new decade of multi-modal and connectomic research. Developments like resting-state fMRI, optogenetics, real-time fMRI, and multi-modal imaging make this a very exciting time to be a cognitive neuroscientist!

Online, things can seem a bit more pessimistic. Snarky methods blogs dedicated to revealing the worst in the field tend to do well, and nearly any social-media-savvy neurogeek will lament the depressing state of science journalism about the brain. While I am also tired of incessantly phrenological, blob-obsessed reports (“research finds god spot in the brain, are your children safe??”), I think we share some of the blame for not communicating properly about what interests and challenges us. For me, some of the most exciting areas of research are those concerned with getting straight about what our measurements mean- see the debates over noise in resting state or the neural underpinnings of the BOLD signal, for example. Yet these issues are often reported as dry methodological notes, the writers themselves seemingly bored with the topic.

We need to do a better job illustrating to people just how complex our field is, and how much it is still in its infancy. The big, sexy issues are methodological in nature. They’re also phenomenological in nature. Right now neuroscience is struggling to define itself, unsure whether we should be asking our subjects how they feel or anesthetizing them. I believe that if we can illustrate just how tenuous much of our research is, including the really nagging problems, the public will better appreciate seemingly nuanced issues like rest-stimulus interaction and noise regression.

With that in mind- what are your most exciting questions, right now? What nagging thorn ails you at all steps in your research?

For me, the most interesting and nagging question is: what do people do when we ask them to do nothing? I’m talking about rest-stimulus interaction and mind wandering. There seem to be two prevailing (pro-resting-state) views: that default mode network-related activity is related to subjective mind-wandering, and/or that it’s a form of global, integrative, stimulus-independent neural variability. On the first view, variability in participants’ ability to remain on-task drives slow alterations in behavior and stimulus-evoked brain activity. On the second, innate and spontaneous rhythms synchronize large brain networks in ways that alter stimulus processing and enable memory formation. Either way, we’re left with the idea that a large portion of our supposedly well-controlled, stimulus-related brain activity is in fact predicted by uncontrolled intrinsic brain activity. Perhaps even defined by it! When you consider that all this is contingent on the intrinsic activity being real brain activity and not some kind of vascular or astrocyte-driven artifact, every research paradigm becomes a question of rest-stimulus interaction!

So neuroscientists, what keeps you up at night?

A brave new default mode in meditation practitioners- or just confused controls? Review of Brewer (2011)

Given that my own work focuses on cognitive control, intrinsic connectivity, and mental-training (e.g. meditation) I was pretty excited to see Brewer et al’s paper on just these topics appear in PNAS just in time for the winter holidays. I meant to review it straight away but have been buried under my own data analysis until recently. Sadly, when I finally got around to delving into it, my overall reaction was lukewarm at best. Without further ado, my review of:

“Meditation experience is associated with differences in default mode network activity and connectivity

Abstract:

“Many philosophical and contemplative traditions teach that “living in the moment” increases happiness. However, the default mode of humans appears to be that of mind-wandering, which correlates with unhappiness, and with activation in a network of brain areas associated with self-referential processing. We investigated brain activity in experienced meditators and matched meditation-naive controls as they performed several different meditations (Concentration, Loving-Kindness, Choiceless Awareness). We found that the main nodes of the default mode network (medial prefrontal and posterior cingulate cortices) were relatively deactivated in experienced meditators across all meditation types. Furthermore, functional connectivity analysis revealed stronger coupling in experienced meditators between the posterior cingulate, dorsal anterior cingulate, and dorsolateral prefrontal cortices (regions previously implicated in self-monitoring and cognitive control), both at baseline and during meditation. Our findings demonstrate differences in the default-mode network that are consistent with decreased mind-wandering. As such, these provide a unique understanding of possible neural mechanisms of meditation.”

Summary:

Aims: 9/10

Methods: 5/10

Interpretation: 7/10

Importance/Generalizability: 4/10

Overall: 6.25/10

The good: simple, clear cut design, low amount of voodoo, relatively sensible findings

The bad: lack of behavioral co-variates to explain neural data, yet another cross-sectional design

The ugly: prominent reporting of uncorrected findings, comparison of meditation-naive controls to practitioners using meditation instructions (failure to control task demands).

Take-home: Some interesting conclusions, from a somewhat tired and inconclusive design. Poor construction of baseline condition leads to a shot-gun spattering of brain regions with a few that seem interesting given prior work. Let’s move beyond poorly controlled cross-sections and start unravelling the core mechanisms (if any) involved in mindfulness.

Extended Review:
Although this paper used typical GLM and functional connectivity analyses, it loses points in several areas. First, the authors repeatedly suggest that their “relative paucity of findings” may be “driven by the sensitivity of GLM analysis to fluctuations at baseline… and since meditation practitioners may be (meditating) at baseline…” the contrast between conditions would be weak. However, I will side with Jensen et al (2011) here in saying: meditation-naive controls receiving less than 5 minutes of instruction in “focused attention, loving-kindness and choiceless awareness” are simply no controls at all. Blaming the GLM for failing to detect differences that are quite obviously confounded by the lack of an appropriately controlled baseline is galling at best. The GLM approach depends on a meaningful baseline; it’s senseless to draw conclusions about brain activity when your baseline is no baseline at all. Telling meditation-naive controls to utilize esoteric cultural practices to which they have only just been introduced, and then comparing them to highly experienced practitioners, is a perfect storm of cognitive confusion and poorly controlled demand characteristics. Further, I am disappointed in the review process that allowed the following statement, “We found a similar pattern in the medial prefrontal cortex (mPFC), another primary node of the DMN, although it did not survive whole-brain correction for significance,” followed by this image:

image

These results are then referred to repeatedly in the following discussion. I’m sorry, but when did uncorrected findings suddenly become interpretable? I blame the reviewers here over the authors- they should have known better. The mPFC finding did not survive correction and hence should not be included in anything other than an explicitly labeled “exploratory analysis”. In fact, it’s totally unclear from the methods section of this paper how these findings were discovered at all: did the authors first examine the uncorrected maps and then re-analyze them using the FWE correction? Or did they reduce their threshold in an exploratory post-hoc fashion? These things make a difference, and I’m appalled that the reviewers let the article go to print as it is, when figure 1 and the discussion clearly give the non-fMRI-savvy reader the impression that a main finding of this study is mPFC activation during meditation. Can we please all agree to stop reporting uncorrected p-values?
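To see why this matters so much, here is a toy simulation (my own, not from the paper) of a purely null whole-brain map. With ~10,000 independent tests, a p < .05 uncorrected threshold guarantees hundreds of “activations” by chance alone, while a Bonferroni threshold (used here as a simple stand-in for a proper FWE correction) removes essentially all of them:

```python
import random
from statistics import NormalDist

rng = random.Random(0)
n_voxels = 10_000
alpha = 0.05
nd = NormalDist()

# Null data: z-values for voxels with no true effect anywhere.
z = [rng.gauss(0, 1) for _ in range(n_voxels)]

# Uncorrected: two-tailed p < .05 threshold.
z_unc = nd.inv_cdf(1 - alpha / 2)                 # about 1.96
# Bonferroni across all voxels (a crude stand-in for FWE correction).
z_bonf = nd.inv_cdf(1 - alpha / (2 * n_voxels))   # about 4.56

false_unc = sum(abs(v) > z_unc for v in z)
false_bonf = sum(abs(v) > z_bonf for v in z)
print(f"uncorrected: {false_unc} false positives; Bonferroni: {false_bonf}")
```

Roughly 5% of voxels (about 500 here) “light up” uncorrected despite there being nothing to find, which is exactly why an uncorrected mPFC blob carries so little evidential weight.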

I will give the authors this much; the descriptions of practice, and the theoretical guideposts are all quite coherent and well put-together. I found their discussion of possible mechanisms of DMN alteration in meditation to be intriguing, even if I do not agree with their conclusion. Still, it pains me to see a paper with so much potential fail to address the pitfalls in meditation research that should now be well known. Indeed the authors themselves make much ado about how difficult proper controls are, yet seem somehow oblivious to the poorly controlled design they here report. This leads me to my own reinterpretation of their data.

A new default mode, or confused controls?

Brewer et al (2011) report that, when using a verbally guided meditation instruction with meditation-naive controls and experienced practitioners, greater activations in PCC, temporal regions, and, for loving-kindness, the amygdala are found. Given strong evidence from colleagues Christian Jensen et al (2011) that these kinds of contrasts better represent differences in attentional effort than any mechanism inherent to meditation, I can’t help but wonder if what we’re seeing here is simply some controls trying to follow esoteric instructions and getting confused in the process. Consider the instruction for the choiceless awareness condition:

“Please pay attention to whatever comes into your awareness, whether it is a thought, emotion, or body sensation. Just follow it until something else comes into your awareness, not trying to hold onto it or change it in any way. When something else comes into your awareness, just pay attention to it until the next thing comes along”

Given that in most contemplative traditions choiceless awareness techniques are typically late-stage, advanced practices, in which the very concept of grasping at a stimulus is distinctly altered and laden with an often spiritual meaning, it seems obvious to me that such an instruction constitutes an excellent mind-wandering inducement for naive controls. Do you meditate? I do a little, and yet I find these instructions extremely difficult to follow without essentially sending my mind in a thousand directions. Am I doing this correctly? When should I shift? Is this a thought, or am I just feeling hungry? These things constitute mind-wandering, but for the controls, I would argue, they constitute following the instructions. The point is that you simply can’t make meaningful conclusions about the neural mechanisms involved in mindfulness from these kinds of instructions.

Finally, let’s examine the functional-connectivity analysis. To be honest, there isn’t a whole lot to report here; the functional connectivity during meditation is perhaps confounded by the same issues I list above, which seems to me a probable cause for the diverse spread of regions reported between controls and meditators. I did find this bit to be interesting:

“Using the mPFC as the seed region, we found increased connectivity with the fusiform gyrus, inferior temporal and parahippocampal gyri, and left posterior insula (among other regions) in meditators relative to controls during meditation (Fig. 3, Fig. S1H, and Table S3). A subset of those regions showed the same relatively increased connectivity in meditators during the baseline period as well (Fig. S1G and Table 1).”

I found it interesting that the meditation conditions appear to co-activate mPFC and insula, and would love to see this finding replicated in a properly controlled design. I also have a nagging wonder as to why the authors didn’t bother to conduct a second-level covariance analysis of their findings with the self-reported mind-wandering scores. If these findings accurately reflect meditation-induced alterations in the DMN, or as the authors more brazenly suggest “an entirely new default network”, wouldn’t we expect their PCC modulations to be predicted by individual variability in mind-wandering self-reports? Of course, we could open the whole can of worms that is “what does it mean when you ask participants if they ‘experienced mind-wandering’”, but I’ll leave that for a future review. At least the authors throw a bone to neurophenomenology, suggesting in the discussion that future work utilize first-person methodology. Indeed.
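The covariance analysis I’m asking for is not exotic. As a sketch, with entirely hypothetical per-subject numbers (nothing below is Brewer et al’s data; the variable names and the 0-10 rating scale are invented), it amounts to a simple brain-behavior correlation:

```python
import random
import statistics

def pearson_r(x, y):
    """Pearson correlation, written out for clarity."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

rng = random.Random(42)
# Hypothetical per-subject values: self-reported mind-wandering
# (say, a 0-10 scale) and a PCC beta from the first-level GLM.
mind_wandering = [rng.uniform(0, 10) for _ in range(20)]
pcc_beta = [0.3 * mw + rng.gauss(0, 1) for mw in mind_wandering]

r = pearson_r(mind_wandering, pcc_beta)
print(f"r = {r:.2f}")
```

If the DMN story is right, something like this correlation should be present in their sample; its absence from the paper is exactly the missing link between the neural and subjective claims.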

Last, it occurs to me that the primary finding, of increased DLPFC and ACC activity in meditators > controls, also fits well with my interpretation that this design is confounded by demand characteristics. If you take a naive subject and put them in the scanner with these instructions, I’ve argued that they’re probably going to do something a whole lot like mind-wandering. On the other hand, an experienced practitioner has a whole lot of implicit pressure on them to live up to their tradition. They know what they are there for, and hence they know that they should be doing their thing with as much effort as possible. So what does the contrast meditation > naive really give us? It gives us mind-wandering in the naive group and increased attentional effort in the practitioner group. We can’t conclude anything from this design regarding mechanisms intrinsic to mindfulness; I predict that if you constructed a similar setting with any kind of dedicated specialist, and gave instructions like “think about your profession, what it means to you, remember a time you did really well”, you would see the exact same kind of results. You just can’t compare the incomparable.

Disclaimer: as usual, I review in the name of science, and thank the authors wholeheartedly for the great effort and attention to detail that goes into these projects. It’s also worth mentioning that my own research focuses on many of these exact issues in mental-training research, and hence I’m probably a bit biased in what I view as important issues.

The 2011 Mind & Life Summer Research Institute: Are Monks Better at Introspection?

As I’m sitting in the JFK airport waiting for my flight to Iceland, I can’t help but let my mind wander over the curious events of this year’s summer research institute (SRI). The Mind & Life Institute, an organization dedicated to the integration and development of what they’ve dubbed “contemplative science”, holds the SRI each summer to bring together clinicians, neuroscientists, scholars, and contemplatives (mostly monks) in a format that is half conference and half meditation retreat. The summer research institute is always a ton of fun, and a great place to further one’s understanding of Buddhism & meditation while sharing valuable research insights.

I was lucky enough to receive a Varela award for my work in meta-cognition and mental training and so this was my second year attending. I chose to take a slightly different approach from my first visit, when I basically followed the program and did whatever the M&L thought was the best use of my time. This meant lots of meditation- more than two hours per day not including the whole-day, silent “mini-retreat”. While I still practiced this year, I felt less obliged to do the full program, and I’m glad I took this route as it provided me a somewhat more detached, almost outsider view of the spectacle that is the Mind & Life SRI.

When I say spectacle, it’s important to understand how unconventional of a conference setting the SRI really is. Each year dozens of ambitious neuroscientists and clinicians meet with dozens of Buddhist monks and western “mindfulness” instructors. The initial feeling is one of severe culture clash; ambitious young scholars who can hardly wait to mention their Ivy League affiliations meet with the austere and almost ascetic approach of traditional Buddhist philosophy. In some cases it almost feels like a race to “out-mindful” one another, as folks put on a great show of piety in order to fit into the mood of the event. It can be a bit jarring to oscillate between the supposed tranquility and selflessness of mindfulness with the unabashed ambition of these highly talented and motivated folk- at least until things settle down a bit.

Nonetheless, the overall atmosphere of the SRI is one of serenity and scholarship. It’s an extremely fun, stimulating event, rich with amazingly talented yoga and meditation instructors and attended by the top researchers within the field. What follows is thus not meant as any kind of attack on the overall mission of the M&L. Indeed, I’m more than grateful to the institute for carrying on at least some form of Francisco Varela’s vision for cognitive science, and of course for supporting my own meditation research. With that being said, we turn to the question at hand: are monks objectively better at introspection? The answer for nearly everyone at the SRI appears to be “yes”, regardless of the scarcity of data suggesting this to be the case.

Enactivism and Francisco Varela

[Image: Francisco Varela, founder of enactivism]

Before I can really get into this issue, I need to briefly review what exactly “enactivism” is and how it relates to the SRI. The Mind & Life Institute was co-founded by Francisco Varela, a Chilean biologist and neuroscientist who is often credited with the birth and success of embodied and enactive cognitive science. Varela had a profound impact on scientists, philosophers, and cognitive scientists, and is a central influence in my own theoretical work. Varela’s essential thesis was outlined in his book “The Embodied Mind”, in which Varela, Thompson, and Rosch attempted to outline a new paradigm for the study of mind. In the book, Varela et al rely on examples from cross-cultural psychology, continental phenomenology, Buddhism, and cognitive science to argue that cognition (and mind) is essentially an embodied, enactive phenomenon. The book has since spawned a generation of researchers dedicated in some way to the idea that cognition is not essentially, or at least foundationally, computational and representational in form.

I don’t here intend to get into the roots of what enactivism is; for the present it suffices to say that enactivism as envisioned by Varela involved a dedication to the “middle way” in which idealism-objectivism duality is collapsed in favor of a dynamical non-representational account of cognition and the world. I very much favor this view and try to use it productively in my own research. Varela argued throughout his life that cognition was not essentially an internal, info-processing kind of phenomenon, but rather an emergent and intricately interwoven entity that arose from our history of structural coupling with the world.  He further argued that cognitive science needed to develop a first-person methodology if it was to fully explain the rich panorama of human consciousness.

A simpler way to put this is to say that Varela argued persuasively that minds are not computers “parachuted into an objective world” and that cognition is not about sucking up impoverished information for representation in a subjective format. While Varela invoked much of Buddhist ontology, including concepts of “emptiness” and “inter-relatedness”, to develop his account, continental phenomenologists like Heidegger and Merleau-Ponty also heavily inspired his vision of 4th wave cognitive science. At the SRI there is little mention of this; most scholars are unaware of the continental literature, or of the fact that phenomenology is not equal to introspection. Indeed, I had to cringe when one to-be-unnamed young scientist declared a particular spinal pathway to be “the central pathway for embodiment”.

This is a stark misunderstanding of just what embodiment means, and I would argue renders it a relatively trivial add-on to the information processing paradigm- something most enactivists would like to strongly resist. I politely pointed the gentleman to the example work of Ulric Neisser, who argued for the ecological embodied self, in which the structure of the face is said to pre-structure human experience in particular ways, i.e. we typically experience ourselves as moving through the world toward a central fovea-centered point. Embodiment is an external, or pre-noetic structuring of the mind; it follows no nervous pathway but rather structures the possibilities of the nervous system and mind. I hope he follows that reference down the rabbit hole of the full possibilities of embodiment- the least of which is body-as-extra-module.

Still, I certainly couldn’t blame this particular scientist for his misunderstanding; nearly everyone at the SRI is totally unfamiliar with externalist/phenomenal perspectives, which is a sad state of affairs for a generation of scientists being supported by grants in Varela’s name. Regardless of Varela’s vision for cognitive science, his thesis regarding introspectionism is certainly running strong: first-person methodologies are the hot topic of the SRI, and nearly everyone agreed that by studying contemplative practitioners’ subjective reports, we’d gain some special insight into the mind. Bracketing whether introspection is what Varela really meant by neurophenomenology (I don’t think it is- phenomenology is not introspection), we are brought to the central question: are Buddhist practitioners expert introspectionists?

Expertise and Introspectionism

[Image: Expert introspectionists?]

Varela certainly believed this to some degree. It’s not entirely clear to me that the bulk of Varela’s work summates to this maxim, but it’s at least certainly true that in papers such as his seminal “Neurophenomenology: a methodological remedy for the hard problem” he argued that a careful first-person methodology could reap great benefits in this arena. Varela later followed up this theoretical thesis with his now well-known experiment conducted with then-PhD student and my current mentor Antoine Lutz.

While I won’t reproduce the details of this experiment at length here, Lutz and Varela demonstrated that it was in fact possible to inform and constrain electrophysiological measurements through the collection and systematization of first-person reports. It’s worth noting here that the participants in this experiment were everyday folks, not meditation practitioners, and that Lutz & Varela developed a special method to integrate the reports rather than simply taking them at face value. In fact, while Varela did often suggest that we might, through careful contemplation and collaboration with the Buddhist tradition, refine first-person methodologies and gain insight into the “hard problem”, he never did complete these experiments with practitioners, a fact that can likely be attributed to his premature death from aggressive hepatitis.

Regardless of Varela’s own work, it’s fascinating to me that at today’s SRI, if there is one thing nearly everyone seems to explicitly agree on, it’s that meditation practitioners have some kind of privileged access to experience. I can’t count how many discussions seemed to simply assume the truth of this, regardless of the fact that almost no empirical research has demonstrated any kind of increased meta-cognitive capacity or accuracy in adept contemplatives.

While Antoine and I are in fact running experiments dedicated to answering this question, the fact remains that this research is largely exploratory and without strong empirical leads to work from. While I do believe that some level of meditation practice can provide greater reliability and accuracy in meta-cognitive reports, I don’t see any reason to value the reports of contemplative practitioners above and beyond those of any other particular niche group. If I want to know what it’s like to experience baseball, I’m probably going to ask some professional baseball players and not a Buddhist monk. At several points during the SRI I tried to express just this sentiment; that studying Buddhist monks gives us a greater insight into what-it-is-like to be a monk and not much else. I’m not sure if I succeeded, but I’d like to think I planted a few seeds of doubt.

There are several reasons for this. First, I part with Varela where he assumes that the Buddhist tradition and/or “Buddhist Psychology” have particularly valuable insights (for example, emptiness) that can’t be gleaned from western approaches. It might, but I don’t buy into the idea that the Buddhist tradition is its own kind of scientific approach to the mind; it’s not- it’s religion. For me the middle way means a lifelong commitment to a kind of meta-physical agnosticism, and I refuse to believe that any human tradition has a vast advantage over another. This was never more apparent than during a particularly controversial talk by John Dunne, a Harvard contemplative scholar, whose keynote was dedicated to getting scientists like myself to go beyond the traditional texts and veridical reports of practitioners and to instead engage in what he called “trialogue” in order to discover “what it is practitioners are really doing”. At the end of his talk one of the Dalai Lama’s lead monks actually took great offense, scolding John for “misleading the youth with his western academic approach”. The entire debacle was a perfect case-in-point demonstration of John’s talk; one cannot simply take the word of highly religious practitioners as some kind of veridical statement about the world.

This isn’t to say that we can’t learn a great deal about experience, and the mind, through meditation and careful introspection. I think at an early level it’s enough to just sit with one’s breath and suspend beliefs about what exactly experience is. I do believe that in our modern lives we spend precious little time with the body and our minds, simply observing what arises in an impartial way. I agree with Sogyal Rinpoche that we are at times overly disembodied and away from ourselves. Yet this practice isn’t unique to Buddhism; the phenomenological reduction comes from Husserl and is a practice developed throughout continental phenomenology. I do think that Buddhism has developed some particularly interesting techniques to aid this process, such as Vipassana and compassion meditation, that can and will shed insights for the cognitive scientist interested in experience, and I hope that my own work will demonstrate as much.

But this is a very different claim from the one that says monastic Buddhists have a particularly special access to experience. At the end of the day I’m going to hedge my bets with the critical, empirical, and dialectical approach of cognitive science. In fact, I think there may be good reasons to suspect that high-level practitioners are inappropriate participants for “neurophenomenology”. Take for example, the excellent and controversial talk given this year by Willoughby Britton, in which she described how contemplative science had been too quick to sweep under the rug a vast array of negative “side-effects” of advanced practice. These effects included hallucination, de-personalization, pain, and extreme terror. This makes a good deal of sense; advanced meditation practice is less impartial phenomenology and more a rigorous ritualized mental practice embedded in a strong religious context. I believe that across cultures many religions share techniques, often utilizing rhythmic breathing, body postures, and intense belief priming to engender an almost psychedelic state in the practitioner.

What does this mean for cognitive science and enactivism? First, it means we need to respect cultural boundaries and not rush to put one cultural practice on top of the explanatory totem pole. This doesn’t mean cognitive scientists shouldn’t be paying attention to experience, or even practicing and studying meditation, but we have to be careful not to ignore the normativity inherent in any ritualized culture. Embracing this basic realization takes seriously individual and cultural differences in consciousness, something I’ve argued for and believe is essential for the future of 4th wave cognitive science. Neurophenomenology, among other things, should be about recognizing and describing the normativity in our own practices, not importing those of another culture wholesale. I think that this is in line with much of what Varela wrote, and luckily, the tools to do just this are richly provided by the continental phenomenological tradition.

I believe that by carefully bracketing meta-physical and normative concepts, and investigating the vast multitude of phenomenal experience in its full multi-cultural variety, we can begin to shed light on the mind-brain relationship in a meaningful and not strictly reductive fashion. Indeed, in answering the question “are monks expert introspectionists” I think we should carefully question the normative thesis underlying that hypothesis- what exactly constitutes “good” experiential reports? Perhaps by taking a long view on Buddhism and cognitive science, we can begin to truly take the middle way to experience, where we view all experiential reports as equally valid statements regarding some kind of subjective state. The question then becomes primarily longitudinal, i.e. do experiential reports demonstrate a kind of stability or consistency over time, how do trends in experiential reports relate to neural traits and states, and how do these phenomena interact with the particular cultural practices within which they are embedded. For me, this is the central contribution of enactive cognitive science and the best way forward for neurophenomenology.

Disclaimer: I am in no way suggesting enactivists cannot or should not study advanced buddhism if that is what they find interesting and useful. I of course realize that the M&L SRI is a very particular kind of meeting, and that many enactive cognitive scientists can and do work along the lines I am suggesting. My claim is regarding best practices for the core of 4th wave cognitive science, not the fringe. I greatly value the work done by the M&L and found the SRI to be an amazingly fruitful experience.

My response to Carr and Pinker on Media Plasticity

Our ongoing discussion regarding the moral panic surrounding Nicholas Carr’s book The Shallows continues over at Carr’s blog today, with his recent response to Pinker’s slamming of the book. I maintain that there are good and bad (frightening!!) things in both accounts: namely, Pinker’s stolid refusal to acknowledge the research I’ve based my entire PhD on, and Carr’s endless fanning of a one-sided moral panic.

Excerpt from Carr’s Blog:

Steven Pinker and the Internet

And then there’s this: “It’s not as if habits of deep reflection, thorough research and rigorous reasoning ever came naturally to people.” Exactly. And that’s another cause for concern. Our most valuable mental habits – the habits of deep and focused thought – must be learned, and the way we learn them is by practicing them, regularly and attentively. And that’s what our continuously connected, constantly distracted lives are stealing from us: the encouragement and the opportunity to practice reflection, introspection, and other contemplative modes of thought. Even formal research is increasingly taking the form of “power browsing,” according to a 2008 University College London study, rather than attentive and thorough study. Patricia Greenfield, a professor of developmental psychology at UCLA, warned in a Science article last year that our growing use of screen-based media appears to be weakening our “higher-order cognitive processes,” including “abstract vocabulary, mindfulness, reflection, inductive problem solving, critical thinking, and imagination.”

As someone who has enjoyed and learned a lot from Steven Pinker’s books about language and cognition, I was disappointed to see the Harvard psychologist write, in Friday’s New York Times, a cursory op-ed column about people’s very real concerns over the Internet’s influence on their minds and their intellectual lives. Pinker seems to dismiss out of hand the evidence indicating that our intensifying use of the Net and related digital media may be reducing the depth and rigor of our thoughts. He goes so far as to assert that such media “are the only things that will keep us smart.” And yet the evidence he offers to support his sweeping claim consists largely of opinions and anecdotes, along with one very good Woody Allen joke.

Right here I would like to point out the kind of leap Carr is making. I’d really like a closer look at the supposed evidence demonstrating that “our intensifying use of the Net and related digital media may be reducing the depth and rigor of our thoughts.” This is a huge claim! How does one define the ‘depth’ and ‘rigor’ of our thoughts? I know of exactly one peer-reviewed, high-impact paper demonstrating a loss of specifically executive function in heavy-media multitaskers. While there is evidence that, generally speaking, multitasking can interfere with some forms of goal-directed activity, I am aware of no papers directly linking specific forms of internet behavior to a drop in executive function. Furthermore, the HMM paper included in its measure of multitasking ‘watching TV’, ‘viewing funny videos’, and ‘playing videogames’. I don’t know about you, but for me there is definitely a difference between ‘work’ multitasking, in which I focus and work through multiple streams, and ‘play’ multitasking, in which I might casually surf the net while watching TV. The second claim is worse- what exactly is ‘depth’? And how do we link it to executive functioning?

Is Carr claiming that people with executive function deficits are incapable of, or impaired in, thinking creatively? If it takes me 10 years to publish a magnum opus, have I thought less deeply than the author who cranks out a feature-length popular novel every 2 years? ‘Depth’ involves a normative judgment about what separates ‘good’ thinking from ‘bad’ thinking, and to imply there is some kind of peer-reviewed consensus here is patently false. In fact, here is a recent review paper on fMRI creativity research (is this depth?) indicating that the existing research is so incredibly disparate and poorly defined as to be untenable. That’s the problem with Carr’s claims- he oversimplifies both the diversity of internet usage and the existing research on executive and creative function. To be fair to Carr, he does go on to do a fair job of dismantling Pinker’s frighteningly dogmatic rejection of generalizable brain plasticity research:

One thing that didn’t surprise me was Pinker’s attempt to downplay the importance of neuroplasticity. While he acknowledges that our brains adapt to shifts in the environment, including (one infers) our use of media and other tools, he implies that we need not concern ourselves with the effects of those adaptations. Because all sorts of things influence the brain, he oddly argues, we don’t have to care about how any one thing influences the brain. Pinker, it’s important to point out, has an axe to grind here. The growing body of research on the adult brain’s remarkable ability to adapt, even at the cellular level, to changing circumstances and new experiences poses a challenge to Pinker’s faith in evolutionary psychology and behavioral genetics. The more adaptable the brain is, the less we’re merely playing out ancient patterns of behavior imposed on us by our genetic heritage.

Here is my response, posted on Nick’s blog:

Hi Nick,

As you know from our discussion at my blog, I’m not really a fan of the extreme views given by either you or Pinker. However, I applaud the thorough rebuttal you’ve given here to Steven’s poorly researched response. As someone doing my PhD in neuroplasticity and cognitive technology, it absolutely infuriated me to see Steven completely handwave away a decade of solid research showing generalizable cognitive gains from various forms of media practice. To simply ignore findings from, for example, the Bavelier lab, which demonstrate reliable and highly generalizable cognitive and visual gains and plasticity, is to border on the unethically dogmatic.

Pinker isn’t well known for being flexible within cognitive science, however; he’s probably the only person even more dogmatic about nativist modularism than Fodor. Unfortunately, Steven enjoys a large public following, and his work has really been embraced by the anti-religion ‘brights’ movement. While on some levels I appreciate this movement’s desire to promote rationality, I cringe at how great scholars like Dennett and Pinker seem totally unwilling to engage with the expanding body of research that casts a great deal of doubt on the 1980s-era cogsci they built their careers on.

So I give you kudos there. I close, as usual, by saying that you’re presenting a ‘sexy’ and somewhat sensationalistic account that, while sure to sell books and generate controversy, is probably based more in moral panic than sound theory. I have no doubt that the evidence you’ve marshaled demonstrates the cognitive potency of new media. Further, I’m sure you are aware of the heavy-media multitasking paper demonstrating a drop in executive functioning in HMMs.

However, in the posts I’ve seen, you neglect to emphasize what those authors clearly did: that these findings are not likely to represent a true loss of function but rather are indicators of a shift in cognitive style. Your unwillingness to declare the normative element in your thesis regarding ‘deep thought’ is almost as chilling as Pinker’s total refusal to acknowledge the growing body of plasticity research. Simply put, I think you are aware that you’ve conflated executive processing with ‘deep thinking’, and are not really making the case you claim to be making.

Media is a tool like any other. Its outcome measures are completely dependent on how we use it and on our individual differences. You could make this case quite well with your evidence, but you seem instead to embrace the moral panic surrounding your work. It’s obvious that certain patterns, including the ones probably driving your collected research, will play on our plasticity to create cognitive differences. Plasticity is limited, however, and you really don’t engage with the most common theme in the mental training literature: balance and trade-offs. Your failure to acknowledge the economical and often conservative nature of the brain forces me to lump your work in with the decade that preceded your book, in which it was proclaimed that violent video games and heavy metal music would rot our collective minds. These things didn’t happen, except in those who were already at high risk, and furthermore these media produced unanticipated cognitive gains. I think if you want to be on the ‘not wrong’ side of history, you may want to introduce a little flexibility into your argument. I guess if it makes you feel better, for many in the next generation of cognition researchers, it’s already too late for a dogmatic thinker like Pinker.

Final thoughts?

Slides for my Zombies or Cyborgs Talk


