Enactive Bayesians? Response to “the brain as an enactive system” by Gallagher et al

Shaun Gallagher has a short new piece out with Hutto, Slaby, and Cole, and I felt compelled to comment on it. Shaun was my first mentor and is to thank for my understanding of what is at stake in a phenomenological cognitive science. I jumped on this piece when it came out because, as I’ve said before, enactivists often leave a lot to be desired when talking about the brain. That is to say, they more often than not leave it out entirely and focus instead on bodies, cultural practices, and other parts of our extra-neural milieu. As a neuroscientist who is enthusiastically sympathetic to the embodied, enactive approach to cognition, I find this worrisome. In practice, when I’ve tried to conduct “neurophenomenological” experiments, I often feel a bit left out in the rain when it comes time to construct, analyze, and interpret the data.

As an “enactive” neuroscientist, I often find the de-emphasis of brains a bit troubling. For one thing, the radically phenomenological crew tends to make a lot of claims about altering the foundations of neuroscience. Things like information processing and mental representation are said to be stale, Cartesian constructs that lack ontological validity and need to be replaced. This is fine- I’m totally open to the limitations of our current explanatory framework. However, as I’ve argued here, I believe neuroscience still has great need of these tools and that dynamical systems theory is not ready for prime-time neuroscience. We need a strong positive account of what we should replace them with, and that account needs to act as a practical and theoretical guide to discovery.

One worry I have is that enactivism quickly begins to look like a constructivist version of behaviorism, focusing on behavior to the exclusion of the brain. Of course I understand that this is a bit unfair; enactivism is about taking a dynamical, encultured, phenomenological view of the human being seriously. Yet I believe that to accomplish this we must also understand the function of the nervous system. While enactivists will often give token credit to the brain- affirming that it is indeed an ‘important part’ of the cognitive apparatus- they seem quick to value things like clothing and social status over gray matter. Call me old fashioned, but you could strip me of my job, titles, and clothing tomorrow and I’d still be capable of 80% of whatever I was before. Granted, my cognitive system would undergo a good deal of strain, but I’d still be fully capable of vision, memory, speech, and even consciousness. The same can’t be said of me if you start magnetically stimulating my brain in interesting and devious ways.

I don’t want to get derailed arguing about the explanatory locus of cognition, as I think one’s stance on the matter largely comes down to whatever one’s intuition pump says is important. We could argue about it all day; what matters more than where in the explanatory hierarchy we place the brain is how that framework lets us predict and explain neural function and behavior. This is where I think enactivism often fails; it’s all fire and bluster (and rightfully so!) when it comes to the philosophical weaknesses of empirical cognitive science, yet mumbles and missteps when it comes to giving positive advice to scientists. I’m all for throwing out the dogma and getting phenomenological, but only if there’s something useful ready to replace the methodological bathwater.

Gallagher et al’s piece starts:

 “… we see an unresolved tension in their account. Specifically, their questions about how the brain functions during interaction continue to reflect the conservative nature of ‘normal science’ (in the Kuhnian sense), invoking classical computational models, representationalism, localization of function, etc.”

This is quite true, and it is an important tension throughout much of the empirical work done under the heading of enactivism. In my own group we’ve struggled to go from the inspiring war cries of anti-representationalism and interaction theory to the hard constraints of neuroscience. It often happens that while the story or theoretical grounding is suitably phenomenological and enactive, the methodology and its interpretation end up cognitivist in nature.

Yet I think this difficulty points to the more difficult task ahead if enactivism is to succeed. Science is fundamentally about methodology, and methodology reflects and is constrained by one’s ontological/explanatory framework. We measure reaction times and neural signal lags precisely because we buy into a cognitivist framework of cognition, which holds, roughly, that more complex computations take longer to process and recruit greater neural resources. The catch is, without these things it’s not at all clear how we are to construct, analyze, and interpret our data. As Gallagher et al correctly point out, when you set out to explain behavior with these tools (reaction times and brain scanners), you can’t really claim to be doing some kind of radical enactivism:

 “Yet, in proposing an enactive interpretation of the MNS Schilbach et al. point beyond this orthodox framework to the possibility of rethinking, not just the neural correlates of social cognition, but the very notion of neural correlate, and how the brain itself works.”

We’re all in agreement there: I want nothing more than to understand exactly how it is our cerebral organ accomplishes the impressive feats of locomotion, perception, homeostasis, and so on right up to consciousness and social cognition. Yet I’m a scientist and no matter what I write in my introduction I must measure something- and what I measure largely defines my explanatory scope. So what do Gallagher et al offer me?

 “The enactive interpretation is not simply a reinterpretation of what happens extra-neurally, out in the intersubjective world of action where we anticipate and respond to social affordances. More than this, it suggests a different way of conceiving brain function, specifically in non-representational, integrative and dynamical terms (see e.g., Hutto and Myin, in press).”

Ok, so I can’t talk about representations. Presumably we’ll call them “processes” or something like that. Whatever we call them, neurons are still doing something, and that something is important in producing behavior. Integrative- I’m not sure what that means, but I presume it means that whatever neurons do, they do it across sensory and cognitive modalities. Finally we come to dynamical- here is where it gets really tricky. Dynamical systems theory (DST) is an incredibly complex mathematical framework dealing with differential equations, topology, and chaos theory. Can DST guide neuroscientific discovery?

This is a tough question. My own limited exposure to DST prevents me from making hard conclusions here. For now let’s set it aside- we’ll come back to it in a moment. First I want to get a better idea of how Gallagher et al characterize contemporary neuroscience, the source of this tension in Schilbach et al:

 “Functional MRI technology goes hand in hand with orthodox computational models. Standard use of fMRI provides an excellent tool to answer precisely the kinds of questions that can be asked within this approach. Yet at the limits of this science, a variety of studies challenge accepted views about anatomical and functional segregation (e.g., Shackman et al. 2011; Shuler and Bear 2006), the adequacy of short-term task-based fMRI experiments to provide an adequate conception of brain function (Gonzalez-Castillo et al. 2012), and individual differences in BOLD signal activation in subjects performing the same cognitive task (Miller et al. 2012). Such studies point to embodied phenomena (e.g., pain, emotion, hedonic aspects) that are not appropriately characterized in representational terms but are dynamically integrated with their central elaboration.”

Claim one is what I’ve just argued above: that fMRI and similar tools presuppose computational cognitivism. What follows, I feel, is a mischaracterization of cognitive neuroscience. First we have the typical bit about functional segregation being extremely limited. It surely is, and I think most neuroscientists today would agree that segregation is far from the whole story of the brain- which is precisely why the field is undeniably and swiftly moving towards connectivity and functional integration rather than segregation. I’d wager that for a few years now the majority of published cogneuro papers have focused on connectivity rather than blobology.

Next we have a sort of critique of the use of focal cognitive tasks. This almost seems like a critique of science itself; while such tasks are certainly not without limits, neuroscientists rely on them in order to make controlled assessments of phenomena. There is nothing a priori that says a controlled experiment is necessarily cognitivist, any more than a controlled physics experiment must necessarily be Newtonian rather than relativistic. And again, I’d characterize contemporary neuroscience as being positively in love with “task-free” resting state fMRI. So I’m not sure what this criticism is aimed at.

Finally there is this bit about individual differences in BOLD activation. This one I think is really a red herring; there is nothing in fMRI methodology that prevents scientists from assessing individual differences in neural function and architecture. The group I’m working with in London specializes in exactly this kind of analysis, which is essentially just creating regression models with neural and behavioral independent and dependent variables. There certainly is a lot of variability in brains, and neuroscience is working hard and making strides towards understanding those phenomena.

 “Consider also recent challenges to the idea that so-called “mentalizing” areas (“cortical midline structures”) are dedicated to any one function. Are such areas activated for mindreading (Frith and Frith 2008; Vogeley et al. 2001), or folk psychological narrative (Perner et al. 2006; Saxe & Kanwisher 2003); a default mode (e.g., Raichle et al. 2001), or other functions such as autobiographical memory, navigation, and future planning (see Buckner and Carroll 2006; 2007; Spreng, Mar and Kim 2008); or self-related tasks (Northoff & Bermpohl 2004); or more general reflective problem solving (Legrand and Ruby 2010); or are they trained up for joint attention in social interaction, as Schilbach et al. suggest; or all of the above and others yet to be discovered?”

I guess this paragraph is supposed to get us thinking that these functions seem really different, so clearly the localizationist account of the MPFC fails. But as I’ve just said, this is a bit of a red herring- most neuroscientists no longer believe exclusively in a localizationist account. In fact, more and more I hear top neuroscientists disparaging overly blobological accounts and referring to prefrontal cortex as a whole. Functional integration is here to stay. Further, I’m not sure I buy their argument that these functions are so disparate- it seems clear to me that they all share a social, self-related core, probably related to the default mode network.

Finally, Gallagher and company set out to define what we should be explaining- behavior as “a dynamic relation between organisms, which include brains, but also their own structural features that enable specific perception-action loops involving social and physical environments, which in turn effect statistical regularities that shape the structure of the nervous system.” So we do want to explain brains, but we want to understand that their setting configures both neural structure and function. Fair enough; I think you would be hard pressed to find a neuroscientist who doesn’t agree that factors like environment and physiology shape the brain. [edit: thanks to Bryan Patton for pointing out in the comments that Gallagher’s description of behavior here is strikingly similar to Friston’s Free Energy Principle / predictive coding account of biological organisms]

Gallagher asks then, “what do brains do in the complex and dynamic mix of interactions that involve full-out moving bodies, with eyes and faces and hands and voices; bodies that are gendered and raced, and dressed to attract, or to work or play…?” I am glad to see that my former mentor and I agree at least on the question at stake, which seems to be, what exactly is it brains do? And we’re lucky in that we’re given an answer by Gallagher et al:

“The answer is that brains are part of a system, along with eyes and face and hands and voice, and so on, that enactively anticipates and responds to its environment.”

 Me reading this bit: “yep, ok, brains, eyeballs, face, hands, all the good bits. Wait- what?” The answer is “… a system that … anticipates and responds to its environment.” Did Karl Friston just enter the room? Because it seems to me like Gallagher et al are advocating a predictive coding account of the brain [note: see clarifying comment by Gallagher, and my response below]! If brains anticipate their environment then that means they are constructing a forward model of their inputs. A forward model is a Bayesian generative model that estimates the posterior probability of a stimulus by combining prior predictions about its nature with incoming sensory evidence. We could argue all day about what to call that model, but clearly what we’ve got here is a brain using strong internal models to make predictions about the world. What exactly is “enactive” about these forward models remains, to me, extremely ambiguous.
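To make that analogy concrete, here is a minimal sketch of what “anticipating the environment” looks like when cashed out as Bayesian inference- a toy example of my own, not anything drawn from Gallagher et al or Friston, with every number invented for illustration. A Gaussian prior prediction is combined with a noisy observation, weighted by precision, to yield a posterior:

```python
def gaussian_posterior(prior_mean, prior_var, obs, obs_var):
    """Combine a Gaussian prior prediction with a noisy observation.

    Returns the posterior mean and variance -- the textbook conjugate
    update that predictive-coding schemes implement implicitly.
    """
    # Precision (inverse variance) weights how much each source counts.
    prior_precision = 1.0 / prior_var
    obs_precision = 1.0 / obs_var
    post_var = 1.0 / (prior_precision + obs_precision)
    post_mean = post_var * (prior_precision * prior_mean + obs_precision * obs)
    return post_mean, post_var

# The brain "anticipates" a stimulus value of 0.0, then receives a noisy 1.0.
print(gaussian_posterior(prior_mean=0.0, prior_var=1.0, obs=1.0, obs_var=0.5))
# Posterior mean ~0.67: pulled toward the data in proportion to its precision.
```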

To this extent, Gallagher includes “How an agent responds will depend to some degree on the overall dynamical state of the brain and the various, specific and relevant neuronal processes that have been attuned by evolutionary pressures, but also by personal experiences” as a description of how a prediction can be enactive. But none of this is precluded by the predictive coding account of the brain. The overall dynamical state (intrinsic connectivity?) of the brain amounts to noise that must be controlled through increasing neural gain and precision. That is, a Bayesian model presupposes that the brain is undergoing exactly these kinds of fluctuations and takes steps to produce optimal behavior in the face of such noise.

Likewise the Bayesian model is fully hierarchical- at all levels of the system, local neural function is constrained and configured by predictions and error signals from the levels above and below it. In this sense, global dynamical phenomena like neuromodulation structure prediction in ways that constrain local dynamics. These relationships can be fully non-linear and dynamical in nature (see Friston 2009 for review). Of the other bits- evolution and individual differences- Karl would surely say that the former leads to variation in first priors and the latter is the product of agents optimizing their behavior in a variable world.
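Again, just to show that this hierarchical story is perfectly concrete, here is a toy two-level sketch of precision-weighted message passing- the learning rate, precisions, and variable names are all my own invention for illustration, not anything lifted from Friston’s papers:

```python
def predictive_coding_step(x1, x2, y, pi_y=2.0, pi_1=1.0, lr=0.1):
    """One relaxation step of a toy two-level linear predictive-coding model.

    Level 2 (x2) predicts level 1 (x1), which in turn predicts the input y.
    Each level is nudged by the precision-weighted errors from the levels
    above and below it, i.e. gradient descent on the summed squared errors.
    """
    eps_y = y - x1          # bottom-up error: input vs. level-1 prediction
    eps_1 = x1 - x2         # top-down error: level 1 vs. level-2 prediction
    x1 += lr * (pi_y * eps_y - pi_1 * eps_1)   # pulled from below and above
    x2 += lr * (pi_1 * eps_1)                  # pulled only from below
    return x1, x2

x1, x2 = 0.0, 0.0
for _ in range(200):                 # relax the hierarchy onto the input
    x1, x2 = predictive_coding_step(x1, x2, y=1.0)
print(round(x1, 2), round(x2, 2))    # both levels settle near 1.0
```

Turning up pi_y relative to pi_1 is exactly the “increasing neural gain and precision” move mentioned above: the hierarchy then trusts its input more than its own prior.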

So there you have it- enactivist cognitive neuroscience is essentially Bayesian neuroscience. If I want to fulfill Gallagher et al’s prescriptions, I need merely use resting state, connectivity, and predictive coding analysis schemes. Yet somehow I think this isn’t quite what they meant- and there, for me, lies the true tension in ‘enactive’ cognitive neuroscience. But maybe it is- Andy Clark recently went Bayesian, claiming that extended cognition and predictive coding are totally compatible. Maybe it’s time to put away the knives and stop arguing about representations. Yet I think an important tension remains: can we explain all the things Gallagher et al list as important using prior and posterior probabilities? I’m not totally sure, but I do know one thing- these concepts make it a hell of a lot easier to actually analyze and interpret my data.

fake edit:

I said I’d discuss DST, but ran out of space and time. My problem with DST boils down to this: it’s descriptive, not predictive. As a scientist it is not clear to me how one actually applies DST to a given experiment. I don’t see any kind of functional ontology emerging by which to apply the myriad of DST measures in a principled way. Mental chronometry may be hokey and old fashioned, but it’s easy to understand and can be applied to data and interpreted readily. This is a huge limitation for a field as complex as neuroscience, and as rife with bad data. A leading dynamicist once told me that in his entire career “not one prediction he’d made about (a DST measure/experiment) had come true”, and that to apply DST one just needed to “collect tons of data and then apply every measure possible until one seemed interesting”. To me this is a data fishing nightmare and does not represent a reliable guide to empirical discovery.

26 thoughts on “Enactive Bayesians? Response to ‘the brain as an enactive system’ by Gallagher et al”

  1. Nice piece Micah.

    “a dynamic relation between organisms, which include brains, but also their own structural features that enable specific perception-action loops involving social and physical environments, which in turn effect statistical regularities that shape the structure of the nervous system.”

    Hello Free Energy…

    DST is easy- just use the ODE solver in Matlab to find the set of differentials that best fit your data. :)
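    For what it’s worth, that recipe looks roughly like this in Python/SciPy terms- the toy decay model and the noisy “data” below are invented purely for illustration:

    ```python
    import numpy as np
    from scipy.integrate import odeint
    from scipy.optimize import minimize_scalar

    # Toy "data": a decaying signal we pretend came out of an experiment.
    t = np.linspace(0, 5, 50)
    data = 2.0 * np.exp(-0.8 * t) + 0.05 * np.random.randn(t.size)

    def model(x, t, k):
        """The candidate differential equation dx/dt = -k * x."""
        return -k * x

    def sse(k):
        """Squared error between the ODE solution and the data."""
        fit = odeint(model, 2.0, t, args=(k,)).ravel()
        return np.sum((fit - data) ** 2)

    best = minimize_scalar(sse, bounds=(0.01, 5.0), method="bounded")
    print(best.x)  # recovers a decay rate near 0.8
    ```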

    Not sure I entirely agree with: “Claim one is what I’ve just argued above: that fMRI and similar tools presuppose computational cognitivism.” Increasing computation leads to more BOLD (an increase in work in the informational sense), but increasing the frequency with which a homoclinic orbit or a heteroclinic cycle completes would also seem to lead to more BOLD (an increase in work in the thermodynamic sense).

    • Thanks for the kind words and your comment Brian! This post sort of tumbled out of me when I was having my morning coffee. I agree totally- the FEP is compatible with Gallagher’s claims, though I think they may make uneasy bedfellows. In fact, during Karl’s entire lecture I was thinking “in a broad sense, this is compatible with enactivism”.

      As for the bit about BOLD = cognitivism, I agree I was being a bit overly rhetorical there, and measuring BOLD does not necessarily equal mental chronometry. Of course it comes down to precisely how you measure it and what inferences you make- I think many would say inferring free energy in a dynamic causal model is not cognitivist in the strict sense but instead a specific implementation of DST. Perhaps the problem is equating computationalism with cognitivism, which doesn’t seem strictly necessary.

  2. Micah,
    I won’t call you old fashioned, but let me say it’s great when a former student and now colleague can set out some nice thoughtful arguments in disagreement. I’m not sure we entirely disagree- but you have a finger on the pulse of cutting-edge neuroscience (in London and Aarhus) and can see some things that I can’t.

    On one point I think you were too quick to translate ‘system’ into ‘brain’. The system that I mean is not just the brain. I don’t deny that the brain may be set up to be anticipatory, and that neuroscience needs to tell us the details about that and how it works. But that’s one part of the puzzle. The system is more than just the brain, and it will take more than neuroscience to understand it. That was our point. I assume that you don’t disagree on that.


    • Hi Shaun!

      Thanks for commenting, it’s great to have your response here! They say you never forget where you come from, and that’s definitely true in my case. You certainly did a great job making me aware of the assumptions that cognitive neuroscience relies on, and I guess I’m at that stage where I’m really testing the ontological waters and pushing out against my own (embodied) assumptions. You’re right that we totally agree it takes much more than neuroscience to understand human beings. You know me though- I’ve always been partial to brains, and it was only a matter of time before my old neural roots started reconfiguring my theoretical standpoint. In that sense I’ve often become frustrated with what seems to me an overly theoretical approach to enactivism. I’m not really striking out here at you, but I was disappointed by how my recent talk at the Aarhus neurophenomenology workshop was received by folks like Michel Bitbol.

      I basically pleaded for a new neurophenomenology focused on the practical question of how we ask subjects about their experience, and was told that “the phenomenological interview does not (full stop) introduce bias”. I think Varela really strove to combine the empirical and theoretical in interesting ways, and sadly a lot of that project stalled with his death. In the meantime, scientists keep right on asking about subjects’ experience with overly simplistic Likert scales and inattention to cultural factors.

      Perhaps the problem is that Varela left us at a time when he was still arguing quite a bit about representations in the brain, and that became the main focus of enactivism. It seems like, really, most enactivists are more concerned with particular neglected behaviors (like live interaction) than with arguing about neural ontology/information theory. It’s worth noting that Chris Frith has basically made it his mission these past 5 years to argue that without truly live interaction, one is not studying “core” social cognition.

      On a somewhat funny note, Michel did excitedly claim that Husserl was very Bayesian, so I guess everyone is getting on that bandwagon these days!

      Thanks for your response,

      p.s. Sorry if my tone was a bit harsh in places! The blog thing is very off the cuff, stream-of-consciousness. I’m sure if I turned this into a publication I’d spend a lot more time being fair to enactivism, but here I just wanted to try and encapsulate my issues while they were fresh in mind.

      Just rereading your comment, and I think that you make a great point about systems versus brains. It’s clear that you were trying to suggest that it’s the entire embodied system that anticipates the world, i.e. eye saccades, gestures, and so on. One thing I think is important to be clear about here is that this claim is totally compatible with the FEP/predictive coding account of the brain. On Friston’s view, internal prediction errors actually necessitate actions that reduce those errors; we actively sample our world in ways that conform to our priors. So it seems to me like yes, the embodied/enactive view can be totally compatible with the FEP!
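      To put a cartoon on that point (my own toy illustration with made-up learning rates- nothing here is Friston’s actual formulation): the same prediction error can be quenched either by revising the belief or by acting on the world, and when the “action” channel dominates, the world gets pulled toward the prior.

      ```python
      def active_inference_step(belief, world, lr_belief=0.05, lr_action=0.5):
          """One step of a toy agent that can reduce prediction error two ways:
          by revising its belief (perception) or by changing the world (action)."""
          error = world - belief            # sensory prediction error
          belief += lr_belief * error       # perception: small belief update
          world -= lr_action * error        # action: resample/change the world
          return belief, world

      belief, world = 0.0, 10.0             # strong prior of 0, world says 10
      for _ in range(30):
          belief, world = active_inference_step(belief, world)
      print(round(belief, 2), round(world, 2))  # both end near 0.9: the world
                                                # has been dragged toward the prior
      ```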

  3. Nice to see some discussions coming out of UCF (I’m an undergrad alum). I am now working on DST with respect to neural dynamics in perceptual systems (primarily vision). Our models, while fairly simple, make testable predictions that help to guide our ongoing empirical work. I admittedly just scanned over your article, but plan to read it carefully in the next day or two. Maybe we could chat about how DST can be applied to neural systems in a way that is not merely descriptive.

    • Okay, I’ve read your piece. I agree that this tension in some ways feels artificial, like the fight is over which word to use rather than there being an actual difference in the conceptualization. I’ve made a similar critique on a blog post which describes the robot ‘big dog’ (inadequately, I argue). Here’s the link, look to the comment section if you’re interested. http://www.psychologytoday.com/blog/cognition-without-borders/201206/tale-two-robots

      • Thanks Joe, I had a pretty good laugh when I saw figure 5. I remember watching the video and thinking “this thing is definitely using a predictive coding scheme to get error adaptation that smooth”. Feels good to be right sometimes :)

  4. Micah, I really enjoyed this piece, and see it as capturing a central dynamic – or perhaps difference in orientation – in how people approach thinking about neural function. Neuroanthropology, like the enactivist approach of Gallagher, is probably strongest in trying to reconfigure and better integrate what a “dynamical, encultured, phenomenological view of the human being” means for social science research, and from there, implications for neuroscience. We know there are missing pieces in how the brain is being viewed, and we work hard at trying to account for them.

    That leaves the nervous system itself still a bit too much of a blank slate, where the constructivist view makes the brain conform to the system. I think the new generation takes the brain more seriously, and has a much better grasp of structure and function and how they relate to behavior. But the philosophical position is still more constructivist than determinist (and I’m happy to defend that as closer to what it means to be human…). The problem arises, as you rightly point out, in how such an approach meshes with actual neuroscience research. A big question going forward… Hopefully I’ll get a post up in the next day or two on Neuroanthropology that fleshes out some initial thoughts I’ve had.

    One more comment- I really liked your point in the comments: “I basically pleaded for a new neurophenomenology focused on the practical question of how we ask subjects about their experience, and was told that ‘the phenomenological interview does not (full stop) introduce bias’.” First, inevitably there is bias, and second, the practical question is crucially important, particularly for the sort of work that I do on addiction, experience, and neuroanthropology.

    If I had to boil it down to four things, it would be: (a) find questions that help get informants to put things into words, since the words become the data (and can be analyzed using both a quantitative and an interpretivist approach); (b) take informants step-by-step through the experience, behavior, or situation that you are interested in; don’t take their initial answer at face value, as many of their responses are stand-in cultural responses that have more to do with conversation and conversational rules than actual experience; (c) ask from multiple angles and ask multiple times- what did you feel? what did you think? what was happening then? etc.; and (d) have explicit questions that come from neural, phenomenological, and cultural views; in your case, particularly to get data that might help shed light on the neural side of things, and help match what we know about brain function with experience.

    One final note – I share your disappointment with Clark trying out the Bayesian approach. Something does get lost in that move, even if other things are gained.

    Best, Daniel

  5. Dear Micah,

    thanks a lot for this comment- I wholeheartedly sympathize with your frustration about the struggle to implement the enactive framework for empirical research in humans. And this is true even though I am a behavioral scientist- how much harder must it be for a neuroscientist. As evident from the resonance this blog post received, your critique is timely and points to important issues, especially your demand for a strong positive account of how to do enactive neuroscience that can be implemented in concrete experiments, using the tools we have at hand. Yet there are a number of points where I struggle to follow your argument, some of which may be important for the bottom line of this post.

    Let me address them one by one:

    1.) “Call me old fashioned but, you could strip me of job, titles, and clothing tomorrow and I’d still be capable of 80% of whatever I was before.”

    There are several things to be said to that. Firstly, it is difficult to match levels of cultural perturbation and levels of neural perturbation. There are situations that will drive you insane or drive you to suicide- just as there are modulations of brain activity (tDCS, let’s say) that do not faze you that much. Secondly, in enactive accounts there is often emphasis on the genesis of a cognitive capacity- even if you can walk and talk all by yourself, you would not have been able to acquire language or upright walking had you not developed in a cultural context (cf. Kyselo et al.’s work on locked-in syndrome and embodiment for analogous arguments). But maybe you just want to emphasize that the brain is important, too- it sure is, and no less than culture/environment/body.

    2.) “It often happens that while the story or theoretical grounding is suitably phenomenological and enactive, the methodology and their interpretation are necessarily cognitivist in nature.”

    This is a very important point you are making. Enactivism wants to look at behavior and cognition in the context of brain-body-environment interaction. For neuroscience, this may be possible for simple organisms, such as fish or insects, using a neuro-ethological approach. It gets, however, much more complicated for mammalian and especially human neuroscience. Ultimately, we can only truly study disconnected parts of the loop. I am with Gallagher et al. that no method is inherently cognitivist or enactivist. If your research agenda makes an effort to work towards a bigger picture and the interpretation of your results points out the limitations of the approach, most results can form part of an enactive cognitive explanation (we measure this correlation that suggests a functional link between neural activity Y and behavior X, but we don’t really know the underlying mechanism or functional context). However, it is very seductive to go with the easier and stronger localist/cognitivist interpretation of a result that can be sold as a simple, convincing, and complete story to a high-impact journal (this brain area computes this function using this kind of circuit). I am the wrong person to point fingers here. After all, we are all under pressure to publish or perish. This is a sad dilemma. But not a theoretical limitation.

    3.) “Functional integration is here to stay.”

    I agree that neuroscience is starting to move away from the blobs and looking more at larger-scale functional connectivity, and this is a step in the right direction. Yet, if I understand it correctly, these kinds of explanations still look for the answers inside the brain (more complex information processing)- this is a problem from an enactive perspective (this may be what you and Shaun were discussing). A full enactive explanation can never be a circuit diagram.

    One way or the other, keep up the good work!

  6. Hi Micah,

    nice post!

    I read your question to enactivists as: “If I want to be an enactive neuroscientist, how do I go about doing experiments?”

    I wonder if a paper that Ezequiel and I just published, called the Interactive Brain Hypothesis, goes some way towards an answer to your question. It is important to say that the work we do in that paper is part of the larger (developing) participatory sense-making framework- the enactive approach to social cognition- in which the role of the brain is only one question of many to be addressed. This speaks to Marieke’s point about enactive explanations always being more encompassing than neural explanations, and to your point in conversation with Shaun about the issue at stake being a system of which the brain is only a part, with all the implications of these assertions for how to look at the brain- implications which I think, as you seem to suggest with this piece, we need to get much clearer about.

    That said, in our paper we put forward a general hypothesis about the role of the brain in social situations, according to which “interactive experience and skills play enabling roles in both the development and current function of social brain mechanisms, even in cases where social understanding happens in the absence of immediate interaction” (taken from the abstract). We discuss neuroscientific evidence to support this. Then, in the final part of the paper, we provide 5 guidelines for neuroscientific research. In the background of this is our idea of social interaction defined as a self-organizing process that can, as such, influence the individuals involved in interaction. This entails that social interactions can have synergistic effects, which show up in their dynamic signature as a reduction of the dimensionality of the system and an increased mutual predictability between interpersonal variables. From this, it should be possible to design neuroscientific experiments.
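    To give a crude illustration of what such a dynamic signature might look like (a toy sketch with synthetic signals- the coupling model and measures below are invented for illustration and are not taken from the paper itself): stronger coupling between two “interpersonal” variables lowers the pair’s effective dimensionality and raises how well one predicts the other.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def coupled_pair(coupling, n=2000):
        """Two toy 'interpersonal' signals; `coupling` mixes one into the other."""
        a, b = rng.standard_normal((2, n))
        return a, coupling * a + (1 - coupling) * b

    def dimensionality(x, y):
        """Participation ratio of the pair's covariance eigenvalues:
        2.0 for independent signals, approaching 1.0 as the pair
        collapses onto a single shared dimension."""
        lam = np.linalg.eigvalsh(np.cov(np.vstack([x, y])))
        return lam.sum() ** 2 / (lam ** 2).sum()

    def mutual_predictability(x, y):
        """Variance in y explained by a linear prediction from x (R^2)."""
        return np.corrcoef(x, y)[0, 1] ** 2

    for c in (0.0, 0.8):
        x, y = coupled_pair(c)
        print(c, round(dimensionality(x, y), 2), round(mutual_predictability(x, y), 2))
    # Stronger coupling -> lower effective dimensionality, higher predictability.
    ```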

    If you like, we can discuss specific examples that can be found in the paper.

  7. Sorry, I wholeheartedly disagree that “… a system that … anticipates and responds to its environment.” could only be described by Bayesian models. Why not frequentist models? Or for that matter why would a model necessarily invoke statistics at all?

    • Unless a given system can know all of the possible states it could assume and/or has perfect, direct access to the external world (perfect perception), any representations it maintains or models it develops must necessarily be probabilistic, and hence any operations on the contents of those representations or on the model predictions must involve some kind of statistics.

    • I didn’t say it could only be described as Bayesian, just that the basic metaphor is extremely similar in kind to predictive coding accounts. I’m not sure what it would mean for a model to be non-statistical, but I think Bryan did a much better job handling your question.

    • We are talking about models here. Classical mechanics and Maxwell’s Laws are, for example, models. Neither model is stated in terms of statistics.

      • Bryan

        I will look at the referenced site.

        I disagree with your statement: “… indeed Heisenberg has famously put a hard limit on what you can know about quantum states.” The Heisenberg uncertainty principle pertains to the spread in measurement values of simultaneously measured noncommuting observables. In principle, by making more precise instruments, a system can be forced more precisely into a single state by measuring one of the observables.

        • What I meant was that for a given system under classical terms it is at least possible to have or obtain a complete description of all of its states. Under quantum terms it is not possible to obtain such exhaustive state descriptions precisely because of the uncertainty principle (actually because matter has a wave-like nature).

          Under classical mechanics you can calculate the partition function of the system and in theory you could find out the value of each individual state in that ensemble. Contrast this with quantum mechanical (or field-theoretic) partition functions (really density matrices), where the underlying conjugate states are fundamentally limited in how far their uncertainty can be resolved; Heisenberg’s uncertainty principle provides a lower bound on the entropic uncertainty of a Fourier (quantum or wave) system:
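          H(x) + H(p) \ge \ln(\pi e \hbar)

          (this is the standard Białynicki-Birula–Mycielski form of the relation for conjugate position and momentum, where H denotes the differential entropy of the corresponding probability density).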


      • Hi- thanks for your comment. Strictly speaking, the Free Energy Principle is not statistical, it’s thermodynamic. The Bayesian Brain would be a statistical approximation of the underlying thermodynamic principle of free energy minimization.

        • Free Energy is information theoretic; it just wishes it was also thermodynamic. Until someone proves that information-theoretic entropy and thermodynamic entropy are the same or equivalent, the use of Gibbs energy in Free Energy formulations is speculative but promising.

          Classical Mechanics and Maxwell’s equations are indeed models without (necessarily) any statistics (although look at Statistical Mechanics). One reason why they tend not to be “statistical” is the underlying assumption that you can indeed get unfettered access to the states of nature, and thus the mathematical descriptions are as close a description of the natural process as can be had. What underlies these models though is highly probabilistic even if the form of the probability is very different from classical conceptions.

      • Bryan and Micah

        By some of the language you are using, it looks like the language of E.T. Jaynes (I had a wonderful email exchange with him a few years before his death!), as it pertains to the connection between information theory and the Gibbs distribution of equilibrium thermodynamics, has crept into neuroscience. Is that true?

        Note: The brain is not in thermodynamic equilibrium. The field of non-equilibrium thermodynamics is a thorny one, with no distribution analogous to Gibbs’s to help us (presently, and probably ever).


        I do not understand the point you are trying to make with this sentence.
        “What underlies these models though is highly probabilistic even if the form of the probability is very different from classical conceptions.”

        • Free Energy is very much based on the Gibbs conception; indeed, Free Energy minimisation is of the same form as the Gibbs algorithm. That is, the brain attempts to minimise the long-term average of surprisal (Friston equates this directly with entropy, i.e. Free Energy) on the assumption that the MaxEnt distribution is the normal distribution (the Laplace assumption), that is, the distribution which expresses the maximum entropy given the first- and second-order sufficient statistics (mean and variance). Importantly, the free energy is a tight bound on surprisal but is never exactly equal to it. Have a read of this page:


          to get an idea for how the Bayes formulation (Free Energy) works.
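          For reference, the standard decomposition behind that bound (a general fact about variational inference, not anything specific to the page linked above) is

          F = \mathbb{E}_{q}\left[\ln q(\vartheta) - \ln p(y,\vartheta)\right] = -\ln p(y) + D_{KL}\!\left[\, q(\vartheta) \,\|\, p(\vartheta \mid y) \,\right] \;\ge\; -\ln p(y),

          so minimising free energy with respect to q simultaneously tightens the bound on surprisal and drives q towards the true posterior; the bound becomes an equality only when q matches the posterior exactly.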

          The brain (indeed most forms of life) lives very far from equilibrium, but that is fine: a given system does not need to be in equilibrium itself, merely in equilibrium as a long-term average with the Universe.

          DS: Sorry, that oblique sentence was referring to the difference between classical probability and quantum probability, the former of which is in most cases due to a lack of complete knowledge of a system. Contrast this with quantum probabilities, which at least at the micro level are important (not so much at macro levels). The probability in quantum states cannot be countered by acquiring more information; indeed, Heisenberg has famously put a hard limit on what you can know about quantum states.

  8. Bryan

    I still disagree. A quantum state is fully specified by the values of its complete set of commuting observables. Commuting observables can, according to Heisenberg, be measured simultaneously with no spread in their values.

    I am not sure why you have brought statistical mechanics, the uncertainties of which are not necessarily based on Heisenberg’s uncertainty principle, into the discussion of the specification of a state unless what you are trying to say is that quantum statistical states, as described by a density operator, usually incorporate both kinds of uncertainty.

    But I am going to drop this discussion because it is straying far from neuroscience.
