UPDATED WITH ANSWERS – summary of the major questions [and answers] asked at #LSEbrain about the Bayesian Brain Hypothesis

OK, here are the answers! I meant to release them last night but was a bit delayed by sleep 🙂

OK, it is about 10pm here and I’ve got an HBM abstract to submit, but given that the LSE wasn’t able to share the podcast, I’m just going to quickly summarize some of the major questions brought up either by the speakers or the audience during the event.

For those who don’t know, the LSE hosted a brief event tonight exploring the question “is the brain a predictive machine?”, with panelists Paul Fletcher, Karl Friston, Demis Hassabis, and Richard Holton, chaired by Benedetto De Martino. I enjoyed the event as it was about the right length and the discussion was lively. For those familiar with Bayesian brain/predictive coding/FEP there wasn’t much new information, but it was cool to see an outside audience react.

These were the principal questions that came up in the course of the event. Keep in mind these are just reproduced from my (fallible) memory:

  • What does it mean if someone acts, thinks, or otherwise behaves irrationally or non-optimally? Can their brain still be Bayesian at a sub-personal level?
    • There were a variety of answers to this question, the most basic being that optimal behavior depends on one’s priors, so someone with a mental disorder or poor behavior may be acting optimally with respect to their own priors. Karl pointed out that this means optimal behavior really is different for every organism and person, rendering the notion of optimality somewhat trivial.
  • Instead of changing the model, is it possible for the brain to change the world so it fits with our model of it?
    • Yes, Karl calls this active inference and it is a central part of his formulation of the Bayesian brain. Active inference allows you to either re-sample or adjust the world such that it fits with your model, and brings a kind of strong embodiment into the Bayesian brain. This is because the kinds of actions (and perceptions) one can engage in are shaped by the body and internal states.
  • Where do the priors come from?
    • Again the answer from Karl – evolution. According to the FEP, organisms that survive do so in virtue of their ability to minimize free energy (prediction error). This means that for Karl evolution ‘just is the refinement and inheritance of our models of the world’; our brains come to reflect the structure of the world, which is then passed on through natural selection and epigenetic mechanisms.
  • Is the theory falsifiable and if so, what kind of data would disprove it?
    • From Karl – ‘No. The theory is not falsifiable in the same sense that Natural Selection is not falsifiable’. At this there were some roars from the crowd and philosopher Richard Holton was asked how he felt about this statement. Richard said he would be very hesitant to endorse a theory that claimed to be non-falsifiable.
  • Is it possible for the brain to ‘over-fit’ the world/sensory data?
    • Yes, from Paul we heard that this is a good description of what happens in psychotic and other mental disorders, where an overly precise belief may resist any attempt to dislodge it, or any evidence to the contrary. This led back into more discussion of what it means for an organism to behave in a way that is not ‘objectively optimal’.
  • If we could make a Bayesian deep learning machine would it be conscious, and if so what rights should we give it?
    • I didn’t quite catch Demis’s response to this, as it was quite quick and there was a general laugh about these types of questions coming up.
  • How exactly is the brain Bayesian? Does it follow a predictive coding, approximate, or variational Bayesian implementation?
    • Here there was some interesting discussion from all sides, with Karl saying it may actually be a combination of these methods, or via approximations we don’t yet understand. There was a lot of discussion about why DeepMind doesn’t implement a Bayesian scheme in their networks, and it was revealed that this is because hierarchical Bayesian inference is currently too computationally demanding for such applications. Karl picked up on this point to say that the same is true of the human brain; the FEP outlines some general principles, but we are still far from understanding how the brain actually approximates Bayesian inference.
  • Can conscious beliefs, or decisions in the way we typically think of them, be thought of in a probabilistic way?
    • Karl: ‘Yes’
    • Holton: Less sure
    • Panel: this may call for multiple models, binary vs discrete, etc
    • Karl redux: isn’t it interesting how we are now increasingly reshaping the world to better fit our predictions, e.g. using external tools in place of memory, navigation, planning, etc. (i.e. extended cognition)
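On the first question above (optimality relative to priors), here is a toy sketch of my own, not anything presented at the event: two observers applying exact Bayesian updating to the same coin-flip data, but starting from different Beta priors, arrive at different posteriors, so each is “optimal” only relative to its own prior.

```python
# Beta-Bernoulli conjugate updating: a Beta(a, b) prior observing
# `heads` heads and `tails` tails yields a Beta(a+heads, b+tails) posterior.

def posterior_mean(a, b, heads, tails):
    """Posterior mean belief in 'heads' after observing the flips."""
    return (a + heads) / (a + b + heads + tails)

data = (7, 3)  # 7 heads, 3 tails

flat = posterior_mean(1, 1, *data)     # weak, uninformative prior
biased = posterior_mean(2, 20, *data)  # strong prior belief in tails

print(round(flat, 3))    # 0.667 -- follows the data
print(round(biased, 3))  # 0.281 -- still dominated by the prior
```

Both observers performed the same (exact) inference; only the priors differ, which is the sense in which “optimal” becomes organism-relative.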
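The point about overly precise beliefs resisting contrary evidence can be illustrated the same way (again, my own toy example rather than anything from the panel): in Gaussian belief updating, the posterior mean is a precision-weighted average of prior and observation, so a hyper-precise prior barely budges no matter what the evidence says.

```python
# Gaussian belief updating: the posterior mean is a precision-weighted
# average of the prior mean and the observation.

def update(prior_mean, prior_prec, obs, obs_prec):
    post_prec = prior_prec + obs_prec
    post_mean = (prior_prec * prior_mean + obs_prec * obs) / post_prec
    return post_mean, post_prec

# A moderately confident belief shifts halfway toward contrary evidence...
m, _ = update(prior_mean=0.0, prior_prec=1.0, obs=10.0, obs_prec=1.0)
print(m)  # 5.0

# ...but an overly precise belief barely moves at all.
m, _ = update(prior_mean=0.0, prior_prec=1000.0, obs=10.0, obs_prec=1.0)
print(round(m, 3))  # 0.01
</imports>```

This is one way to read the “over-fitting”/delusion discussion: nothing in the update rule is broken, but a pathologically high prior precision makes the belief effectively immune to evidence.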
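And on the question of how the brain might approximate Bayesian inference, here is a minimal predictive-coding-style sketch (my own illustration of the general gradient-descent-on-prediction-error scheme, not any specific model discussed at the event): a latent estimate is nudged down the gradient of precision-weighted prediction error until it settles at the compromise between prior and data.

```python
# One 'layer': a latent estimate mu predicts the sensory input directly.
# mu descends the gradient of squared prediction error, with each error
# term weighted by its precision (pi).

def infer(obs, prior_mean, pi_prior=1.0, pi_obs=4.0, lr=0.05, steps=200):
    mu = prior_mean
    for _ in range(steps):
        eps_obs = obs - mu           # sensory prediction error
        eps_prior = mu - prior_mean  # deviation from the prior
        mu += lr * (pi_obs * eps_obs - pi_prior * eps_prior)
    return mu

# Converges to the precision-weighted average
# (pi_obs*obs + pi_prior*prior) / (pi_obs + pi_prior) = 1.6
print(round(infer(obs=2.0, prior_mean=0.0), 2))  # 1.6
```

The appeal of schemes like this is that the iterative, local error-correction dynamics look more neurally plausible than computing the exact posterior in one step, while reaching the same answer in this simple linear-Gaussian case.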

There were other small bits of discussion, particularly concerning what it means for an agent to be optimal or not, and the relation of explicit/conscious states to a subpersonal Bayesian brain, but I’m afraid I can’t recall them in enough detail to accurately report them. Overall the discussion was interesting and lively, and I presume there will be some strong opinions about some of these. There was also a nice moment where Karl repeatedly said that the future of neuroscience was extended and enactive cognition. Some of the discussion between the panelists was quite interesting, particularly Paul’s views on mental disorders and Demis talking about why the brain might engage in long-term predictions and imagination (because collecting real data is expensive/dangerous).

Please write in the comments if I missed anything. I’d love to hear what everyone thinks about these. I’ve got my opinions particularly about the falsification question, but I’ll let others discuss before stating them.

7 thoughts on “UPDATED WITH ANSWERS – summary of the major questions [and answers] asked at #LSEbrain about the Bayesian Brain Hypothesis”

  1. Micah, I’m impressed by your speed and memory; I’m planning a post on the thoughts that this event inspired, and this summary of yours will be very helpful for me, so thanks!
    On the “consciousness & rights of a proper learning machine” question, I think that De Martino simply rejected the question as it was outside the scope of the event. [I don’t recall any additional snippet on this either]

    I have some recollections on “the relation of explicit/conscious states to a subpersonal Bayesian brain”; they come with the usual warning about my faulty memory, with the additional problem that it’s quite possible that I may be inadvertently inserting some of my own thoughts.

    Anyway, IIRC, one way this topic was addressed comes from the first question you list: how come we make so many “mistakes” if we are supposed to be “optimal” Bayesian machines?
    Karl said that the key consideration is that as information travels across the layers [My addition: we assume here that conscious activity happens during the last steps], a lot of detail is lost. [My addition: this is the whole point of the machinery, you want to extract what is meaningful/important/relevant and discard the sheer mass of unsurprising data]
    The discrepancy may arise from over-simplifying (removing too much information). I think this hinted at the interesting observation that a tennis player may be monstrously accurate but still have no clue how exactly he manages to “command” the ball. In other words (and here I’m re-elaborating what I understood), each layer may be optimally modelling the “level of abstraction” it is concerned with, but whether our “conscious” decisions are also optimal ultimately depends on what information is extracted at each layer and thus what finally reaches conscious processing. In other words, there is an across-layers optimality problem that is not necessarily solved by having each layer act in a locally optimal way.
    This then leads to my additional thoughts, but I guess a separate post will be more appropriate (it would be too long here, and I need some time to organise and double-check my thinking).

    As far as general comments go, I’m with you 100%: the event didn’t offer much for those already familiar with Bayesian brain/predictive coding/FEP ideas, but it was still enjoyable and the discussion was lively and entertaining. Plus, for me personally, it was a good occasion to allow some basic intuitions to take a more definite (and communicable) form, so not a waste of time in any way. Also: seeing a queue to get into such an event, and the number of people in the room (absolutely packed), was a pleasure in itself.

  2. What a great event – superb panel, really insightful discussion. Nice summary Micah.

    I thought I had a great idea there with the brain being able to manipulate the world to fit the model, rather than always updating the model to fit sensory inputs. Turns out Karl (and many others as well) have had this as a central part of their theory of the brain all along!

    Still, I do find the idea that your brain can initiate motor plans, such as moving a limb, simply by ‘modelling’ or imagining what the world would be like having made that movement, a really cool one. I guess this can be abstracted to higher order plans too… in fact, as I type, I think my brain may be modelling a world in which I become a cognitive neuroscience researcher!

    With regard to how we can have brains which work in a Bayesian way, yet appear to be so far from ‘Bayes-optimal’ at a behavioural level, I liked Demis’s idea that this may be due to limitations in computational power – our brains apply heuristics because they need to work in real time. We could imagine a being which was able to make much more rational, Bayes-optimal decisions, but it would probably be awfully slow! (cf. Kahneman’s ‘Thinking, Fast and Slow’)

    I think Demis also mentioned that he doesn’t think over-fitting is a problem for our brains because they don’t take on too much data… or was it that over-fitting isn’t a problem for his deep network based AI because there isn’t too much data? (Although I’d have thought over-fitting becomes even more of a problem if your dataset is small)
