OK, here are the answers! I meant to release them last night but was a bit delayed by sleep 🙂
OK, it is about 10pm here and I've got an HBM abstract to submit, but given that the LSE wasn't able to share the podcast, I'm just going to quickly summarize some of the major questions brought up by either the speakers or the audience during the event.
For those who don't know, the LSE hosted a brief event tonight exploring the question "Is the brain a predictive machine?", with panelists Paul Fletcher, Karl Friston, Demis Hassabis, and Richard Holton, chaired by Benedetto De Martino. I enjoyed the event, as it was about the right length and the discussion was lively. For those familiar with the Bayesian brain/predictive coding/FEP there wasn't much new information, but it was cool to see an outside audience react.
These were the principal questions that came up in the course of the event. Keep in mind these are just reproduced from my (fallible) memory:
- What does it mean if someone acts, thinks, or otherwise behaves irrationally/non-optimally? Can their brain still be Bayesian at a sub-personal level?
- There were a variety of answers to this question, the most basic being that optimal behavior depends on one's priors, so someone with a mental disorder or poor behavior may be acting optimally with respect to their priors. Karl pointed out that this means optimal behavior really is different for every organism and person, rendering the notion of optimality trivial.
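Karl's point can be made concrete with a toy example (mine, not from the talk, with illustrative numbers): two agents observe the same coin flips but start from different Beta priors, so each one's posterior belief, and hence its "optimal" bet, differs, even though both are doing exact Bayesian inference.

```python
# Toy sketch: "optimal" behaviour is relative to the agent's priors.
# Two agents see identical data but hold different Beta priors over
# P(heads); each bets optimally given its OWN posterior.

def posterior_mean(prior_heads, prior_tails, heads, tails):
    """Posterior mean of P(heads) under a Beta(prior_heads, prior_tails) prior."""
    return (prior_heads + heads) / (prior_heads + prior_tails + heads + tails)

data = dict(heads=3, tails=7)            # shared observations

agent_a = posterior_mean(1, 1, **data)   # flat prior -> follows the data
agent_b = posterior_mean(50, 10, **data) # strong prior belief in heads

bet_a = "heads" if agent_a > 0.5 else "tails"  # -> "tails"
bet_b = "heads" if agent_b > 0.5 else "tails"  # -> "heads"
```

Both agents are Bayes-optimal relative to their own priors, yet they act differently on the same evidence, which is exactly why "optimal" alone does little explanatory work.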
- Instead of changing the model, is it possible for the brain to change the world so it fits with our model of it?
- Yes, Karl calls this active inference, and it is a central part of his formulation of the Bayesian brain. Active inference allows you to either re-sample or adjust the world such that it fits with your model, and brings a kind of strong embodiment to the Bayesian brain, because the kinds of actions (and perceptions) one can engage in are shaped by the body and internal states.
- Where do the priors come from?
- Again the answer from Karl: evolution. According to the FEP, organisms that survive do so in virtue of their ability to minimize free energy (prediction error). This means that for Karl, evolution "just is the refinement and inheritance of our models of the world"; our brains reflect the structure of the world, which is then passed on through natural selection and epigenetic mechanisms.
- Is the theory falsifiable and if so, what kind of data would disprove it?
- From Karl – ‘No. The theory is not falsifiable in the same sense that Natural Selection is not falsifiable’. At this there were some roars from the crowd and philosopher Richard Holton was asked how he felt about this statement. Richard said he would be very hesitant to endorse a theory that claimed to be non-falsifiable.
- Is it possible for the brain to ‘over-fit’ the world/sensory data?
- Yes, from Paul we heard that this is a good description of what happens in psychotic and other mental disorders, where an overly precise belief might resist any attempt to dislodge it, or any evidence to the contrary. This led back into more discussion of what it means for an organism to behave in a way that is not "objectively optimal".
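Paul's "overly precise belief" has a simple formal reading, which I'll sketch here (my own illustration, not from the panel). In a conjugate Gaussian update the posterior mean is a precision-weighted average of prior and evidence, so a belief held with extreme precision barely moves no matter how contradictory the data are:

```python
# Toy sketch: why an overly precise prior resists contrary evidence.
# Posterior mean = precision-weighted average of prior mean and observation.

def gaussian_update(prior_mean, prior_prec, obs_mean, obs_prec):
    """Posterior mean/precision for a Gaussian prior and Gaussian likelihood."""
    post_prec = prior_prec + obs_prec
    post_mean = (prior_prec * prior_mean + obs_prec * obs_mean) / post_prec
    return post_mean, post_prec

evidence = dict(obs_mean=0.0, obs_prec=1.0)  # data flatly contradict the belief

# Same prior belief (mean 10), very different precisions:
flexible, _ = gaussian_update(prior_mean=10.0, prior_prec=1.0, **evidence)
rigid, _ = gaussian_update(prior_mean=10.0, prior_prec=1000.0, **evidence)
```

The flexible agent's belief moves halfway toward the evidence (to 5.0), while the rigid agent's posterior stays at roughly 9.99, a cartoon of a delusional belief that "resists any attempt to dislodge it".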
- If we could make a Bayesian deep learning machine would it be conscious, and if so what rights should we give it?
- I didn't quite catch Demis's response to this, as it was quite quick and there was a general laugh about these types of questions coming up.
- How exactly is the brain Bayesian? Does it follow a predictive coding, approximate, or variational Bayesian implementation?
- Here there was some interesting discussion from all sides, with Karl saying it may actually be a combination of these methods, or via approximations we don't yet understand. There was a lot of discussion about why DeepMind doesn't implement a Bayesian scheme in their networks, and it was revealed that this is because hierarchical Bayesian inference is currently too computationally demanding for such applications. Karl picked up on this point to say that the same is true of the human brain; the FEP outlines some general principles, but we are still far from understanding how the brain actually approximates Bayesian inference.
- Can conscious beliefs, or decisions in the way we typically think of them, be thought of in a probabilistic way?
- Karl: ‘Yes’
- Holton: Less sure
- Panel: this may call for multiple models, binary vs discrete, etc
- Karl redux: isn't it interesting how we are now increasingly reshaping the world to better fit our predictions, i.e. using external tools in place of memory, navigation, planning, etc. (i.e. extended cognition)
There were other small bits of discussion, particularly concerning what it means for an agent to be optimal or not, and the relation of explicit/conscious states to a sub-personal Bayesian brain, but I'm afraid I can't recall them in enough detail to accurately report them. Overall the discussion was interesting and lively, and I presume there will be some strong opinions about some of these points. There was also a nice moment where Karl repeatedly said that the future of neuroscience was extended and enactive cognition. Some of the discussion between the panelists was quite interesting, particularly Paul's views on mental disorders and Demis talking about why the brain might engage in long-term prediction and imagination (because collecting real data is expensive/dangerous).
Please write in the comments if I missed anything. I’d love to hear what everyone thinks about these. I’ve got my opinions particularly about the falsification question, but I’ll let others discuss before stating them.