The latest thoughts, musings, and data in cognitive science and neuroscience.

Depressing Quotes on Science Overflow – Reputation is the Gateway to Scientific Success

If you haven’t done so yet, go read this new eLife paper on scientific overflow now. The authors interviewed “20 prominent principal investigators in the US, each with between 20 and 60 years of experience of basic biomedical research”, asking questions about how they view and deal with the exponential increase in scientific publications:

Our questions were grouped into four sections: (1) Have the scientists interviewed observed a decrease in the trustworthiness of science in their professional community and, if so, what are the main factors contributing to these perceptions? (2) How do the increasing concerns about the lack of robustness of scientific research affect trust in research? (3) What concerns do scientists have about science as a system? (4) What steps can be taken to ensure the trustworthiness of scientific research?

Some of the answers offer a strikingly sad view of the current state of the union:

On new open access journals, databases, etc:

There’s this proliferation of journals, a huge number of journals… and I tend not even to pay much attention to the work in some of these journals. (…) And you’re always asked to be an editor of some new journal. (…) I don’t pay much attention to them.

On the role of reputation in assessing scientific rigor and quality:

There are some people that I know to be really rigorous scientists whose work is consistently well done (…). If a paper came from a certain lab then I’m more likely to believe it than another paper that might have come from a different lab whose (…) head might be somebody that I know tends to cut corners, over-blows their conclusions, doesn’t do rigorous experiments, doesn’t appreciate the value of proper controls.

If I know that there’s a very well established laboratory with a great body of substantiated work behind it I think there is a human part of me that is inclined to expect that past quality will always be predicting future quality I think it’s a normal human thing. I try not to let that knee–jerk reaction be too strong though.

If I don’t know the authors then I will have to look more carefully at the data and (…) evaluate whether (…) I feel that the experiments were done the way I would have done them and whether there were some, if there are glaring omissions that then cast out the results (…) I mean [if] I don’t know anything I’ve never met the person or I don’t know their background, I don’t know where they trained (…) I’ve never had a discussion with them about science so I’ve never had an opportunity to gauge their level of rigour…

Another interviewee expressed scepticism about the rapid proliferation of new journals:

The journal that [a paper] is published in does make a difference to me, … I’m talking about (…) an open access journal that was started one year ago… along with five hundred other journals, (…) literally five hundred other journals, and that’s where it’s published, I have doubts about the quality of the peer review.

The cancer eating away at science is plain to see. If you don’t know the right people, your science will be viewed less favorably. If you don’t publish in the right journals, I’m not going to trust your science. It’s a massive self-feeding circle of power. The big rich labs will continue to get bigger and richer, as their papers and grant applications are treated preferentially. This massive mess of heuristic biases is turning academia into a straight-up pyramid scheme. Of course, this is but a small sub-sample of the scientific community, but I can’t help feeling that these views represent a widespread opinion among the ‘old guard’ of science. Anecdotally, these comments certainly mirror some of my own experiences. I’m curious to hear what others think.

The Bayesian Reproducibility Project


Fantastic post by Alexander Etz (@AlxEtz), which uses a Bayes Factor approach to summarise the results of the reproducibility project. Not only a great way to get a handle on those data but also a great introduction to Bayes Factors in general!
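If you're new to Bayes factors, here's a minimal sketch of the flavour of evidence measure the post uses – a default (JZS-prior) Bayes factor for a two-sample t-test. I'm assuming the pingouin package here, and the numbers are invented for illustration, not taken from the replication data:

```python
# Hedged sketch: default (JZS-prior) Bayes factor for a two-sample t-test.
# 't_stat', 'n1', and 'n2' are invented illustration values.
from pingouin import bayesfactor_ttest

t_stat, n1, n2 = 2.2, 30, 30                # hypothetical t and group sizes
bf10 = bayesfactor_ttest(t_stat, n1, n2)    # evidence for H1 relative to H0
print(f"BF10 = {bf10:.2f}")                 # >1 favours an effect, <1 the null
```

Unlike a lone p-value, a Bayes factor can also quantify evidence in favour of the null – which is exactly what makes it handy for interpreting replications.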

Originally posted on The Etz-Files:

The Reproducibility Project was finally published this week in Science, and an outpouring of media articles followed. Headlines included “More Than 50% Psychology Studies Are Questionable: Study”, “Scientists Replicated 100 Psychology Studies, and Fewer Than Half Got the Same Results”, and “More than half of psychology papers are not reproducible”.

Are these categorical conclusions warranted? If you look at the paper, it makes very clear that the results do not definitively establish effects as true or false:

After this intensive effort to reproduce a sample of published psychological findings, how many of the effects have we established are true? Zero. And how many of the effects have we established are false? Zero. Is this a limitation of the project design? No. It is the reality of doing science, even if it is not appreciated in daily practice. (p. 7)

Very well said. The point of this project was not…


A walk in the park increases poor research practices and decreases reviewer critical thinking

Or so I’m going to claim, because science is basically about making up whatever qualitative opinion you like and hard-selling it to a high-impact journal, right? Last night a paper appeared in PNAS early access entitled “Nature experience reduces rumination and subgenual prefrontal cortex activation”, as a contributed submission. Like many of you, I immediately felt my neurocringe brain area explode with activity as I began to smell the sickly sweet scent of gimmickry. Now, I don’t have a lot of time, so I was worried I wouldn’t be able to cover this paper in any detail. But not to worry, because the entire paper is literally two ANOVAs!

Don't think about it too much.

Look guys, we’re headed to PNAS! No, no, leave the critical thinking skills, we won’t be needing those where we’re going!

The paper begins with a lofty appeal to our naturalistic sensibilities: we’re increasingly living in urban areas, this trend is associated with poor mental health outcomes, and, by golly-gee, shouldn’t we have a look at the brain to figure this all out? The authors set about testing their hypothesis by sending 19 people out into either the remote wilderness of the Stanford University campus or an urban setting:

The nature walk took place in a greenspace near Stanford University spanning an area ∼60 m northwest of Junipero Serra Boulevard and extending away from the street in a 5.3-km loop, including a significant stretch that is far (>1 km) from the sounds and sights of the surrounding residential area. As one proxy for urbanicity, we measured the proportion of impervious surface (e.g., asphalt, buildings, sidewalks) within 50 m of the center of the walking path (Fig. S4). Ten percent of the area within 50 m of the center of the path comprised of impervious surface (primarily of the asphalt path). Cumulative elevation gain of this walk was 155 m. The natural environment of the greenspace comprises open California grassland with scattered oaks and native shrubs, abundant birds, and occasional mammals (ground squirrels and deer). Views include neighboring, scenic hills, and distant views of the San Francisco Bay, and the southern portion of the Bay Area (including Palo Alto and Mountain View to the south, and Menlo Park and Atherton to the north). No automobiles, bicycles, or dogs are permitted on the path through the greenspace.

Wow, where can I sign up for this truly Kerouac-inspired bliss? The control group, on the other hand, had to survive the horrors of the Palo Alto urban wasteland:

The urban walk took place on the busiest thoroughfare in nearby Palo Alto (El Camino Real), a street with three to four lanes in each direction and a steady stream of traffic. Participants were instructed to walk down one side of the street in a southeasterly direction for 2.65 km, before turning around at a specific point marked on a map. This spot was chosen as the midpoint of the walk for the urban walk to match the nature walk with respect to total distance and exercise. Participants were instructed to cross the street at a pedestrian crosswalk/stoplight, and return on the other side of the street (to simulate the loop component of the nature walk and greatly reduce repeated encounters with the same environmental stimuli on the return portion of the walk), for a total distance of 5.3 km; 76% of the area within 50 m of the center of this section of El Camino was comprised of impervious surfaces (of roads and buildings) (Fig. S4). Cumulative elevation gain of this walk was 4 m. This stretch of road consists of a significant amount of noise from passing cars. Buildings are almost entirely single- to double-story units, primarily businesses (fast food establishments, cell phone stores, motels, etc.). Participants were instructed to remain on the sidewalk bordering the busy street and not to enter any buildings. Although this was the most urban area we could select for a walk that was a similar distance from the MRI facility as the nature walk, scattered trees were present on both sides of El Camino Real. Thus, our effects may represent a conservative estimate of effects of nature experience, as our urban group’s experience was not devoid of natural elements.

And they got that approved by the local ethics board? The horror!

The authors gave both groups a self-reported rumination questionnaire before and after the walk, and also acquired some arterial spin labeling MRIs. Here is where the real fun gets started – and basically ends – as the paper is almost entirely composed of group-by-time ANOVAs on these two measures. I wish I could say I was surprised by what I found in the results:

That’s right folks – the key behavioral interaction of the paper is non-significant. Measly. Minuscule. Forget about p-values for a second and consider the gall it takes not only to completely skim over this fact (nowhere in the paper is it mentioned) and head right for the delicious t-tests, but to egregiously promote this ‘finding’ in the title, abstract, and discussion as showing evidence for an effect of nature on rumination! Erroneous interactions for the win, at least with PNAS contributed submissions, right?! The authors also analyzed the brain data in the same way – this time actually sticking with their NHST – and found that some brain area previously related to some bad stuff showed reduced activity. And that – besides heart rate and respiration control analyses – is it. No correlations with the (non-significant) behavior. Just pure and simple reverse inference piled on top of a fallacious interpretation of a non-significant interaction. Never mind the wonky and poorly operationalized research question!
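For the statistically curious, here's a minimal sketch of why this matters (Python, with invented toy numbers rather than the paper's data). In a 2 (group) × 2 (time) design, the group-by-time interaction is equivalent to an independent-samples t-test on the pre-to-post change scores, and the within-group t-tests can look impressive even when that interaction test fails:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Invented toy data: rumination scores before/after the walk, 19 per group
# (illustrative numbers only, not the paper's data)
nature_pre = rng.normal(35, 5, 19)
nature_post = nature_pre - rng.normal(1.5, 3.0, 19)  # mean drop of 1.5
urban_pre = rng.normal(35, 5, 19)
urban_post = urban_pre - rng.normal(1.0, 3.0, 19)    # mean drop of 1.0

# The group x time interaction in a 2x2 mixed ANOVA is equivalent to an
# independent-samples t-test on the change scores:
t, p = stats.ttest_ind(nature_post - nature_pre, urban_post - urban_pre)
print(f"group x time interaction: t = {t:.2f}, p = {p:.3f}")

# Each within-group paired t-test can be 'significant' even so -- but only
# the interaction licenses the claim that nature reduced rumination MORE:
print(stats.ttest_rel(nature_pre, nature_post))
print(stats.ttest_rel(urban_pre, urban_post))
```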

See folks, high impact science is easy! Just have friends in the National Academy…

I’ll leave you with this gem from the methods:

“One participant was eliminated in analysis of self-reported rumination due to a decrease in rumination after nature experience that was 3 SDs below the mean.”

That dude REALLY got his time’s worth from the walk. Or did the researchers maybe forget to check if anyone smoked a joint during their nature walk?

Are we watching a paradigm shift? 7 hot trends in cognitive neuroscience according to me


In the spirit of procrastination, here is a random list I made up of things that seem to be trending in cognitive neuroscience right now, with a quick description of each. These are purely pulled from the depths of speculation, so please do feel free to disagree. Most of these are not actually new concepts; it’s more about the way they are being used that makes them trendy areas.

Oscillations


Obviously oscillations have been around for a long time, but the rapid increase in technological sophistication of direct recordings (see for example high-density cortical arrays and deep brain stimulation + recording), coupled with the greater availability of MEG (plus rapid advances in MEG source reconstruction and analysis techniques), has placed large-scale neural oscillations at the forefront of cognitive neuroscience. Understanding how different frequency bands interact (e.g. phase coupling) has become a core topic of research in areas ranging from conscious awareness to memory and navigation.
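To make 'phase coupling' concrete, here's a minimal sketch of the phase-locking value (PLV), one standard measure of phase coupling between two channels within a frequency band. The signals below are synthetic, not real MEG data:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def plv(x, y, lo, hi, fs):
    """Phase-locking value: 1 = perfectly constant phase difference, 0 = none."""
    phx = np.angle(hilbert(bandpass(x, lo, hi, fs)))
    phy = np.angle(hilbert(bandpass(y, lo, hi, fs)))
    return np.abs(np.mean(np.exp(1j * (phx - phy))))

fs = 250.0
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)
alpha = np.sin(2 * np.pi * 10 * t)                     # shared 10 Hz rhythm
ch1 = alpha + rng.standard_normal(t.size)
ch2 = np.roll(alpha, 5) + rng.standard_normal(t.size)  # fixed phase lag
print(plv(ch1, ch2, 8, 12, fs))                        # high -> phase coupled
```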

Complex systems, dynamics, and emergence

Again, a concept as old as neuroscience itself, but this one seems to be piggy-backing on several trends towards a new resurgence. As neuroscience grows bored of blobology, and our analysis methods move increasingly towards modelling dynamical interactions (see above) and complex networks, our explanatory metaphors more frequently emphasize brain dynamics and emergent causation. This is a clear departure from the boxological approach that was so prevalent in the ’80s and ’90s.

Direct intervention and causal inference

Pseudo-invasive techniques like transcranial direct-current stimulation are on the rise, partially because they allow us to perform virtual-lesion studies in ways not previously possible. Likewise, the exponential growth of neurobiological and genetic techniques has ushered in the era of optogenetics, which allows direct manipulation of information processing at the single-neuron level. Might this trend also reflect increased dissatisfaction with the correlational approaches that defined the last decade? You could also include the steadily increasing interest in pharmacological neuroimaging under this category.

Computational modelling and reinforcement learning

With the hype surrounding Google’s £200 million acquisition of DeepMind, and the recent Nobel Prize awarded for the discovery of grid cells, computational approaches to neuroscience are hotter than ever. Hardly a day goes by without a reinforcement learning or similar paper being published in a glossy high-impact journal. This trend takes many forms, but it is undeniable that model-based approaches to cognitive neuroscience are all the rage. There is also a clear surge of interest in the Bayesian Brain approach, which could almost have its own bullet point. But that would be too self-serving ;)
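For a flavour of what the simplest of these models looks like, here's a minimal Rescorla-Wagner-style sketch (the learning rate and reward probability are invented for illustration). The trial-wise prediction errors it produces are exactly the kind of regressor that model-based fMRI studies correlate with BOLD:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.1    # learning rate (invented for illustration)
value = 0.0    # current expected reward

for trial in range(200):
    reward = rng.binomial(1, 0.7)         # reward delivered on 70% of trials
    prediction_error = reward - value     # the signal regressed against BOLD
    value += alpha * prediction_error     # Rescorla-Wagner update

print(round(value, 2))                    # converges toward 0.7
```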

Gain control

Gain control is a very basic mechanism found throughout the central nervous system. It can be understood as the neuromodulatory weighting of post-synaptic excitability, and is thought to play a critical role in contextualizing neural processing. Gain control might, for example, allow a neuron that usually encodes a positive prediction error to ‘flip’ its sign to encode negative prediction error under a certain context. Gain is thought to be regulated via the global interaction of neuromodulators (e.g. dopamine, acetylcholine), and it links basic information-theoretic processes with neurobiology. This makes it a particularly desirable tool for understanding everything from perceptual decision making to basic learning and the stabilization of oscillatory dynamics. Gain control thus links computational, biological, and systems-level work, and is likely to continue to attract a lot of attention in the near future.
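One cartoonish way to picture that sign flip (a toy rate model under my own invented parameterization, not a biophysical claim): feed the prediction error to a unit through an excitatory and a sign-inverting inhibitory pathway, and let a neuromodulatory gain re-weight the two pathways' post-synaptic efficacy:

```python
def unit_response(pe, gain_exc, gain_inh):
    """Toy rate unit: gain terms weight an excitatory and a sign-inverting
    inhibitory copy of the prediction error; output is a rectified rate."""
    drive = gain_exc * pe + gain_inh * (-pe)
    return max(drive, 0.0)

pe = 1.5
# Context A: excitatory pathway dominant -> the unit encodes positive PE
print(unit_response(+pe, gain_exc=1.0, gain_inh=0.1))   # fires (1.35)
print(unit_response(-pe, gain_exc=1.0, gain_inh=0.1))   # silent (0.0)
# Context B: neuromodulation (e.g. dopamine) re-weights the gains -> sign flips
print(unit_response(+pe, gain_exc=0.1, gain_inh=1.0))   # silent (0.0)
print(unit_response(-pe, gain_exc=0.1, gain_inh=1.0))   # fires (1.35)
```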

Hierarchies that are not really hierarchies

Neuroscience loves its hierarchies. For example, the Van Essen model of how visual feature detection proceeds through a hierarchy of increasingly abstract functional processes is one of the core explanatory tools used to understand vision in the brain. Currently, however, there is a great deal of connectomic and functional work pointing out interesting ways in which global or feedback connections can re-route and modulate processing from the ‘top’ directly to the ‘bottom’, or vice versa. It’s worth noting that this trend doesn’t do away with the old notion of hierarchies; it just renders them a bit more complex and circular. Put another way, it is currently quite trendy to show that ‘the top is the bottom’ and ‘the bottom is the top’. This partially relates to the increased emphasis on emergence and complexity discussed above. A related trend is the extension of what counts as the ‘bottom’, with low-level subcortical or even first-order peripheral neurons suddenly being ascribed complex abilities typically reserved for cortical processes.

Primary sensations that are not so primary

Closely related to the previous point, there is a clear trend in the perceptual sciences towards an increasingly liberal view of just how ‘primary’ the primary sensory areas really are. I saw this first-hand at last year’s Vision Sciences Society meeting, which featured at least a dozen posters showing how one could decode tactile shape from V1, or visual frequency from A1, and so on. Again, this is probably related to the overall movement towards complexity and connectionism; as we lose our reliance on modularity, we’re suddenly open to a much more general role for core sensory areas.
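The logic behind those posters is worth a quick sketch. Here's a toy cross-validated decoding analysis on simulated 'voxel' patterns (all numbers invented); the real studies do the same thing with ROI patterns extracted from V1 or A1:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(7)
n_trials, n_voxels = 80, 50
labels = np.repeat([0, 1], n_trials // 2)      # two 'tactile shape' conditions
pattern = 0.5 * rng.standard_normal(n_voxels)  # condition-specific pattern

X = rng.standard_normal((n_trials, n_voxels))  # noisy 'V1' trial patterns
X[labels == 1] += pattern                      # embed the signal in class 1

# If held-out accuracy beats 50% chance, the ROI carries condition information
acc = cross_val_score(LinearSVC(max_iter=10_000), X, labels, cv=5).mean()
print(f"cross-validated decoding accuracy: {acc:.2f}")
```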

Interestingly, I didn’t include things like multi-modal or high-resolution imaging, as I think they are still emerging and have not quite fully arrived yet. But some of these – computational and connectomic modelling, for example – are clearly part and parcel of the contemporary zeitgeist. It’s also very interesting to look over this list, as there seems to be a clear trend towards complexity, connectionism, and dynamics. Are we witnessing a paradigm shift in the making? Or have we just forgotten all our first principles and started mangling any old thing we can get published? If it is a shift, what should we call it? Something like ‘computational connectionism’ comes to mind. Please feel free to add points or discuss in the comments!

Short post – my science fiction vision of how science could work in the future

Sadly I missed the recent #isScienceBroken event at UCL, which from all reports was a smashing success. At the moment I’m just terribly focused on finishing up a series of intensive behavioral studies, plus (as always) minimizing my free energy, so it just wasn’t possible to make it. Still, a few people were interested to hear my take on things. I’m not one to try to commentate on an event I wasn’t at, so instead I’ll just wax poetic for a moment about the kind of Science future I’d like to live in. Note that this has all basically been written down in my self-published article on the subject, but it might bear a re-hash, as it’s fun to think about. As before, this is mostly adapted from Clay Shirky’s sci-fi vision of a totally autonomous and self-organizing science.

Science – OF THE FUTURE!

Our scene opens in the not-too-distant future, say the year 2030. A gradual but steady trend towards self-publication has led to the emergence of a new dominant research culture, wherein the vast majority of data first appear as self-archived digital manuscripts containing data, code, and descriptive-yet-conservative interpretations on centrally maintained, publicly supported research archives, prior to publication in traditional journals. These data would be subject to fully open pre- and post-publication peer review focused solely on the technical merit and clarity of the paper.

Having published your data in a totally standardized and transparent format, you would then go on to write something more similar to what we currently formulate for high-impact journals. Short, punchy, light on gory data details and heavy on fantastical interpretations. This would be your space to really sell what you think makes those data great – or to defend them against a firestorm of critical community comments. These pieces would be submitted to journals like Nature and Science, which would have the strictly editorial role of evaluating cohesiveness, general interest, novelty, and so on. In some cases, those journals and similar entities (for example, autonomous high-reputation peer-reviewing cliques) would actively solicit authors to submit such papers based on the buzz (good or bad) that their archived data had already generated. In principle, multiple publishers could solicit submissions from the same buzzworthy data, effectively competing to have your paper in their journal. In this model, publishers must actively seek out the most interesting papers, fulfilling their current editorial role without jeopardizing crucial quality-control mechanisms.

Is this crazy? Maybe. To be honest, I see some version of this story as almost inevitable. The key bits and players may change, but I truly believe a ‘push-to-repo’ style of science is an inevitable future. The key is to realize that even journals like Nature and Science play an important, if lauded, role, taking on editorial risk to highlight the sexiest (and least probable) findings. The real question is who will become the key players in shaping our new information economy. Will today’s major publishers die as Blockbuster did – too tied into their own profit schemes to mobilize – or will they be Netflix, adapting to the beat of progress? By segregating the quality and impact functions of publication, we’ll ultimately arrive at a far more efficient and effective science. The question is how, and when.

Note: feel free to point out in the comments examples of how this is already becoming the case (some are already doing this). 30 years is a really, really conservative estimate :)

UPDATED WITH ANSWERS – summary of the major questions [and answers] asked at #LSEbrain about the Bayesian Brain Hypothesis

OK, here are the answers! I meant to release them last night but was a bit delayed by sleep :)

OK, it is about 10pm here and I’ve got an HBM abstract to submit, but given that the LSE wasn’t able to share the podcast, I’m just going to quickly summarize some of the major questions brought up either by the speakers or the audience during the event.

For those who don’t know, the LSE hosted a brief event tonight exploring the question “is the brain a predictive machine?”, with panelists Paul Fletcher, Karl Friston, Demis Hassabis, and Richard Holton, chaired by Benedetto De Martino. I enjoyed the event, as it was about the right length and the discussion was lively. For those familiar with the Bayesian brain/predictive coding/FEP there wasn’t much new information, but it was cool to see an outside audience react.

These were the principal questions that came up in the course of the event. Keep in mind these are just reproduced from my (fallible) memory:

  • What does it mean if someone acts, thinks, or otherwise behaves irrationally/non-optimally? Can their brain still be Bayesian at a sub-personal level?
    • There were a variety of answers to this question, the most basic being that optimal behavior depends on one’s priors, so someone with a mental disorder or poor behavior may be acting optimally with respect to their priors. Karl pointed out that this means optimal behavior really is different for every organism and person, rendering the notion of optimality trivial.
  • Instead of changing the model, is it possible for the brain to change the world so it fits with our model of it?
    • Yes – Karl calls this active inference, and it is a central part of his formulation of the Bayesian brain. Active inference allows you to either re-sample or adjust the world so that it fits with your model, and it brings a kind of strong embodiment to the Bayesian brain, because the kinds of actions (and perceptions) one can engage in are shaped by the body and internal states.
  • Where do the priors come from?
    • Again the answer from Karl: evolution. According to the FEP, organisms that survive do so in virtue of their ability to minimize free energy (prediction error). This means that for Karl, evolution ‘just is the refinement and inheritance of our models of the world’; our brains reflect the structure of the world, which is then passed on through natural selection and epigenetic mechanisms.
  • Is the theory falsifiable and if so, what kind of data would disprove it?
    • From Karl: ‘No. The theory is not falsifiable in the same sense that Natural Selection is not falsifiable.’ At this there were some roars from the crowd, and philosopher Richard Holton was asked how he felt about this statement. Richard said he would be very hesitant to endorse a theory that claimed to be non-falsifiable.
  • Is it possible for the brain to ‘over-fit’ the world/sensory data?
    • Yes – from Paul we heard that this is a good description of what happens in psychotic and other mental disorders, where an overly precise belief might resist any attempt to dislodge it, or any evidence to the contrary. This led back into more discussion of what it means for an organism to behave in a way that is not ‘objectively optimal’.
  • If we could make a Bayesian deep learning machine, would it be conscious, and if so, what rights should we give it?
    • I didn’t quite catch Demis’s response to this, as it was quite quick and there was a general laugh about these types of questions coming up.
  • How exactly is the brain Bayesian? Does it follow a predictive coding, approximate, or variational Bayesian implementation?
    • Here there was some interesting discussion from all sides, with Karl saying it may actually be a combination of these methods, or via approximations we don’t yet understand. There was a lot of discussion about why DeepMind doesn’t implement a Bayesian scheme in their networks, and it was revealed that this is because hierarchical Bayesian inference is currently too computationally demanding for such applications. Karl picked up on this point to say that the same is true of the human brain; the FEP outlines some general principles, but we are still far from understanding how the brain actually approximates Bayesian inference.
  • Can conscious beliefs, or decisions in the way we typically think of them, be thought of in a probabilistic way?
    • Karl: ‘Yes’
    • Holton: Less sure
    • Panel: this may call for multiple models, binary vs discrete, etc
    • Karl redux: isn’t it interesting how we are now increasingly reshaping the world to better fit our predictions, e.g. using external tools in place of memory, navigation, planning, etc. (i.e. extended cognition)

There were other small bits of discussion, particularly concerning what it means for an agent to be optimal or not, and the relation of explicit/conscious states to a subpersonal Bayesian brain, but I’m afraid I can’t recall them in enough detail to accurately report them. Overall the discussion was interesting and lively, and I presume there will be some strong opinions about some of these. There was also a nice moment where Karl repeatedly said that the future of neuroscience is extended and enactive cognition. Some of the discussion between the panelists was quite interesting, particularly Paul’s views on mental disorders and Demis talking about why the brain might engage in long-term predictions and imagination (because collecting real data is expensive and dangerous).

Please write in the comments if I missed anything. I’d love to hear what everyone thinks about these. I’ve got my opinions particularly about the falsification question, but I’ll let others discuss before stating them.

[VIDEO] Mind-wandering, meta-cognition, and the function of consciousness

Hey everyone! I recently did an interview for Neuro.TV covering some of my past and current research on mind-wandering, meta-cognition, and conscious awareness. The discussion is very long and covers quite a diversity of topics, so I thought I’d give a little overview here with links to specific times.

For the first 15 minutes, we focus on general research in meta-cognition, and topics like the functional and evolutionary significance of metacognition:

We then begin to move on to specific discussion of mind-wandering, around 16:00:

I like our discussion, as we quickly get beyond the overly simplistic idea of ‘mind-wandering’ as mere attentional failure, reviewing the many ways in which it can drive or support meta-cognitive awareness. We also, of course, briefly discuss the ‘default mode network’ and the (misleading) idea that there are ‘task positive’ and ‘task negative’ networks in the brain, around 19:00:

Lots of interesting discussion there, in which I try to roughly synthesize some of the overlap and ambiguity between mind-wandering, meta-cognition, and their neural correlates.

Around 36:00 we start discussing my experiment on mind-wandering variability and error awareness:

A great experience all in all, and hopefully an interesting video for some! Be sure to support the Kickstarter for the next season of Neuro.TV!

JF also has a detailed annotation of the episode on the brainfacts blog:

0:07 – Introduction
0:50 – What is cognition?
4:45 – Metacognition and its relation to confidence.
10:49 – What is the difference between cognition and metacognition?
14:07 – Confidence in our memories; does it qualify as metacognition?
18:34 – Technical challenges in studying mind-wandering scientifically and related brain areas.
25:00 – Overlap between the brain regions involved in social interactions and those known as the default-mode network.
29:17 – Why does cognition evolve?
35:51 – Task-unrelated thoughts and errors in performance.
50:53 – Tricks to focus on tasks while allowing some amount of mind-wandering.

What’s the causal link dissociating insula responses to salience and bodily arousal?

Just read this new paper by Lucina Uddin and felt like writing a quick post. It is a nice review of one of my favorite brain networks, the ever-present insular cortex and ‘salience network’ (thalamus, AIC, MCC). As we all know, AIC activation is one of the most ubiquitous findings in our field, showing up in nearly everything. Uddin advances the well-supported idea that, in addition to being sensitive to visceral, autonomic, bodily states (and also having a causal influence on them), the network responds generally to salient stimuli (like oddballs) across all sensory modalities. We already knew this, but a thought leaped to my mind: what is the order of causation here? If the AIC both responds to and causes arousal spikes, are oddball responses driven by the novelty of the stimuli or by a first-order evoked response in the body? Your brainstem, spinal cord, and PNS are fully capable of creating visceral responses to unexpected stimuli. How can we dissociate ‘dry’ oddball responses from evoked physiological responses? It seems likely that arousal spikes accompany anything unexpected, and that salience itself doesn’t really dissociate AIC responses from a more general role in bodily awareness. Indeed, recent studies show that oddballs evoke pupil dilation, which is related to arousal.

Check out this figure:


Clearly the AIC and ACC not only receive physiological input but can also directly cause physiological outputs. I’m immediately reminded of an excellent review by Markus Ullsperger and colleagues, where they run into a similar issue trying to work out how arousal cues contribute to conscious error awareness. Ultimately Ullsperger et al. conclude that we can’t really dissociate whether arousal cues cause error awareness or error awareness causes arousal spikes. This seems to be true for a general salience account as well.


How can we tease these apart? It seems we’d need to both knock out and induce physiological responses during the presence and absence of salient stimuli. I’m not sure how we could do this – maybe deafferented patients could get us part of the way there. But a larger problem looms: the majority of findings cited by Uddin (and to a lesser extent Ullsperger) come from fMRI. Indeed, the original Seeley et al. “salience network” paper (one of the top ten most cited papers in neuroscience) and the original Critchley insula-interoception papers (also top-ten papers) are based on fMRI. Given that these areas are also heavily contaminated by pulse and respiration artifacts, how can we work out the causal loop between salience/perception and arousal? If a salient cue causes a pulse spike, then it might also cause a corresponding BOLD artifact. There may well be a genuinely non-artefactual relationship between salient things and arousal, but currently we can’t work out the direction of causation. Worse, it is possible that the processes driving the artifacts are themselves crucial for ‘salience’ computation, which would mean physio-correction would obscure these important relationships! A tough cookie indeed. Lastly, we’ll need to go beyond the somewhat psychological label of ‘salience’ if we really want to work out these issues. For my money, an account based on expected precision fits nicely with the pattern of results we see in these areas, providing a computational mechanism for ‘salience’.
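To make the correction worry concrete, here's a toy sketch (invented numbers, not real physiological or BOLD data) of what happens when you regress an arousal signal out of a region's signal while arousal is intrinsically coupled to salience:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Toy world: oddball onsets evoke both an AIC response and a pulse spike
salience = rng.binomial(1, 0.1, n).astype(float)   # oddball regressor
arousal = salience + 0.5 * rng.standard_normal(n)  # pulse tracks oddballs
aic = 2.0 * salience + rng.standard_normal(n)      # simulated AIC signal

# 'Physio correction': regress the arousal signal out of the AIC signal
beta = arousal @ aic / (arousal @ arousal)
aic_corrected = aic - beta * arousal

# Because arousal and salience are coupled, correction strips genuine
# salience-related variance along with any artifact:
print(np.corrcoef(aic, salience)[0, 1])            # strong correlation
print(np.corrcoef(aic_corrected, salience)[0, 1])  # markedly attenuated
```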

In the end I suspect this is going to be one for the direct-recording people to solve. If you’ve got access to insula implantees, let me know! :D

Note: folks on Twitter said they’d like to see more off-the-cuff posts – here you go! This post was written in a flurry of thought in about 30 minutes, so please excuse any snarfs!

Twitter recommends essential reading in pupillometry

Not sure why WordPress is refusing to accept the Storify embed, but click here for some excellent suggestions on reading in pupillometry:

Top 200 terms in cognitive neuroscience according to Neurosynth

Tonight I was playing around with some of the top features in Neurosynth (the searchable terms with the highest number of studies containing each term). You can find the list here; just sort by the number of studies. I excluded the top three terms, which are uninformative (e.g. “image”, “response”, and “time”) and whose extremely high weights would mess up the wordle. I then created a word cloud in which each term’s size reflects its number of studies.
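For anyone who wants to reproduce something similar, here's a sketch of the procedure. I'm assuming a hypothetical CSV export of the Neurosynth feature list with 'term' and 'num_studies' columns, and the wordcloud Python package as a stand-in for wordle.net, which made the original image:

```python
import csv
from wordcloud import WordCloud

# 'neurosynth_features.csv' is a hypothetical export of the feature list
with open("neurosynth_features.csv") as f:
    counts = {row["term"]: int(row["num_studies"]) for row in csv.DictReader(f)}

# Drop the three uninformative top terms mentioned above
for boring in ("image", "response", "time"):
    counts.pop(boring, None)

# Keep the 200 most-studied terms and render them, sized by study count
top200 = dict(sorted(counts.items(), key=lambda kv: kv[1], reverse=True)[:200])
cloud = WordCloud(width=1200, height=800).generate_from_frequencies(top200)
cloud.to_file("neurosynth_top200.png")
```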

Here are the top 200 terms, sized according to the number of studies reporting each term across Neurosynth’s 5,809 indexed fMRI studies:


Pretty neat! These are the 200 terms the Neurosynth database has the most information on, and they give a pretty good overview of key concepts and topics in our field! I am sure there is something useful for everyone in there :D

Direct link to the wordle:

Wordle: neurosynth

