Neuroconscience

The latest thoughts, musings, and data in cognitive science and neuroscience.

A walk in the park increases poor research practices and decreases reviewer critical thinking

Or so I’m going to claim, because science is basically about making up whatever qualitative opinion you like and hard-selling it to a high-impact journal, right? Last night a paper appeared in PNAS early access entitled “Nature experience reduces rumination and subgenual prefrontal cortex activation”, as a contributed submission. Like many of you, I immediately felt my neurocringe brain area explode with activity as I began to smell the sickly sweet scent of gimmickry. Now, I don’t have a lot of time, so I was worried I wouldn’t be able to cover this paper in any detail. But not to worry, because the entire paper is literally two ANOVAs!

Don't think about it too much.

Look guys, we’re headed to PNAS! No, no, leave the critical thinking skills, we won’t be needing those where we’re going!

The paper begins with a lofty appeal to our naturalistic sensibilities: we’re increasingly living in urban areas, this trend is associated with poor mental health outcomes, and by golly-gee, shouldn’t we have a look at the brain to figure this all out? The authors set about testing their hypothesis by sending 19 people out into either the remote wilderness of the Stanford University campus or an urban setting:

The nature walk took place in a greenspace near Stanford University spanning an area ∼60 m northwest of Junipero Serra Boulevard and extending away from the street in a 5.3-km loop, including a significant stretch that is far (>1 km) from the sounds and sights of the surrounding residential area. As one proxy for urbanicity, we measured the proportion of impervious surface (e.g., asphalt, buildings, sidewalks) within 50 m of the center of the walking path (Fig. S4). Ten percent of the area within 50 m of the center of the path comprised of impervious surface (primarily of the asphalt path). Cumulative elevation gain of this walk was 155 m. The natural environment of the greenspace comprises open California grassland with scattered oaks and native shrubs, abundant birds, and occasional mammals (ground squirrels and deer). Views include neighboring, scenic hills, and distant views of the San Francisco Bay, and the southern portion of the Bay Area (including Palo Alto and Mountain View to the south, and Menlo Park and Atherton to the north). No automobiles, bicycles, or dogs are permitted on the path through the greenspace.

Wow, where can I sign up for this truly Kerouac-inspired bliss? The control group, on the other hand, had to survive the horrors of the Palo Alto urban wasteland:

The urban walk took place on the busiest thoroughfare in nearby Palo Alto (El Camino Real), a street with three to four lanes in each direction and a steady stream of traffic. Participants were instructed to walk down one side of the street in a southeasterly direction for 2.65 km, before turning around at a specific point marked on a map. This spot was chosen as the midpoint of the walk for the urban walk to match the nature walk with respect to total distance and exercise. Participants were instructed to cross the street at a pedestrian crosswalk/stoplight, and return on the other side of the street (to simulate the loop component of the nature walk and greatly reduce repeated encounters with the same environmental stimuli on the return portion of the walk), for a total distance of 5.3 km; 76% of the area within 50 m of the center of this section of El Camino was comprised of impervious surfaces (of roads and buildings) (Fig. S4). Cumulative elevation gain of this walk was 4 m. This stretch of road consists of a significant amount of noise from passing cars. Buildings are almost entirely single- to double-story units, primarily businesses (fast food establishments, cell phone stores, motels, etc.). Participants were instructed to remain on the sidewalk bordering the busy street and not to enter any buildings. Although this was the most urban area we could select for a walk that was a similar distance from the MRI facility as the nature walk, scattered trees were present on both sides of El Camino Real. Thus, our effects may represent a conservative estimate of effects of nature experience, as our urban group’s experience was not devoid of natural elements.

And they got that approved by the local ethics board? The horror!

The authors gave both groups a self-reported rumination questionnaire before and after the walk, and also acquired some arterial spin labeling MRIs. Here is where the real fun begins – and basically ends – as the paper consists almost entirely of group-by-time ANOVAs on these two measures. I wish I could say I was surprised by what I found in the results:

That’s right folks – the key behavioral interaction of the paper – is non-significant. Measly. Minuscule. Forget about p-values for a second and consider the gall it takes to not only completely skip over this fact (nowhere in the paper is it mentioned) and head right to the delicious t-tests, but to egregiously promote this ‘finding’ in the title, abstract, and discussion as showing evidence for an effect of nature on rumination! Erroneous interaction for the win, at least with PNAS contributed submissions, right?! The authors also analyzed the brain data in the same way – this time actually sticking with their NHST – and found that some brain area previously related to some bad stuff showed reduced activity. And that – besides heart rate and respiration control analyses – is it. No correlations with the (non-significant) behavior. Just pure and simple reverse inference piled on top of a fallacious interpretation of a non-significant interaction. Never mind the wonky and poorly operationalized research question!
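For the stats-curious, here’s a toy simulation (my own made-up numbers, not the paper’s data) of why this matters: a significant pre-post t-test in one group alongside a non-significant one in the other says nothing about the group × time interaction. The claim lives or dies on whether the changes themselves differ:

```python
# Toy illustration with simulated data: "significant in one group but not
# the other" does not license an interaction claim.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 19  # per-group sample size

# Both groups improve a little; the nature group slightly more, plus noise.
pre_nature, post_nature = rng.normal(35, 5, n), rng.normal(32, 5, n)
pre_urban, post_urban = rng.normal(35, 5, n), rng.normal(34, 5, n)

# The seductive-but-wrong route: a separate within-group t-test per group.
_, p_nat = stats.ttest_rel(pre_nature, post_nature)
_, p_urb = stats.ttest_rel(pre_urban, post_urban)
print(f"nature pre-post: p = {p_nat:.3f}")  # may well be < .05
print(f"urban pre-post:  p = {p_urb:.3f}")  # may well be > .05

# The correct test of the claim: do the CHANGES differ between groups?
# (equivalent to the group x time interaction in a 2x2 mixed ANOVA)
_, p_int = stats.ttest_ind(post_nature - pre_nature, post_urban - pre_urban)
print(f"interaction (difference of differences): p = {p_int:.3f}")
```

If that last p-value isn’t significant, you don’t get to claim the walk did anything group-specific – no matter how pretty the per-group t-tests look.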

See folks, high impact science is easy! Just have friends in the National Academy…

I’ll leave you with this gem from the methods:

“One participant was eliminated in analysis of self-reported rumination due to a decrease in rumination after nature experience that was 3 SDs below the mean.”

That dude REALLY got his time’s worth from the walk. Or did the researchers maybe forget to check if anyone smoked a joint during their nature walk?

Are we watching a paradigm shift? 7 hot trends in cognitive neuroscience according to me


In the spirit of procrastination, here is a random list I made up of things that seem to be trending in cognitive neuroscience right now, with a quick description of each. These are purely pulled from the depths of speculation, so please do feel free to disagree. Most of these are not actually new concepts; it’s more the way they are being used that makes them trendy areas.



Oscillations

Obviously, oscillations have been around for a long time, but the rapid increase in technological sophistication for direct recordings (see, for example, high-density cortical arrays and deep brain stimulation plus recording), coupled with the greater availability of MEG (plus rapid advances in MEG source reconstruction and analysis techniques), has placed large-scale neural oscillations at the forefront of cognitive neuroscience. Understanding how different frequency bands interact (e.g. phase coupling) has become a core topic of research in areas ranging from conscious awareness to memory and navigation.
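For a concrete flavor, here’s a minimal sketch (on synthetic signals, and using just one of many coupling measures) of the phase-locking value, a common way to quantify phase coupling:

```python
# Minimal phase-locking value (PLV) sketch on two synthetic signals.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500.0                            # sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
sig_a = np.sin(2 * np.pi * 6 * t) + 0.5 * np.random.randn(t.size)
sig_b = np.sin(2 * np.pi * 6 * t + 0.3) + 0.5 * np.random.randn(t.size)

def band_phase(x, lo, hi, fs, order=4):
    # Band-pass filter, then extract instantaneous phase via Hilbert transform.
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return np.angle(hilbert(filtfilt(b, a, x)))

phase_a = band_phase(sig_a, 4, 8, fs)   # theta-band phase of signal A
phase_b = band_phase(sig_b, 4, 8, fs)   # theta-band phase of signal B

# PLV = length of the mean resultant vector of the phase differences.
plv = np.abs(np.mean(np.exp(1j * (phase_a - phase_b))))
print(f"theta-band PLV: {plv:.2f}")     # ~1 = locked, ~0 = no consistent relation
```

Cross-frequency measures like phase-amplitude coupling follow the same basic recipe, just relating the phase of one band to the amplitude envelope of another.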

Complex systems, dynamics, and emergence

Again, a concept as old as neuroscience itself, but this one seems to be piggybacking on several trends towards a new resurgence. As neuroscience grows bored of blobology, and our analysis methods move increasingly towards modelling dynamical interactions (see above) and complex networks, our explanatory metaphors more frequently emphasize brain dynamics and emergent causation. This is a clear departure from the boxological approach that was so prevalent in the ’80s and ’90s.

Direct intervention and causal inference

Pseudo-invasive techniques like transcranial direct-current stimulation are on the rise, partly because they allow us to perform virtual lesion studies in ways not previously possible. Likewise, the exponential growth of neurobiological and genetic techniques has ushered in the era of optogenetics, which allows direct manipulation of information processing at the single-neuron level. Might this trend also reflect increased dissatisfaction with the correlational approaches that defined the last decade? You could also include the steadily increasing interest in pharmacological neuroimaging under this category.

Computational modelling and reinforcement learning

With the hype surrounding Google’s £200 million acquisition of DeepMind, and the recent Nobel Prize awarded for the discovery of grid cells, computational approaches to neuroscience are hotter than ever. Hardly a day goes by without a reinforcement learning or similar paper being published in a glossy high-impact journal. This trend takes many forms, but it is undeniable that model-based approaches to cognitive neuroscience are all the rage. There is also a clear surge of interest in the Bayesian Brain approach, which could almost have its own bullet point. But that would be too self-serving ;)
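If you’ve never peeked inside one of these models, here’s a toy Rescorla–Wagner learner (with purely illustrative numbers); the prediction-error term delta is the quantity that typically ends up being regressed against BOLD in a model-based imaging study:

```python
# Toy Rescorla-Wagner value update on a made-up outcome sequence.
alpha = 0.2                                 # learning rate
V = 0.0                                     # current value estimate of a cue
rewards = [1, 1, 0, 1, 0, 0, 1, 1, 1, 0]    # illustrative outcomes

for r in rewards:
    delta = r - V       # prediction error: the model-based regressor of choice
    V += alpha * delta  # value creeps toward the running reward rate
    print(f"reward={r}  PE={delta:+.2f}  V={V:.2f}")
```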

Gain control

Gain control is a very basic mechanism found throughout the central nervous system. It can be understood as the neuromodulatory weighting of post-synaptic excitability, and is thought to play a critical role in contextualizing neural processing. Gain control might, for example, allow a neuron that usually encodes a positive prediction error to ‘flip’ its sign and encode a negative prediction error in a certain context. Gain is thought to be regulated via the global interaction of neuromodulators (e.g. dopamine, acetylcholine), and it links basic information-theoretic processes with neurobiology. This makes it a particularly desirable tool for understanding everything from perceptual decision making to basic learning and the stabilization of oscillatory dynamics. Gain control thus links computational, biological, and systems-level work, and is likely to continue to attract a lot of attention in the near future.
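Here’s a cartoon of the basic idea (a made-up sigmoidal rate function; real neuromodulation is vastly messier): the very same synaptic input produces quite different outputs depending on a multiplicative gain term:

```python
# Cartoon of multiplicative gain control on a sigmoidal response function.
import numpy as np

def firing_rate(drive, gain, theta=0.0):
    # Gain scales the slope of the input-output function around threshold.
    return 1.0 / (1.0 + np.exp(-gain * (drive - theta)))

drive = 0.5                        # identical synaptic input each time
for g in (0.5, 1.0, 4.0):          # low vs. high neuromodulatory tone
    print(f"gain={g}: rate={firing_rate(drive, g):.2f}")
# Higher gain sharpens the response around threshold, so context
# (neuromodulatory state) re-weights what the same input means downstream.
```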

Hierarchies that are not really hierarchies

Neuroscience loves its hierarchies. For example, the Van Essen model of how visual feature detection proceeds through a hierarchy of increasingly abstract functional processes is one of the core explanatory tools used to understand vision in the brain. Currently, however, there is a great deal of connectomic and functional work pointing out interesting ways in which global or feedback connections can re-route and modulate processes from the ‘top’ directly to the ‘bottom’, or vice versa. It’s worth noting that this trend doesn’t do away with the old notion of hierarchies; it just renders them a bit more complex and circular. Put another way, it is currently quite trendy to show that ‘the top is the bottom’ and ‘the bottom is the top’. This partially relates to the increased emphasis on emergence and complexity discussed above. A related trend is the extension of what counts as the ‘bottom’, with low-level subcortical or even first-order peripheral neurons suddenly being ascribed complex abilities typically reserved for cortical processes.

Primary sensations that are not so primary

Closely related to the previous point, there is a clear trend in the perceptual sciences towards asking just how ‘primary’ the primary sensory areas really are. I saw this first-hand at last year’s Vision Sciences Society meeting, which featured at least a dozen posters showing how one could decode tactile shape from V1, or visual frequency from A1, and so on. Again, this is probably related to the overall movement towards complexity and connectionism; as we lose our reliance on modularity, we’re suddenly open to a much more general role for core sensory areas.
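In case the decoding recipe itself is unfamiliar, here’s the bare-bones flavor of such an analysis (with fake data standing in for V1 voxel patterns; real MVPA pipelines are far more careful about preprocessing and cross-validation structure):

```python
# Minimal cross-validated decoding sketch on simulated "voxel" patterns.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 50
labels = np.repeat([0, 1], n_trials // 2)        # e.g. two tactile shapes
patterns = rng.standard_normal((n_trials, n_voxels))
patterns[labels == 1, :5] += 0.8                 # weak class information

acc = cross_val_score(SVC(kernel="linear"), patterns, labels, cv=5)
print(f"cross-validated decoding accuracy: {acc.mean():.2f}")  # chance = 0.5
```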


Interestingly, I didn’t include things like multi-modal or high-resolution imaging, as I think they are still emerging and have not quite fully arrived yet. But some of these – computational and connectomic modelling, for example – are clearly part and parcel of the contemporary zeitgeist. It’s also very interesting to look over this list, as there seems to be a clear trend towards complexity, connectionism, and dynamics. Are we witnessing a paradigm shift in the making? Or have we just forgotten all our first principles and started mangling any old thing we can get published? If it is a shift, what should we call it? Something like ‘computational connectionism’ comes to mind. Please feel free to add points or discuss in the comments!

Short post – my science fiction vision of how science could work in the future

Sadly, I missed the recent #isScienceBroken event at UCL, which from all reports was a smashing success. At the moment I’m just terribly focused on finishing up a series of intensive behavioral studies, plus (as always) minimizing my free energy, so it just wasn’t possible to make it. Still, a few people were interested to hear my take on things. I’m not one to try and commentate on an event I wasn’t at, so instead I’ll just wax poetic for a moment about the kind of Science future I’d like to live in. Note that this has all basically been written down in my self-published article on the subject, but it might bear a re-hash as it’s fun to think about. As before, this is mostly adapted from Clay Shirky’s sci-fi vision of a totally autonomous and self-organizing science.

Science – OF THE FUTURE!

Our scene opens in the not-too-distant future, say the year 2030. A gradual but steady trend towards self-publication has led to the emergence of a new dominant research culture, wherein the vast majority of data first appear as self-archived digital manuscripts containing data, code, and descriptive-yet-conservative interpretations on centrally maintained, publicly supported research archives, prior to publication in traditional journals. These data would be subject to fully open pre- and post-publication peer review focused solely on the technical merit and clarity of the paper.

Having published your data in a totally standardized and transparent format, you would then go on to write something more like what we currently formulate for high-impact journals: short, punchy, light on gory data details and heavy on fantastical interpretations. This would be your space to really sell what you think makes those data great – or to defend them against a firestorm of critical community comments. These would be submitted to journals like Nature and Science, which would have the strictly editorial role of evaluating cohesiveness, general interest, novelty, and so on. In some cases, those journals and similar entities (for example, autonomous high-reputation peer-reviewing cliques) would actively solicit authors to submit such papers based on the buzz (good or bad) that their archived data had already generated. In principle, multiple publishers could solicit submissions from the same buzzworthy data, effectively competing to have your paper in their journal. In this model, publishers must actively seek out the most interesting papers, fulfilling their current editorial role without jeopardizing crucial quality control mechanisms.

Is this crazy? Maybe. To be honest, I see some version of this story as almost inevitable. The key bits and players may change, but I truly believe a ‘push-to-repo’ style of science is an inevitable future. The key is to realize that even journals like Nature and Science play an important if lauded role, taking on editorial risk to highlight the sexiest (and least probable) findings. The real question is who will become the key players in shaping our new information economy. Will today’s major publishers die as Blockbuster did – too tied into their own profit schemes to mobilize – or will they be Netflix, adapting to the beat of progress? By segregating the quality and impact functions of publication, we’ll ultimately arrive at a far more efficient and effective science. The question is how, and when.

note: feel free to point out in the comments examples of how this is already becoming the case (some are already doing this). 30 years is a really, really conservative estimate :) 

UPDATED WITH ANSWERS – summary of the major questions [and answers] asked at #LSEbrain about the Bayesian Brain Hypothesis

OK, here are the answers! I meant to release them last night but was a bit delayed by sleep :)

OK, it is about 10pm here and I’ve got an HBM abstract to submit, but given that the LSE wasn’t able to share the podcast, I’m just going to quickly summarize some of the major questions brought up either by the speakers or the audience during the event.

For those who don’t know, the LSE hosted a brief event tonight exploring the question “is the brain a predictive machine?”, with panelists Paul Fletcher, Karl Friston, Demis Hassabis, and Richard Holton, chaired by Benedetto De Martino. I enjoyed the event, as it was about the right length and the discussion was lively. For those familiar with the Bayesian brain/predictive coding/FEP literature there wasn’t much new information, but it was cool to see an outside audience react.

These were the principal questions that came up over the course of the event. Keep in mind these are just reproduced from my (fallible) memory:

  • What does it mean if someone acts, thinks, or otherwise behaves irrationally or non-optimally? Can their brain still be Bayesian at a sub-personal level?
    • There were a variety of answers to this question, the most basic being that optimal behavior depends on one’s priors, so someone with a mental disorder or poor behavior may be acting optimally with respect to their priors. Karl pointed out that this means optimal behavior really is different for every organism and person, rendering the notion of optimality trivial.
  • Instead of changing the model, is it possible for the brain to change the world so it fits with our model of it?
    • Yes; Karl calls this active inference, and it is a central part of his formulation of the Bayesian brain. Active inference allows you to either re-sample or adjust the world such that it fits with your model, and it brings a kind of strong embodiment into the Bayesian brain, because the kinds of actions (and perceptions) one can engage in are shaped by the body and internal states.
  • Where do the priors come from?
    • Again the answer from Karl: evolution. According to the FEP, organisms that survive do so by virtue of their ability to minimize free energy (prediction error; for a toy numerical flavor of this, see the sketch just after this list). This means that for Karl, evolution ‘just is the refinement and inheritance of our models of the world’; our brains reflect the structure of the world, which is then passed on through natural selection and epigenetic mechanisms.
  • Is the theory falsifiable and if so, what kind of data would disprove it?
    • From Karl – ‘No. The theory is not falsifiable in the same sense that Natural Selection is not falsifiable’. At this there were some roars from the crowd and philosopher Richard Holton was asked how he felt about this statement. Richard said he would be very hesitant to endorse a theory that claimed to be non-falsifiable.
  • Is it possible for the brain to ‘over-fit’ the world/sensory data?
    • Yes; from Paul we heard that this is a good description of what happens in psychotic or other mental disorders, where an overly precise belief might resist any attempt to dislodge it, or any evidence to the contrary. This led back into more discussion of what it means for an organism to behave in a way that is not ‘objectively optimal’.
  • If we could make a Bayesian deep learning machine would it be conscious, and if so what rights should we give it?
    • I didn’t quite catch Demis’ response to this, as it was quite quick and there was a general laugh about these types of questions coming up.
  • How exactly is the brain Bayesian? Does it follow a predictive coding, approximate, or variational Bayesian implementation?
    • Here there was some interesting discussion from all sides, with Karl saying it may actually be a combination of these methods, or via approximations we don’t yet understand. There was a lot of discussion about why DeepMind doesn’t implement a Bayesian scheme in their networks, and it was revealed that this is because hierarchical Bayesian inference is currently too computationally demanding for such applications. Karl picked up on this point to say that the same is true of the human brain; the FEP outlines some general principles, but we are still far from understanding how the brain actually approximates Bayesian inference.
  • Can conscious beliefs, or decisions in the way we typically think of them, be thought of in a probabilistic way?
    • Karl: ‘Yes’
    • Holton: Less sure
    • Panel: this may call for multiple models, binary vs discrete, etc
    • Karl redux: isn’t it interesting how we are now increasingly reshaping the world to better fit our predictions, i.e. using external tools in place of memory, navigation, planning, etc. (i.e. extended cognition)
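As promised above, here’s a toy numerical flavor of prediction-error minimization (in the spirit of standard predictive coding treatments; all the numbers are mine, not anyone’s actual model). A single belief is nudged downhill on precision-weighted squared prediction errors until it settles between prior and data:

```python
# Toy predictive-coding update: gradient descent on free energy
# F = 0.5 * (pi_obs * (obs - mu)**2 + pi_prior * (mu - prior_mu)**2)
obs = 3.0                      # sensory observation
prior_mu = 1.0                 # prior expectation
pi_obs, pi_prior = 1.0, 1.0    # precisions (inverse variances)
lr = 0.1                       # step size

mu = prior_mu
for _ in range(30):
    eps_obs = obs - mu         # sensory prediction error
    eps_prior = mu - prior_mu  # deviation from the prior
    mu += lr * (pi_obs * eps_obs - pi_prior * eps_prior)

print(f"posterior belief mu = {mu:.2f}")  # settles at the precision-weighted
                                          # average of prior and observation
```

Crank up pi_prior relative to pi_obs and the belief barely budges from the data – one cartoon version of the ‘overly precise belief’ story from the psychosis discussion above.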

There were other small bits of discussion, particularly concerning what it means for an agent to be optimal or not, and the relation of explicit/conscious states to a subpersonal Bayesian brain, but I’m afraid I can’t recall them in enough detail to report them accurately. Overall the discussion was interesting and lively, and I presume there will be some strong opinions about some of these. There was also a nice moment where Karl repeatedly said that the future of neuroscience was extended and enactive cognition. Some of the discussion between the panelists was quite interesting, particularly Paul’s views on mental disorders and Demis talking about why the brain might engage in long-term predictions and imagination (because collecting real data is expensive and dangerous).

Please write in the comments if I missed anything. I’d love to hear what everyone thinks about these. I’ve got my opinions particularly about the falsification question, but I’ll let others discuss before stating them.

[VIDEO] Mind-wandering, meta-cognition, and the function of consciousness

Hey everyone! I recently did an interview for Neuro.TV covering some of my past and current research on mind-wandering, meta-cognition, and conscious awareness. The discussion is very long and covers quite a diversity of topics, so I thought I’d give a little overview here with links to specific times.

For the first 15 minutes, we focus on general research in meta-cognition, and topics like the functional and evolutionary significance of metacognition:

We then move on to a specific discussion of mind-wandering, around 16:00:

I like our discussion, as we quickly get beyond the overly simplistic idea of ‘mind-wandering’ as mere attentional failure, reviewing the many ways in which it can drive or support meta-cognitive awareness. We also, of course, briefly discuss the ‘default mode network’ and the (misleading) idea that there are ‘task positive’ and ‘task negative’ networks in the brain, around 19:00:

Lots of interesting discussion there, in which I try to roughly synthesize some of the overlap and ambiguity between mind-wandering, meta-cognition, and their neural correlates.

Around 36:00 we start discussing my experiment on mind-wandering variability and error awareness:

A great experience in all, and hopefully an interesting video for some! Be sure to support the kickstarter for the next season of Neuro.TV!

JF also has a detailed annotation on the brainfacts blog for the episode:

0:07 – Introduction
0:50 – What is cognition?
4:45 – Metacognition and its relation to confidence.
10:49 – What is the difference between cognition and metacognition?
14:07 – Confidence in our memories; does it qualify as metacognition?
18:34 – Technical challenges in studying mind-wandering scientifically and related brain areas.
25:00 – Overlap between the brain regions involved in social interactions and those known as the default-mode network.
29:17 – Why does cognition evolve?
35:51 – Task-unrelated thoughts and errors in performance.
50:53 – Tricks to focus on tasks while allowing some amount of mind-wandering.

What’s the causal link dissociating insula responses to salience and bodily arousal?

Just reading this new paper by Lucina Uddin, and I felt like a quick post. It is a nice review of one of my favorite brain networks: the ever-present insular cortex and ‘salience network’ (thalamus, AIC, MCC). As we all know, AIC activation is one of the most ubiquitous findings in our field and generally shows up in everything. Uddin advances the well-supported idea that, in addition to being sensitive to visceral, autonomic, bodily states (and also having a causal influence on them), the network responds generally to salient stimuli (like oddballs) across all sensory modalities. We already knew this, but a thought leaped to my mind: what is the order of causation here? If the AIC both responds to and causes arousal spikes, are oddball responses driven by the novelty of the stimuli or by a first-order evoked response in the body? Your brainstem, spinal cord, and PNS are fully capable of generating visceral responses to unexpected stimuli. How can we dissociate ‘dry’ oddball responses from evoked physiological responses? It seems likely that arousal spikes accompany anything unexpected, and that salience itself doesn’t really dissociate AIC responses from a more general role in bodily awareness. Recent studies show that oddballs evoke pupil dilation, which is related to arousal.

Check out this figure:

[Figure: physiological inputs to and outputs from the AIC/ACC, from Uddin’s review]

Clearly the AIC and ACC not only receive physiological input but can also directly cause physiological outputs. I’m immediately reminded of an excellent review by Markus Ullsperger and colleagues, in which they run into a similar issue trying to work out how arousal cues contribute to conscious error awareness. Ultimately, Ullsperger et al. conclude that we can’t really dissociate whether arousal cues cause error awareness or error awareness causes arousal spikes. This seems to be true for a general salience account as well.

[Figure: schematic from Ullsperger and colleagues’ review of error awareness and arousal]

How can we tease these apart? It seems we’d need to both knock out and induce physiological responses, in both the presence and absence of salient stimuli. I’m not sure how we could do this – maybe deafferented patients could get us part of the way there. But a larger problem looms as well: the majority of findings cited by Uddin (and to a lesser extent Ullsperger) come from fMRI. Indeed, the original Seeley et al. “salience network” paper (one of the top ten most cited papers in neuroscience) and the original Critchley insula–interoception papers (also top-ten papers) are based on fMRI. Given that these areas are also heavily contaminated by pulse and respiration artifacts, how can we work out the causal loop between salience/perception and arousal? If a salient cue causes a pulse spike, then it might also cause a corresponding BOLD artifact. It might be that there is a genuinely non-artifactual relationship between salient events and arousal, but currently we can’t work out the direction of causation. Worse, it is possible that the processes driving the artifacts are themselves crucial for ‘salience’ computation, which would mean physio-correction would obscure exactly these important relationships! A tough cookie indeed. Lastly, we’ll need to go beyond the somewhat psychological label of ‘salience’ if we really want to work out these issues. For my money, I think an account based on expected precision fits nicely with the pattern of results we see in these areas, providing a computational mechanism for ‘salience’.
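To make that last worry concrete, here’s a tiny simulation (all fake numbers) of what happens when you nuisance-regress an arousal signal out of a ‘BOLD’ time series whose salience response is genuinely entangled with arousal:

```python
# If salience responses and pulse/arousal share variance, standard physio
# nuisance regression removes both. Simulated toy data, not real fMRI.
import numpy as np

rng = np.random.default_rng(0)
n = 200
events = (rng.random(n) < 0.1).astype(float)           # salient-event regressor
arousal = 0.8 * events + 0.2 * rng.standard_normal(n)  # pulse tracks events
bold = events + 0.5 * arousal + rng.standard_normal(n)

def residualize(y, x):
    # Regress x (plus an intercept) out of y, return the residuals.
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

bold_clean = residualize(bold, arousal)  # 'physio-corrected' time series

for label, y in [("raw", bold), ("physio-corrected", bold_clean)]:
    r = np.corrcoef(events, y)[0, 1]
    print(f"{label}: correlation with salient events = {r:.2f}")
# The correction shrinks the event effect too, because arousal and salience
# are causally entangled -- exactly the ambiguity discussed above.
```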

In the end, I suspect this is going to be one for the direct-recording people to solve. If you’ve got access to insula implantees, let me know! :D

Note: folks on Twitter said they’d like to see more off-the-cuff posts – here you go! This post was written in a flurry of thought in about 30 minutes, so please excuse any snarfs!

Twitter recommends essential reading in pupillometry

Not sure why WordPress is refusing to accept the Storify embed, but click here for some excellent suggestions on reading in pupillometry:

https://storify.com/neuroconscience/twitter-recommendations-for-essential-reading-in-p

Top 200 terms in cognitive neuroscience according to neurosynth

Tonight I was playing around with some of the top features in neurosynth (the searchable terms with the highest number of studies containing that term). You can find the list here; just sort by the number of studies. I excluded the top three terms, which are boring (e.g. “image”, “response”, and “time”) and whose extremely high weights would mess up the wordle. I then created a word cloud weighted so that the size of each term reflects its number of studies.
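(I used wordle.net for the figure below, but for the curious, here’s roughly how you could do the same in Python with the third-party wordcloud package. The counts below are hypothetical placeholders standing in for the exported neurosynth term counts.)

```python
# Rebuild the word cloud from term -> study-count pairs.
import matplotlib.pyplot as plt
from wordcloud import WordCloud

# Hypothetical excerpt of the exported neurosynth counts:
counts = {"memory": 1235, "motor": 1100, "visual": 980,
          "attention": 870, "reward": 410, "pain": 390}

wc = WordCloud(width=1200, height=700, background_color="white")
wc.generate_from_frequencies(counts)   # word size ~ number of studies

plt.imshow(wc, interpolation="bilinear")
plt.axis("off")
plt.show()
```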

Here are the top 200 terms, sized according to the number of times they appear in neurosynth’s 5,809 indexed fMRI studies:

[Word cloud of the top 200 neurosynth terms]

Pretty neat! These are the 200 terms the neurosynth database has the most information on, and the list is a pretty good overview of key concepts and topics in our field! I am sure there is something useful for everyone in there :D

Direct link to the wordle:

Wordle: neurosynth

Neurovault: a must-use tool for every neuroimaging paper!

Something that has long irked me about cognitive neuroscience is the way we share our data. I still remember the very first time I opened a brain imaging paper and was left dumbfounded by the practice of listing activation results in endless p-value tables and selective 2D snapshots. How could anyone make sense of data this way? Now, having several years’ experience creating such papers, I am only more dumbfounded that we continue to present our data in this way. What purpose can be served by taking a beautiful 3-dimensional result and filtering it through an awkward foci ‘photoshoot’? While there are some standards you can use to improve the 2D presentation of 3D brain maps – for example, showing only peak activations and including glass brains – this is an imperfect solution; ultimately, the best way to assess the topology of a result is to examine the full 3D map directly.

Just imagine how much better every fMRI paper would be if, instead of a 20+ row table and a selective snapshot, results were displayed in a simple 3D viewing widget right in the paper. Readers could assess the underlying effects at whatever statistical threshold they feel is most appropriate, and PDF versions could be printed at a particular coordinate and threshold specified by the author. Reviewers and readers alike would get a much fuller idea of the data, and meta-analysis would be vastly improved by the widespread uploading of well-categorized contrast images. Moreover, all of this can easily be achieved without worries about privacy or intellectual property, using only group-level contrast images, which are inherently free of identifying features and contain only those effects included in the published manuscript!

Now imagine my surprise when I learned that, thanks to Chris Gorgolewski and colleagues, all of this is already possible! Chris pioneered the development of neurovault.org, an extremely easy-to-use data-sharing site backed by the International Neuroinformatics Coordinating Facility. To use it, researchers simply need to create a new ‘collection’ for their study and then upload whatever images they like. Within about 15 minutes I was able to upload both the T- and contrast images from my group-level analysis, complete with as little or as much meta-data as I felt like including. Collections can easily be linked to paper DOIs and marked as in review, published, etc. Collections and entries can be edited or added to at any time, and the facilities allow quick documentation of imaging data at any desired level, from entire raw imaging datasets to condition-specific group contrast images. Better still, neurovault seamlessly displays these images on a 3D MNI standard brain, with flexible options for thresholding, and through a hookup to neurosynth.org it can even find meta-analytic feature loadings for your images! Check out the t-map display and feature loadings for the stimulus-intensity contrast from my upcoming somatosensory oddball paper, which correctly identified the modality of stimulation!
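For the scripting-inclined, neurovault also exposes a REST API, so uploads can in principle be automated. Fair warning: the sketch below is a guess at what such a script might look like – the exact endpoints, field names, and auth scheme are my assumptions, so check the live API docs at neurovault.org/api before copying anything:

```python
# Hedged sketch of a scripted NeuroVault upload via its REST API.
# Endpoint paths, field names, and the auth header are ASSUMPTIONS --
# verify against neurovault.org/api before use.
import requests

API = "https://neurovault.org/api"
headers = {"Authorization": "Bearer YOUR_API_TOKEN"}  # hypothetical token

# Create a collection for the study (field names assumed)...
coll = requests.post(f"{API}/collections/",
                     json={"name": "Somatosensory oddball study"},
                     headers=headers).json()

# ...then attach a group-level T-map to it (field names assumed).
with open("group_tmap_intensity.nii.gz", "rb") as f:
    img = requests.post(f"{API}/collections/{coll['id']}/images/",
                        data={"name": "stimulus intensity T-map",
                              "map_type": "T"},
                        files={"file": f},
                        headers=headers).json()

print(img.get("url"))  # link you could drop into a paper's supplement
```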

T-map in the neurovault viewer.

Decoded features for my contrast image, with accurate detection of stimulation modality!

Neurovault.org doesn’t yet support embedding the viewer, but it is easy to imagine that with collaboration from publishers, future versions could be embedded directly within HTML full-text for imaging papers. For now, the site provides the perfect solution for researchers looking to make their data available to others and to more fully present their results, simply by providing supplementary links either to the neurovault collection or directly to individual viewer results. This is a tool that everyone in cognitive neuroscience should be using – I fully intend to do so in all future papers!

Is there a ‘basement’ for quirky psychological research?

Beware the Basement!

One thing I will never forget from my undergraduate training in psychology is the first lecture of my personality theory class. The professor started the lecture by informing us that he was quite sure that, of the 200+ students in the lecture hall, the majority of us were probably majoring in psychology because we thought it would be neat to study sex, consciousness, psychedelics, paranormal experience, meditation, or the like. He then informed us that this was a trap that befalls almost all new psychology students, as we were all drawn to the study of the mind by the same siren call of the weird and wonderful human psyche. However, he warned, we should be very, very careful not to reveal these suppressed interests until we were well-established (I’m assuming he meant tenured) researchers – otherwise we’d risk being thrown into the infamous ‘basement of psychology’, never to be heard from again.

This colorful lecture really stuck with me through the years; I still jokingly refer to the basement whenever a more quirky research topic comes up. Of course, I did a pretty poor job of following this advice, seeing as my first project as a PhD student involved meditation, but nonetheless I have repressed an academic interest in more risqué topics throughout my career. And I’m not really actively avoiding them for fear of being placed in the basement – I’m more just following my own pragmatic research interests, and waiting for the day when I have more time and freedom to pursue ideas that don’t directly tie into the core research line I’m developing.

But still. That basement. Does it really exist? In a world where papers claiming that full bladders render us more politically conservative can make it into prestigious journals, where scientists scan people having sex inside a scanner just to see what happens, and where psychologists seriously debate the possibility of precognition – can anything really be taboo? Or can we still distinguish these flightier topics from more serious avenues of research? And what should be said about those who choose such topics?

Personally, I think the idea of a ‘basement’ is largely a holdover from the heyday of behaviorism, when psychologists were seriously concerned with positioning psychology as a hard science. Cognitivism has given rise to an endless bevy of serious topics that would once have been taboo: consciousness, embodiment, and emotion, to name a few. Still, in the always-snarky twittersphere, one can’t help but feel that there is still a certain amount of nose-thumbing at certain topics.

I think, really, in the end, it’s not the topic so much as the method. Chris Frith once told me something to the tune of ‘in [cognitive neuroscience] all the truly interesting phenomena are beyond proper study’. We know the limitations of brain scans and reaction times, and so tend to cringe a bit when someone trots out the latest silly-superhuman special-interest infotainment paper.

What do you think? Is there a ‘basement’ for silly research? And if so, what defines the sorts of topics that should inhabit its dank confines?
