The latest thoughts, musings, and data in cognitive science and neuroscience.

[VIDEO] Mind-wandering, meta-cognition, and the function of consciousness

Hey everyone! I recently did an interview for Neuro.TV covering some of my past and current research on mind-wandering, meta-cognition, and conscious awareness. The discussion is very long and covers quite a diversity of topics, so I thought I’d give a little overview here with links to specific times.

For the first 15 minutes, we focus on general research in meta-cognition, and topics like the functional and evolutionary significance of metacognition:

We then move on to a specific discussion of mind-wandering, around 16:00:

I like our discussion as we quickly get beyond the overly simplistic idea of ‘mind-wandering’ as just attentional failure, reviewing the many ways in which it can drive or support meta-cognitive awareness. We also of course briefly discuss the ‘default mode network’ and the (misleading) idea that there are ‘task positive’ and ‘task negative’ networks in the brain, around 19:00:

Lots of interesting discussion there, in which I try to roughly synthesize some of the overlap and ambiguity between mind-wandering, meta-cognition, and their neural correlates.

Around 36:00 we start discussing my experiment on mind-wandering variability and error awareness:

A great experience in all, and hopefully an interesting video for some! Be sure to support the kickstarter for the next season of Neuro.TV!

JF also has a detailed annotation on the brainfacts blog for the episode:

0:07 Introduction
0:50 What is cognition?
4:45 Metacognition and its relation to confidence.
10:49 What is the difference between cognition and metacognition?
14:07 Confidence in our memories; does it qualify as metacognition?
18:34 Technical challenges in studying mind-wandering scientifically and related brain areas.
25:00 Overlap between the brain regions involved in social interactions and those known as the default-mode network.
29:17 Why does cognition evolve?
35:51 Task-unrelated thoughts and errors in performance.
50:53 Tricks to focus on tasks while allowing some amount of mind-wandering.

What’s the causal link dissociating insula responses to salience and bodily arousal?

Just reading this new paper by Lucina Uddin and felt like a quick post. It is a nice review of one of my favorite brain networks, the ever-present insular cortex and ‘salience network’ (thalamus, AIC, MCC). As we all know, AIC activation is one of the most ubiquitous in our field and generally shows up in everything. Uddin advances the well-supported idea that in addition to being sensitive to visceral, autonomic, bodily states (and also having a causal influence on them), the network responds generally to salient stimuli (like oddballs) across all sensory modalities. We already knew this, but a thought leaped to my mind: what is the order of causation here? If the AIC responds to and causes arousal spikes, are oddball responses driven by the novelty of the stimuli or by a first-order evoked response in the body? Your brainstem, spinal cord, and PNS are fully capable of creating visceral responses to unexpected stimuli. How can we dissociate ‘dry’ oddball responses from evoked physiological responses? It seems likely that arousal spikes accompany anything unexpected, and that salience itself doesn’t really dissociate AIC responses from a more general role in bodily awareness. Recent studies show that oddballs evoke pupil dilation, which is related to arousal.

Check out this figure:


Clearly the AIC and ACC not only receive physiological input but can also directly cause physiological outputs. I’m immediately reminded of an excellent review by Markus Ullsperger and colleagues, where they run into a similar issue trying to work out how arousal cues contribute to conscious error awareness. Ultimately, Ullsperger et al. conclude that we can’t really dissociate whether arousal cues cause error awareness or error awareness causes arousal spikes. This seems to also be true for a general salience account.


How can we tease these apart? It seems like we’d need to somehow both knock out and induce physiological responses during the presence and absence of salient stimuli. I’m not sure how we could do this – maybe deafferented patients could get us part of the way there. But a larger problem also looms: the majority of findings cited by Uddin (and to a lesser extent Ullsperger) come from fMRI. Indeed, the original Seeley et al “salience network” paper (one of the top 10 most cited papers in neuroscience) and the original Critchley insula-interoception papers (also top-ten papers) are based on fMRI. Given that these areas are also heavily contaminated by pulse and respiration artifacts, how can we work out the causal loop between salience/perception and arousal? If a salient cue causes a pulse spike, then it might also cause a corresponding BOLD artifact. It might be that there is a genuinely non-artefactual relationship between salient things and arousal, but currently we can’t seem to work out the direction of causation. Worse, it is possible that the processes driving the artifacts are themselves crucial for ‘salience’ computation, which would mean physio-correction would obscure these important relationships! A tough cookie indeed. Lastly, we’ll need to go beyond the somewhat psychological label of ‘salience’ if we really want to work out these issues. For my money, an account based on expected precision fits nicely with the pattern of results we see in these areas, providing a computational mechanism for ‘salience’.

In the end I suspect this is going to be one for the direct recording people to solve. If you’ve got access to insula implantees, let me know! :D

Note: folks on twitter said they’d like to see more off-the-cuff posts – here you go! This post was written in a flurry of thought in about 30 minutes, so please excuse any snarfs!

Twitter recommends essential reading in pupillometry

Not sure why WordPress is refusing to accept the Storify embed, but click here for some excellent suggestions on reading in pupillometry:

Top 200 terms in cognitive neuroscience according to neurosynth

Tonight I was playing around with some of the top features in neurosynth (the searchable terms with the highest number of studies containing that term). You can find the list here; just sort by the number of studies. I excluded the top 3 terms, which are boring (e.g. “image”, “response”, and “time”) and whose extremely high weights would mess up the wordle. I then created a word cloud weighted so that the size of each term reflects its number of studies.
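The recipe above can be sketched in a few lines of Python – a rough sketch, assuming you’ve exported the neurosynth feature list to a simple term/count CSV (the file layout and the font-size scaling are my own illustration, not anything neurosynth itself provides):

```python
import csv

def load_term_counts(path):
    """Read (term, study_count) rows from a hypothetical CSV export
    of the neurosynth feature list."""
    with open(path) as f:
        return {row[0]: int(row[1]) for row in csv.reader(f)}

def wordle_sizes(counts, n_terms=200, n_skip=3, min_pt=10, max_pt=96):
    """Drop the n_skip most frequent (overly generic) terms, keep the
    next n_terms, and scale font size linearly with study count."""
    ranked = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
    kept = ranked[n_skip:n_skip + n_terms]
    lo, hi = kept[-1][1], kept[0][1]
    span = max(hi - lo, 1)
    return {term: min_pt + (max_pt - min_pt) * (n - lo) / span
            for term, n in kept}
```

The resulting term-to-size mapping can then be pasted into Wordle, or handed to a word-cloud library such as the Python wordcloud package via its frequency-based interface.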

Here are the top 200 terms, sized according to the number of times they appear in neurosynth’s 5809 indexed fMRI studies:


Pretty neat! These are the 200 terms the neurosynth database has the most information on, and the cloud gives a pretty good overview of key concepts and topics in our field! I am sure there is something useful for everyone in there :D

Direct link to the wordle:

Wordle: neurosynth

Neurovault: a must-use tool for every neuroimaging paper!

Something that has long irked me about cognitive neuroscience is the way we share our data. I still remember the very first time I opened a brain imaging paper and was left dumbfounded by the practice of listing activation results in endless p-value tables and selective 2D snapshots. How could anyone make sense of data this way? Now, having several years’ experience creating such papers, I am only more dumbfounded that we continue to present our data in this way. What purpose can be served by taking a beautiful 3-dimensional result and filtering it through an awkward foci ‘photoshoot’? While there are some standards you can use to improve the 2D presentation of 3D brain maps, for example showing only peak activations and including glass brains, this is an imperfect solution – ultimately the best way to assess the topology of a result is to examine it directly in full 3D.

Just imagine how improved every fMRI paper would be if, instead of a 20+ row table and selective snapshot, results were displayed in a simple 3D viewing widget right in the paper. Readers could assess the underlying effects at whatever statistical threshold they feel is most appropriate, and PDF versions could be printed at a particular coordinate and threshold specified by the author. Reviewers and readers alike could get a much fuller idea of the data, and meta-analysis would be vastly improved by the extensive uploading of well-categorized contrast images. Moreover, all this can be achieved without worries about privacy and intellectual property, using only group-level contrast images, which inherently lack identifying features and contain only those effects included in the published manuscript!
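The thresholding such a widget would expose to readers is trivial to express; a minimal sketch (plain Python lists standing in for a real NIfTI volume, which you would normally load with a library like nibabel):

```python
def threshold_map(t_values, t_crit):
    """Zero out sub-threshold voxels, keeping the rest – what an
    interactive viewer does when the reader drags the slider."""
    return [t if abs(t) >= t_crit else 0.0 for t in t_values]

# A reader sceptical of a liberal threshold could simply re-mask
# the shared group map at a stricter criterion, e.g. |t| >= 3.1:
flat_tmap = [0.5, 2.1, 3.4, -4.2, 1.0]
strict = threshold_map(flat_tmap, 3.1)  # only 3.4 and -4.2 survive
```

The point is that the full statistical image contains every thresholded view at once – which is exactly why sharing it beats sharing one frozen snapshot.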

Now imagine my surprise when I learned that, thanks to Chris Gorgolewski and colleagues, all of this is already possible! Chris pioneered the development of Neurovault, an extremely easy-to-use data-sharing site backed by the International Neuroinformatics Coordinating Facility. To use it, researchers simply need to create a new ‘collection’ for their study and then upload whatever images they like. Within about 15 minutes I was able to upload both the T- and contrast images from my group-level analysis, complete with as little or as much meta-data as I felt like including. Collections can be easily linked to paper DOIs and marked as in-review, published, etc. Collections and entries can be edited or added to at any time, and the facilities allow quick documentation of imaging data at any desired level, from entire raw imaging datasets to condition-specific group contrast images. Better still, Neurovault seamlessly displays these images on a 3D MNI standard brain with flexible options for thresholding, and through a hookup to neurosynth can even seamlessly find meta-analytic feature loadings for your images! Check out the t-map display and feature loadings for the stimulus intensity contrast from my upcoming somatosensory oddball paper, which correctly identified the modality of stimulation!

T-map in the neurovault viewer.



Decoded features for my contrast image, with accurate detection of stimulation modality!

Neurovault doesn’t yet support embedding the viewer, but it is easy to imagine that with collaboration from publishers, future versions could be embedded directly within the HTML full-text of imaging papers. For now, the site provides the perfect solution for researchers looking to make their data available to others and to more fully present their results, simply by providing supplementary links either to the Neurovault collection or directly to individual viewer results. This is a tool that everyone in cognitive neuroscience should be using – I fully intend to do so in all future papers!

Is there a ‘basement’ for quirky psychological research?

Beware the Basement!


One thing I will never forget from my undergraduate training in psychology was the first lecture of my personality theory class. The professor started the lecture by informing us that he was quite sure that, of the 200+ students in the lecture hall, the majority of us were probably majoring in psychology because we thought it would be neat to study sex, consciousness, psychedelics, paranormal experience, meditation, or the like. He then informed us this was a trap that befell almost all new psychology students, as we were all drawn to the study of the mind by the same siren call of the weird and wonderful human psyche. However, he warned, we should be very, very careful not to reveal these suppressed interests until we were well-established (I’m assuming he meant tenured) researchers; otherwise we’d risk being thrown into the infamous ‘basement of psychology’, never to be heard from again.

This colorful lecture really stuck with me through the years; I still jokingly refer to the basement whenever a more quirky research topic comes up. Of course I did a pretty poor job of following this advice, seeing as my first project as a PhD student involved meditation, but nonetheless I have repressed an academic interest in more risqué topics throughout my career. And I’m not really actively avoiding them for fear of being placed in the basement – I’m more just following my own pragmatic research interests, and waiting for some day when I have more time and freedom to follow ideas that don’t directly tie into the core research line I’m developing.

But still. That basement. Does it really exist? In a world where a paper claiming that full bladders render us more politically conservative can make it into a prestigious journal, where scientists scan people having sex inside a scanner just to see what happens, and where psychologists seriously debate the possibility of precognition – can anything really be taboo? Or can we still distinguish a more serious avenue of research from these flightier topics? And what should be said about those who choose such topics?

Personally I think the idea of a ‘basement’ is largely a holdover from the heyday of behaviorism, when psychologists were seriously concerned with positioning psychology as a hard science. Cognitivism has given rise to an endless bevy of serious topics that would once have been taboo: consciousness, embodiment, and emotion, to name a few. Still, in the always-snarky twittersphere, one can’t help but feel that there is still a certain amount of nose-thumbing at certain topics.

I think really, in the end, it’s not the topic so much as the method. Chris Frith once told me something to the tune of ‘in [cognitive neuroscience] all the truly interesting phenomena are beyond proper study’. We know the limitations of brain scans and reaction times, and so tend to cringe a bit when someone trots out the latest silly-super-human special-interest infotainment paper.

What do you think? Is there a ‘basement’ for silly research? And if so, what defines what sorts of topics should inhabit its dank confines?

We the Kardashians are Democratizing Science

I had a good laugh this weekend at a paper published in Genome Biology. Neil Hall, the author of the paper and a well-established Liverpool biologist, writes that in the brave new era of social media, there “is a danger that this form of communication is gaining too high a value and that we are losing sight of key metrics of scientific value, such as citation indices.” Wow, what a punchline! According to Neil, we’re in danger of forgetting that tweets and blogposts are the worthless gossip of academia. After all, who reads Nature and Science these days?? I know so many colleagues getting big grants and tenure-track jobs just over their tweets! Never mind that Neil himself has about 11 papers published in Nature journals – or perhaps we are left to sympathize with the poor, untweeted author? Bitter sarcasm aside, the article is a fun bit of satire, and I’d like to think charitably that it was aimed not only at ‘altmetrics’ but at the metric enterprise in general. Still, I agree totally with Kathryn Clancy that the joke fails insofar as it seems to be ‘punching down’ at those of us with less established CVs than Neil, who take to social media to network and advance our own fledgling research profiles. I think it also belies a critical misapprehension, common among established scholars, of how social media fits into the research ecosystem. This sentiment is expressed rather precisely by Neil when discussing his Kardashian Index:

The Kardashian Index


“In an age dominated by the cult of celebrity we, as scientists, need to protect ourselves from mindlessly lauding shallow popularity and take an informed and critical view of the value we place on the opinion of our peers. Social media makes it very easy for people to build a seemingly impressive persona by essentially ‘shouting louder’ than others. Having an opinion on something does not make one an expert.”

So there you have it. Twitter equals shallow popularity. Never mind the endless possibilities of having seamless networked interactions with peers from around the world. Never mind sharing the latest results, discussing them, and branching these interactions into blog posts that themselves evolve into papers. Forget entirely that without this infosphere of interaction, we’d be left totally at the whims of Impact Factor to find interesting papers among the thousands published daily. What it’s really all about is building a “seemingly impressive persona” by “shouting louder than others”. What then does constitute effective scientific output, Neil? The answer it seems – more high impact papers:

“I propose that all scientists calculate their own K-index on an annual basis and include it in their Twitter profile. Not only does this help others decide how much weight they should give to someone’s 140 character wisdom, it can also be an incentive – if your K-index gets above 5, then it’s time to get off Twitter and write those papers.”

Well then, I’m glad we covered that. I’m sure there were many scientists or scholars out there who, amid the endless cycle of insane job pressure, publish-or-perish horse-racing, and blood feuding for grants, thought, ‘gee, I’d better just stop this publishing thing entirely and tweet instead’. And likewise, I’m sure every young scientist looks at ‘Kardashians’ and thinks, ‘hey, I’d better suspend all critical thinking, forget all my training, and believe everything this person says’. I hope you can feel me rolling my eyes. Seriously though – this represents a fundamental and common misunderstanding of the point of all this faffing about on the internet. Followers, impact, and notoriety are all poorly understood side-effects of this process; they are neither the means nor the goal. And never mind those less concrete (and misleading) contributions like freely shared code, data, or thoughts – the point here is to blather and gossip!
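For anyone playing along at home, the index itself is just a ratio of actual to ‘expected’ Twitter followers. Here is a minimal sketch; the constants are as I recall them from Hall’s paper (F_c = 43.3 × C^0.32) – quoted, not derived, so treat them as an assumption:

```python
def k_index(followers, citations):
    """Hall's Kardashian index: actual Twitter followers divided by
    the follower count 'expected' from total citations,
    F_c = 43.3 * C ** 0.32 (constants as quoted in the paper)."""
    expected = 43.3 * citations ** 0.32
    return followers / expected

# A researcher with 100 citations and 5000 followers lands well above
# Hall's tongue-in-cheek cutoff of 5 – time to "get off Twitter".
k = k_index(5000, 100)
```

Which, of course, only underlines how silly it is to let a power-law fit through a scatterplot decide who is allowed an audience.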

While a (sorta) funny joke, it is this point that is done the most disservice by Neil’s article. We (the Kardashians) are democratizing science. We are filtering the literally unending deluge of papers to try and find the most outrageous, the most interesting, and the most forgotten, so that they can see the light of day beyond wherever they were published and buried. We seek out these papers to generate discussion and to garner attention where it is needed most. We are the academy’s newest first line of defense, contextualizing results when the media runs wild with them. We tweet often because there is a lot to tweet, and we gain followers because the things we tweet are interesting. And we do all of this without the comfort of a lofty CV or high-impact track record, with little concrete assurance that it will even benefit us, all while still trying to produce the standard signs of success. It may not seem like it now – but in time it will be clear that what we do is just as much a part of the scientific process as those lofty Nature papers. Are we perfect? No. Do we sometimes fall victim to sensationalism or crowd mentality? Of course – we are only fallible human beings, trying to find and create utility within a new frontier. We may not be the filter science deserves – but we are the one it needs. Wear your Kardashian index with pride.

oh BOLD where art thou? Evidence for a “mm-scale” match between intracortical and fMRI measures.

A frequently discussed problem with functional magnetic resonance imaging is that we don’t really understand how the hemodynamic ‘activations’ measured by the technique relate to actual neuronal phenomena. This is because fMRI measures the Blood-Oxygenation-Level-Dependent (BOLD) signal, a complex vascular response to neuronal activity. As such, neuroscientists can easily get worried about all sorts of non-neural contributions to the BOLD signal, such as subjects gasping for air, pulse-related motion artefacts, and other generally uninteresting effects. We can even start to worry that, out in the lab, the BOLD signal may not actually measure any particular aspect of neuronal activity, but rather some overly diluted, spatially unconstrained filter of it that simply lacks the key information for understanding brain processes.

Given that we generally choose fMRI over neurophysiological methods (e.g. M/EEG) when we want to say something about the precise spatial generators of a cognitive process, addressing these ambiguities is of utmost importance. Accordingly, a variety of recent papers have utilized multi-modal techniques, for example combining optogenetics, direct recordings, and fMRI, to assess which kinds of neural events contribute to alterations in the BOLD signal and its spatial (mis)localization. Now a paper published today in NeuroImage addresses this question by combining high-resolution 7-tesla fMRI with electrocorticography (ECoG) to determine the spatial overlap of finger-specific somatomotor representations captured by the two measures. Starting from the title’s claim that “BOLD matches neuronal activity at the mm-scale”, we can already be sure this paper will generate a great deal of interest.

From Siero et al (In Press)

As shown above, the authors managed to record high-resolution (1.5mm) fMRI in 2 subjects implanted with 23 x 11mm intracranial electrode arrays during a simple finger-tapping task. Motor responses from each finger were recorded and used to generate somatotopic maps of brain responses specific to each finger. This analysis was repeated in both ECoG and fMRI, which were then spatially co-registered to one another so the authors could directly compare the spatial overlap between the two methods. What they found appears, at first glance, to be quite impressive:
From Siero et al (In Press)

Here you can see the color-coded t-maps for the BOLD activations to each finger (top panel, A), the differential contrast contour maps for the ECoG (middle panel, B), and the maximum activation foci for both measures with respect to the electrode grid (bottom panel, C), in two individual subjects. Comparing the spatial maps for both the index finger and thumb suggests a rather strong consistency, both in terms of the topology of each effect and the location of their foci. Interestingly, the little-finger measurements seem somewhat more displaced, although similar topographic features can be seen in both. Siero and colleagues further compute the spatial correlation (Spearman’s R) across measures for each individual finger, finding an average correlation of .54 with a range of .31–.81, a moderately high degree of overlap between the measures. Finally, the optimal amount of shift needed to reduce the spatial difference between the measures was computed and found to be between 1 and 3.1 millimetres, suggesting a slight systematic bias between ECoG and fMRI foci.
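The two quantities reported here – a rank correlation between co-registered maps, and the shift that best aligns them – can be illustrated on toy 1-D activation profiles. This is a simplified sketch of the idea, not Siero et al.’s actual 2-D grid analysis:

```python
def ranks(xs):
    """Simple 1-based ranks of a sequence (no tie averaging)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(a, b):
    """Spearman's rho: Pearson correlation computed on the ranks."""
    ra, rb = ranks(a), ranks(b)
    n = len(a)
    ma, mb = sum(ra) / n, sum(rb) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    sa = sum((x - ma) ** 2 for x in ra) ** 0.5
    sb = sum((y - mb) ** 2 for y in rb) ** 0.5
    return cov / (sa * sb)

def best_shift(a, b, max_shift=3):
    """Integer displacement of b (in samples) that maximizes rho
    with a over the overlapping region - the 1-D analogue of the
    'optimal shift' between ECoG and fMRI foci."""
    return max(range(-max_shift, max_shift + 1),
               key=lambda s: spearman(a[max(s, 0):len(a) + min(s, 0)],
                                      b[max(-s, 0):len(b) + min(-s, 0)]))
```

On real data one would of course use scipy.stats.spearmanr on the flattened, co-registered 2-D maps, but the logic is the same: rank-correlate, then search displacements for the residual systematic offset.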

Are ‘We the BOLD’ ready to break out the champagne and get back to scanning in comfort, spatial anxieties at ease? While this is certainly a promising result, suggesting that the BOLD signal indeed captures functionally relevant neuronal parameters with reasonable spatial accuracy, it should be noted that the result is based on a very-best-case scenario, and that a considerable degree of unique spatial variance remains between the two methods. The data presented by Siero and colleagues have undergone a number of crucial pre-processing steps that are likely to influence their results: the high degree of spatial resolution, the manual removal of draining veins, the restriction of their analysis to grey-matter voxels only, and the lack of spatial smoothing all make generalizing from these results to the standard 3-tesla whole-brain pipeline difficult. Indeed, even under these best-case criteria, the results still indicate up to 3mm of systematic bias in the fMRI results. Though we can be glad the bias was systematic and not random – 3mm is still quite a lot in the brain. On this point, the authors note that the stability of the bias may point towards a systematic mis-registration of the ECoG and fMRI data and/or possible rigid-body deformations introduced by the implantation of the electrodes, issues that could be addressed in future studies. Ultimately it remains to be seen whether similar reliability can be obtained for less robust paradigms than finger wagging, obtained in standard, sub-optimal imaging scenarios. But for now I’m happy to let fMRI have its day in the sun, give or take a few millimeters.

Siero, J. C. W., Hermes, D., Hoogduin, H., Luijten, P. R., Ramsey, N. F., & Petridou, N. (2014). BOLD matches neuronal activity at the mm scale: A combined 7T fMRI and ECoG study in human sensorimotor cortex. NeuroImage. doi:10.1016/j.neuroimage.2014.07.002


#MethodsWeDontReport – brief thought on Jason Mitchell versus the replicators

This morning Jason Mitchell self-published an interesting essay espousing his views on why replication attempts are essentially worthless. At first I was merely interested by the fact that what would obviously become a topic of heated debate was self-published rather than going through the long slog of a traditional academic medium. Score one for self-publication, I suppose. Jason’s argument is essentially that null results don’t yield anything of value and that we should be improving the way science is conducted and reported rather than publicising our nulls. I found particularly interesting his short example list of things that he sees as critical to experimental results but which nevertheless go unreported:

These experimental events, and countless more like them, go unreported in our method section for the simple fact that they are part of the shared, tacit know-how of competent researchers in my field; we also fail to report that the experimenters wore clothes and refrained from smoking throughout the session.  Someone without full possession of such know-how—perhaps because he is globally incompetent, or new to science, or even just new to neuroimaging specifically—could well be expected to bungle one or more of these important, yet unstated, experimental details.

While I don’t agree with the overall logic or conclusion of Jason’s argument (I particularly like Chris Said’s Bayesian response), I do think it raises some important or at least interesting points for discussion. For example, I agree that there is loads of potentially important stuff that goes on in the lab, particularly with human subjects and large scanners, that isn’t reported. I’m not sure to what extent that stuff can or should be reported, and I think that’s one of the interesting and under-examined topics in the larger debate. I tend to lean towards the stance that we should report just about anything we can – but of course publication pressures and tacit norms means most of it won’t be published. And probably at least some of it doesn’t need to be? But which things exactly? And how do we go about reporting stuff like how we respond to random participant questions regarding our hypothesis?

To find out, I’d love to see a list of things you can’t or don’t regularly report, using the #methodswedontreport hashtag. Quite a few are starting to show up – most are funny or outright snarky (as seems to be the general mood of the response to Jason’s post), but I think a few are pretty common lab occurrences and are even thought-provoking in terms of their potentially serious experimental side-effects. Surely we don’t want to report all of these ‘tacit’ skills in our burgeoning method sections; the question is which ones need to be reported, and why are they important in the first place?

The return of neuroconscience

Hello everyone! After an amazing visit back home to Tampa, Florida for VSS and a little R&R in Denmark, I’m back and feeling better than ever. Some of you may have noticed that I’ve been on an almost six-month blogging hiatus. I’m just going to come right out and admit that after moving from Denmark to London, I really wasn’t sure what direction I wanted to take my blog. Changing institutions is always a bit of a bewildering experience, and a wise friend once advised me that it’s sometimes best to quietly observe new surroundings before diving right in. I think I needed some time to get used to being a part of the awesomeness that is the Queen Square neuroimaging hub. I also needed some time to reflect on the big picture of my research, this blog, and my overall social media presence.

But fear not! After the horrors of settling into London, I’m finally comfortable in my skin again, with a new flat, a home office almost ready, and lots and lots of new ideas to share with you. I think part of my overall hesitancy came from pondering just what I should be sharing. But I didn’t get this far by bottling up my research, so there isn’t much point in shuttering myself in now! I expect to be back to blogging in full form in the next week, as new projects here begin to get underway. But where is my research going?

The big picture will largely remain the same. I am interested, as always, in human consciousness, thought, self-awareness, and our capacity for growth along these dimensions. One thing I really love about my post-doc is that I’ve finally found a kind of thread weaving throughout my research, all the way back to the days when I collected funny self-narratives in a broom closet at UCF. I think you could say I’m trying to connect the dots between how dynamic bodies shape and interact with our reflective minds, using the tools of perceptual decision making, predictive coding, and neuroimaging. Currently I’m developing a variety of novel experimental paradigms examining embodied self-awareness (i.e. our somatosensory, interoceptive, and affective sense of self), perceptual decision making and metacognition, and the interrelations between these. You can expect to hear more about these topics soon.

Indeed, a principal reason I chose to join the FIL/ICN team was the unique emphasis and expertise here on predictive coding. My research has always been united by an interest in growth, plasticity, and change. During my PhD I came to see predictive coding/free energy schemes as a unifying framework under which to unite our understanding of embodied and neural computation in terms of our ability to learn from new experiences. As such I’m very happy to be in a place where I can not only be on the cutting edge of theoretical development, but also receive first-hand training in applying the latest computational modelling, connectivity, and multi-modal imaging techniques to my research questions. As always, given my obvious level of topical ADHD, you can be sure to expect coverage of a wide range of cogneuro and cogsci topics.

So in general, you can expect posts covering these topics, my upcoming results, and general musings along these lines. As always, I’m sure there will be plenty of methodsy nitpicking and philosophical navel-gazing. In particular, my recent experience with a reviewer insisting that ‘embodiment’ = interoception has me itching to fire off a theoretical barrage – but I guess I should wait to publish that paper before taking to the streets. In the near future I have planned a series of short posts covering some of the cool posters and general themes I observed at the Vision Sciences Society conference this fall.

Finally, for my colleagues working on mindfulness and meditation research, a brief note. As you can probably gather, I don’t intend to return to this domain of study in the near future. My personal opinion is that the topic has become incredibly overhyped and incestuous – the best research simply isn’t rising to the top. I know that many of the leaders in that community are well aware of the problem and are working to correct it, but for me it was time to part ways and return to more general research. I do believe that mindfulness has an important role to play in both self-awareness and well-being, and hope that the models I am currently developing might one day further refine our understanding of these practices. However, it’s worth noting that for me, meditation was always more a kind of Varelian way to manipulate plasticity and consciousness than an end in itself; as I no longer buy into the enactive/neurophenomenological paradigm, I guess it’s self-explanatory that I would be moving on to other things (like actual consciousness studies! :P). I do hope to see that field continue to grow and mature, and look forward to fruitful collaborations along those lines.


That’s it folks! Prepare yourself for a new era of neuroconscience :) Cheers to an all new year, all new research, and new directions! Viva la awareness!



