Neuroconscience

The latest thoughts, musings, and data in cognitive science and neuroscience.

Tag: philosophy of mind

Quick post – Dan Dennett’s Brain talk on Free Will vs Moral Responsibility

As a few people have asked me to give some impression of Dan’s talk at the FIL Brain meeting today, I’m just going to jot my quickest impressions before I run off to the pub to celebrate finishing my dissertation today. Please excuse any typos, as what follows is unedited! Dan gave a talk very similar to his previous one several months ago at the UCL philosophy department. As always, Dan gave a lively talk with lots of funny moments and appeals to common sense. Here the focus was more on the media activities of neuroscientists, with some particularly funny finger wagging at Patrick Haggard and Chris Frith. Some good bits were his discussion of evidence that priming subjects against free will seems to make them more likely to commit immoral acts (cheating, stealing) and a very firm statement that neuroscience is being irresponsible, complete with bombastic anti-free-will quotes by the usual suspects. Although I am a bit rusty on the mechanics of the free will debate, Dennett essentially argued for a compatibilist view of free will and determinism. The argument goes something like this: the basic idea that free will is incompatible with determinism comes from a mythology that says in order to have free will, an agent must be wholly unpredictable. Dennett argues that this is absurd; we only need to be somewhat unpredictable. Rather than being perfectly random free agents, Dennett argues that what really matters is moral responsibility pragmatically construed. Dennett lists a “spec sheet” for constructing a morally responsible agent, including “could have done otherwise, is somewhat unpredictable, acts for reasons, is subject to punishment…”. In essence Dan seems to be claiming that neuroscientists don’t really care about “free will”; rather, we care about the pragmatic limits within which we feel comfortable entering into legal agreements with an agent.
Thus the job of the neuroscientist is not to try to reconcile the folk and scientific views of “free will”, which isn’t interesting (on Dennett’s account) anyway, but rather to describe the conditions under which an agent can be considered morally responsible. The take-home message seemed to be that moral responsibility is essentially a political rather than metaphysical construct. I’m afraid I can’t go into great detail about the supporting arguments- to be honest, Dan’s talk was extremely short on argumentation. The version he gave to the philosophy department was much heavier on technical argumentation, particularly centered around proving that compatibilism doesn’t contradict “it could have been otherwise”. All in all the talk was very pragmatic, and I do agree with the conclusions to some degree- that we ought to be more concerned with the conditions and function of “will” and not argue so much about the metaphysics of “free”. Still, my inner philosopher felt that Dan is embracing some kind of basic logical contradiction and hand-waving it away with funny intuition pumps, which for me are typically unsatisfying.

For reference, here is the abstract of the talk:

Nothing—yet—in neuroscience shows we don’t have free will

Contrary to the recent chorus of neuroscientists and psychologists declaring that free will is an illusion, I’ll be arguing (not for the first time, but with some new arguments and considerations) that this familiar claim is so far from having been demonstrated by neuroscience that those who advance it are professionally negligent, especially given the substantial social consequences of their being believed by lay people. None of the Libet-inspired work has the drastic implications typically adduced, and in fact the Soon et al (2008) work, and its descendants, can be seen to demonstrate an evolved adaptation to enhance our free will, not threaten it. Neuroscientists are not asking the right questions about free will—or what we might better call moral competence—and once they start asking and answering the right questions we may discover that the standard presumption that all “normal” adults are roughly equal in moral competence and hence in accountability is in for some serious erosion. It is this discoverable difference between superficially similar human beings that may oblige us to make major revisions in our laws and customs. Do we human beings have free will? Some of us do, but we must be careful about imposing the obligations of our good fortune on our fellow citizens wholesale.

Enactive Bayesians? Response to “the brain as an enactive system” by Gallagher et al

Shaun Gallagher has a short new piece out with Hutto, Slaby, and Cole, and I felt compelled to comment on it. Shaun was my first mentor and is to thank for my understanding of what is at stake in a phenomenological cognitive science. I jumped on this piece when it came out because, as I’ve said before, enactivists often leave a lot to be desired when talking about the brain. That is to say, they more often than not leave it out entirely and focus instead on bodies, cultural practices, and other parts of our extra-neural milieu. As a neuroscientist who is enthusiastically sympathetic to the embodied, enactive approach to cognition, I find this worrisome. Which is to say that when I’ve tried to conduct “neurophenomenological” experiments, I often feel a bit left in the rain when it comes time to construct, analyze, and interpret the data.

As an “enactive” neuroscientist, I often find the de-emphasis of brains a bit troubling. For one thing, the radically phenomenological crew tends to make a lot of claims to altering the foundations of neuroscience. Things like information processing and mental representation are said to be stale, Cartesian constructs that lack ontological validity and ought to be replaced. This is fine- I’m totally open to the limitations of our current explanatory framework. However, as I’ve argued here, I believe neuroscience still has great need of these tools and that dynamical systems theory is not ready for prime-time neuroscience. We need a strong positive account of what we should replace them with, and that account needs to act as a practical and theoretical guide to discovery.

One worry I have is that enactivism quickly begins to look like a constructivist version of behaviorism, focusing exclusively on behavior to the exclusion of the brain. Of course I understand that this is a bit unfair; enactivism is about taking a dynamical, encultured, phenomenological view of the human being seriously. Yet I believe that to accomplish this we must also understand the function of the nervous system. While enactivists will often give token credit to the brain- affirming that it is indeed an ‘important part’ of the cognitive apparatus- they seem quick to value things like clothing and social status over gray matter. Call me old fashioned, but you could strip me of job, titles, and clothing tomorrow and I’d still be capable of 80% of whatever I was before. Granted, my cognitive system would undergo a good deal of strain, but I’d still be fully capable of vision, memory, speech, and even consciousness. The same can’t be said of me if you start magnetically stimulating my brain in interesting and devious ways.

I don’t want to get derailed arguing about the explanatory locus of cognition, as I think one’s stance on the matter largely comes down to whatever one’s intuition pump says is important. We could argue about it all day; what matters more than where in the explanatory hierarchy we place the brain is how that framework lets us predict and explain neural function and behavior. This is where I think enactivism often fails; it’s all fire and bluster (and rightfully so!) when it comes to the philosophical weaknesses of empirical cognitive science, yet it mumbles and missteps when it comes to giving positive advice to scientists. I’m all for throwing out the dogma and getting phenomenological, but only if there’s something useful ready to replace the methodological bathwater.

Gallagher et al’s piece starts:

 “… we see an unresolved tension in their account. Specifically, their questions about how the brain functions during interaction continue to reflect the conservative nature of ‘normal science’ (in the Kuhnian sense), invoking classical computational models, representationalism, localization of function, etc.”

This is quite true, and an important tension throughout much of the empirical work done under the heading of enactivism. In my own group we’ve struggled to go from the inspiring war cries of anti-representationalism and interaction theory to the hard constraints of neuroscience. It often happens that while the story or theoretical grounding is suitably phenomenological and enactive, the methodology and its interpretation are necessarily cognitivist in nature.

Yet I think this difficulty points to the more difficult task ahead if enactivism is to succeed. Science is fundamentally about methodology, and methodology reflects and is constrained by one’s ontological/explanatory framework. We measure reaction times and neural signal lags precisely because we buy into a cognitivist framework of cognition, which essentially argues for computations that take longer to process with increasing complexity, recruiting greater neural resources. The catch is, without these things it’s not at all clear how we are to construct, analyze, and interpret our data.  As Gallagher et al correctly point out, when you set out to explain behavior with these tools (reaction times and brain scanners), you can’t really claim to be doing some kind of radical enactivism:

 “Yet, in proposing an enactive interpretation of the MNS Schilbach et al. point beyond this orthodox framework to the possibility of rethinking, not just the neural correlates of social cognition, but the very notion of neural correlate, and how the brain itself works.”

We’re all in agreement there: I want nothing more than to understand exactly how it is our cerebral organ accomplishes the impressive feats of locomotion, perception, homeostasis, and so on right up to consciousness and social cognition. Yet I’m a scientist and no matter what I write in my introduction I must measure something- and what I measure largely defines my explanatory scope. So what do Gallagher et al offer me?

 “The enactive interpretation is not simply a reinterpretation of what happens extra-neurally, out in the intersubjective world of action where we anticipate and respond to social affordances. More than this, it suggests a different way of conceiving brain function, specifically in non-representational, integrative and dynamical terms (see e.g., Hutto and Myin, in press).”

Ok, so I can’t talk about representations. Presumably we’ll call them “processes” or something like that. Whatever we call them, neurons are still doing something, and that something is important in producing behavior. Integrative- I’m not sure what that means, but I presume it means that whatever neurons do, they do it across sensory and cognitive modalities. Finally we come to dynamical- here is where it gets really tricky. Dynamical systems theory (DST) is an incredibly complex mathematical framework dealing with topology, fluid dynamics, and chaos theory. Can DST guide neuroscientific discovery?

This is a tough question. My own limited exposure to DST prevents me from making hard conclusions here. For now let’s set it aside- we’ll come back to this in a moment. First I want to get a better idea of how Gallagher et al characterize contemporary neuroscience, the source of this tension in Schilbach et al:

 “Functional MRI technology goes hand in hand with orthodox computational models. Standard use of fMRI provides an excellent tool to answer precisely the kinds of questions that can be asked within this approach. Yet at the limits of this science, a variety of studies challenge accepted views about anatomical and functional segregation (e.g., Shackman et al. 2011; Shuler and Bear 2006), the adequacy of short-term task-based fMRI experiments to provide an adequate conception of brain function (Gonzalez-Castillo et al. 2012), and individual differences in BOLD signal activation in subjects performing the same cognitive task (Miller et al. 2012). Such studies point to embodied phenomena (e.g., pain, emotion, hedonic aspects) that are not appropriately characterized in representational terms but are dynamically integrated with their central elaboration.”

Claim one is what I’ve just argued above, that fMRI and similar tools presuppose computational cognitivism. What follows I feel is a mischaracterization of cognitive neuroscience. First we have the typical bit about functional segregation being extremely limited. It surely is and I think most neuroscientists today would agree that segregation is far from the whole story of the brain. Which is precisely why the field is undeniably and swiftly moving towards connectivity and functional integration, rather than segregation. I’d wager that for a few years now the majority of published cogneuro papers focus on connectivity rather than blobology.

Next we have a sort of critique of the use of focal cognitive tasks. This almost seems like a critique of science itself; while certainly not without limits, neuroscientists rely on such tasks in order to make controlled assessments of phenomena. There is nothing a priori that says a controlled experiment is necessarily cognitivist, any more so than a controlled physics experiment must necessarily be Newtonian rather than relativistic. And again, I’d characterize contemporary neuroscience as being positively in love with “task-free” resting state fMRI. So I’m not sure at what this criticism is aimed.

Finally there is this bit about individual differences in BOLD activation. This one I think is really a red herring; there is nothing in fMRI methodology that prevents scientists from assessing individual differences in neural function and architecture. The group I’m working with in London specializes in exactly this kind of analysis, which is essentially just creating regression models with neural and behavioral independent and dependent variables. There certainly is a lot of variability in brains, and neuroscience is working hard and making strides towards understanding those phenomena.
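To make this concrete, the kind of individual-differences regression described above- a behavioral covariate predicting a per-subject neural measure- can be sketched in a few lines. Everything below is simulated and the variable names are my own; it is only an illustration of the general approach, not any group's actual analysis pipeline.

```python
import numpy as np

# Hypothetical individual-differences analysis: regress a per-subject
# neural measure (e.g. mean BOLD in a region of interest) on a
# behavioral score. All data are simulated for illustration.
rng = np.random.default_rng(0)
n_subjects = 40
behavior = rng.normal(size=n_subjects)                     # e.g. task accuracy (z-scored)
bold = 0.8 * behavior + rng.normal(scale=0.5, size=n_subjects)

# Design matrix: intercept column plus the behavioral covariate.
X = np.column_stack([np.ones(n_subjects), behavior])
beta, *_ = np.linalg.lstsq(X, bold, rcond=None)
print(beta)  # beta[1] should land near the true slope of 0.8
```

The point is simply that between-subject variability enters the model as signal (the regressor of interest) rather than being averaged away as noise.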

 “Consider also recent challenges to the idea that so-called “mentalizing” areas (“cortical midline structures”) are dedicated to any one function. Are such areas activated for mindreading (Frith and Frith 2008; Vogeley et al. 2001), or folk psychological narrative (Perner et al. 2006; Saxe & Kanwisher 2003); a default mode (e.g., Raichle et al. 2001), or other functions such as autobiographical memory, navigation, and future planning (see Buckner and Carroll 2006; 2007; Spreng, Mar and Kim 2008); or self-related tasks (Northoff & Bermpohl 2004); or, more general reflective problem solving (Legrand and Ruby 2010); or are they trained up for joint attention in social interaction, as Schilbach et al. suggest; or all of the above and others yet to be discovered?”

I guess this paragraph is supposed to get us thinking that these seem really different, so clearly the localizationist account of the MPFC fails. But as I’ve just said, this is for one a bit of a red herring- most neuroscientists no longer believe exclusively in a localizationist account. In fact more and more I hear top neuroscientists disparaging overly blobological accounts and referring to prefrontal cortex as a whole. Functional integration is here to stay. Further, I’m not sure I buy their argument that these functions are so disparate- it seems clear to me that they all share a social, self-related core probably related to the default mode network.

Finally, Gallagher and company set out to define what we should be explaining- behavior as “a dynamic relation between organisms, which include brains, but also their own structural features that enable specific perception-action loops involving social and physical environments, which in turn effect statistical regularities that shape the structure of the nervous system.” So we do want to explain brains, but we want to understand that their setting configures both neural structure and function. Fair enough, I think you would be hard pressed to find a neuroscientist who doesn’t agree that factors like environment and physiology shape the brain. [edit: thanks to Bryan Patton for pointing out in the comments that Gallagher's description of behavior here is strikingly similar to accounts given by Friston's Free Energy Principle predictive coding account of biological organisms]

Gallagher asks then, “what do brains do in the complex and dynamic mix of interactions that involve full-out moving bodies, with eyes and faces and hands and voices; bodies that are gendered and raced, and dressed to attract, or to work or play…?” I am glad to see that my former mentor and I agree at least on the question at stake, which seems to be, what exactly is it brains do? And we’re lucky in that we’re given an answer by Gallagher et al:

“The answer is that brains are part of a system, along with eyes and face and hands and voice, and so on, that enactively anticipates and responds to its environment.”

Me reading this bit: “yep, ok, brains, eyeballs, face, hands, all the good bits. Wait- what?” The answer is “… a system that … anticipates and responds to its environment.” Did Karl Friston just enter the room? Because it seems to me like Gallagher et al are advocating a predictive coding account of the brain [note: see clarifying comment by Gallagher, and my response below]! If brains anticipate their environment, then that means they are constructing a forward model of their inputs. A forward model is a Bayesian statistical model that estimates posterior probabilities of a stimulus from prior predictions about its nature. We could argue all day about what to call that model, but clearly what we’ve got here is a brain using strong internal models to make predictions about the world. What exactly is “enactive” about these forward models remains, to me, extremely ambiguous.
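To make the prior-to-posterior step concrete, here is a minimal sketch of the kind of update a forward model performs. The Gaussian assumptions and toy numbers are entirely mine, chosen for illustration- this is not Friston's actual formalism, just the textbook conjugate update it rests on.

```python
# Toy illustration: a "prediction" as a Gaussian prior over a stimulus
# feature, updated by a noisy sensory sample (conjugate Gaussian update).
def gaussian_update(prior_mean, prior_var, obs, obs_var):
    # Precision-weighted average: the posterior leans toward whichever
    # source (prediction or sensation) is more precise.
    prior_prec = 1.0 / prior_var
    obs_prec = 1.0 / obs_var
    post_var = 1.0 / (prior_prec + obs_prec)
    post_mean = post_var * (prior_prec * prior_mean + obs_prec * obs)
    return post_mean, post_var

# The brain "anticipates" the stimulus at 0.0; the senses report 1.0.
mean, var = gaussian_update(prior_mean=0.0, prior_var=1.0, obs=1.0, obs_var=1.0)
print(mean, var)  # equal precisions -> posterior mean 0.5, variance 0.5
```

Note that the posterior is always less uncertain than either source alone- which is one way of cashing out why anticipating the world is useful.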

To this end, Gallagher includes “How an agent responds will depend to some degree on the overall dynamical state of the brain and the various, specific and relevant neuronal processes that have been attuned by evolutionary pressures, but also by personal experiences” as a description of how a prediction can be enactive. But none of this is precluded by the predictive coding account of the brain. The overall dynamical state (intrinsic connectivity?) of the brain amounts to noise that must be controlled through increasing neural gain and precision. I.e., a Bayesian model presupposes that the brain is undergoing exactly these kinds of fluctuations and takes steps to produce optimal behavior in the face of such noise.

Likewise the Bayesian model is fully hierarchical- at all levels of the system the local neural function is constrained and configured by predictions and error signals from the levels above and below it. In this sense, global dynamical phenomena like neuromodulation structure prediction in ways that constrain local dynamics. These relationships can be fully non-linear and dynamical in nature (see Friston 2009 for review). As for the other bits- evolution and individual differences- Karl would surely say that the former leads to variation in first priors and the latter is the product of agents optimizing their behavior in a variable world.
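That hierarchical message-passing can be sketched in miniature. The two-level structure, fixed precisions, and learning rate below are all my own made-up illustration- a caricature of the scheme, not anything taken from Friston 2009:

```python
# Minimal two-level predictive-coding sketch: each level predicts the one
# below it and adjusts its estimate via precision-weighted prediction error.
def predictive_coding_step(estimates, sensory_input, precisions, lr=0.1):
    new = list(estimates)
    # Level 0 error: sensation vs. level 0's current estimate.
    err0 = sensory_input - estimates[0]
    # Level 1 error: level 0's state vs. level 1's prediction of it.
    err1 = estimates[0] - estimates[1]
    # Each level is pushed up by the error arriving from below and down by
    # the error it sends upward; precision acts as a gain on each signal.
    new[0] = estimates[0] + lr * (precisions[0] * err0 - precisions[1] * err1)
    new[1] = estimates[1] + lr * precisions[1] * err1
    return new

states = [0.0, 0.0]
for _ in range(200):
    states = predictive_coding_step(states, sensory_input=1.0,
                                    precisions=[1.0, 0.5])
print(states)  # both levels settle toward the input value
```

Raising or lowering the precision terms changes how strongly each error signal drives the update- which is the sense in which "neural gain" controls the influence of noisy fluctuations.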

So there you have it- enactivist cognitive neuroscience is essentially Bayesian neuroscience. If I want to fulfill Gallagher et al’s prescriptions, I need merely use resting state, connectivity, and predictive coding analysis schemes. Yet somehow I think this isn’t quite what they meant- and therein, for me, lies the true tension in ‘enactive’ cognitive neuroscience. But maybe it is- Andy Clark recently went Bayesian, claiming that extended cognition and predictive coding are totally compatible. Maybe it’s time to put away the knives and stop arguing about representations. Yet I think an important tension remains: can we explain all the things Gallagher et al list as important using prior and posterior probabilities? I’m not totally sure, but I do know one thing- these concepts make it a hell of a lot easier to actually analyze and interpret my data.

fake edit:

I said I’d discuss DST, but ran out of space and time. My problem with DST boils down to this: it’s descriptive, not predictive. As a scientist, it is not clear to me how one actually applies DST to a given experiment. I don’t see any kind of functional ontology emerging by which to apply the myriad of DST measures in a principled way. Mental chronometry may be hokey and old fashioned, but it’s easy to understand and can be applied to data and interpreted readily. This is a huge limitation for a field as complex as neuroscience, and as rife with bad data. A leading dynamicist once told me that in his entire career not one prediction he’d made about a DST measure or experiment had come true, and that to apply DST one just needed to “collect tons of data and then apply every measure possible until one seemed interesting”. To me this is a data-fishing nightmare and does not represent a reliable guide to empirical discovery.

What are the critical assumptions of neuroscience?

In light of all the celebration surrounding the discovery of a Higgs-like particle, I found it amusing that nearly 30 years ago Higgs’s theory was rejected by CERN as ‘outlandish’. This got me to wondering: just how often is scientific consensus a bar to discovery? Scientists are only human, and as such can be just as prone to blindspots, biases, and herding behaviors as other humans. Clearly the scientific method and scientific consensus (e.g. peer review) are the tools we rely on to surmount these biases. Yet every tool has its misuse, and sometimes the wisdom of the crowd is just the aggregate of all these biases.

At this point, David Zhou pointed out that when scientific consensus leads to rejection of correct viewpoints, it’s often due to the strong implicit assumptions that the dominant paradigm rests upon. Sometimes there are assumptions that support our theories which, due to a lack of either conceptual or methodological sophistication, are not amenable to investigation. Other times we simply don’t see them; when Chomsky famously wrote his review of Skinner’s verbal behavior, he simply put together all the pieces of the puzzle that were floating around, and in doing so destroyed a 20-year scientific consensus.

Of course, as a cognitive scientist studying the brain, I often puzzle over what assumptions I critically depend upon to do my work. In an earlier stage of my training, I was heavily inundated with ideas from the “embodied, enactive, extended” framework, where it is common to claim that the essential bias is an uncritical belief in the representational theory of mind. While I do still see problems in mainstream information theory, I’m no longer convinced that an essentially internalist, predictive-coding account of the brain is without merit. It seems to me that the “revolution” of externalist viewpoints turned out to be more of an exercise in house-keeping, moving us beyond overly simplistic “just-so” evolutionary diatribes and empty connectionism, toward introducing concepts from dynamical systems to information theory in the context of cognition.

So, really I’d like to open this up: what do you think are the assumptions neuroscientists cannot live without? I don’t want to shape the discussion too much, but here are a few starters off the top of my head:

  • Nativism: informational constraints are heritable and innate, learning occurs within these bounds
  • Representation: Physical information is transduced by the senses into abstract representations for cognition to manipulate
  • Frequentism: While many alternatives currently abound, for the most part I think many mainstream neuroscientists are crucially dependent on assessing differences in mean and slope. A related issue is a tendency to view variability as “noise”
  • Mental Chronometry: related to the representational theory of mind is the idea that more complex representations take longer to process and require more resources. Thus greater (BOLD/ERP/RT) equals a more complex process.
  • Evolution: for a function to exist it must be selected for by evolutionary natural selection

That’s all off the top of my head. What do you think? Are these essential for neuroscience? What might a cognitive theory look like without these, and how could it motivate empirical research? For me, each of these is in some way quite helpful in terms of providing a framework to interpret reaction-time, BOLD, or other cognition-related data. Have I missed any?

Zombies or Cyborgs?

On March 9th, I will be giving a talk in collaboration with my colleague Yishay Mor at the London Knowledge Lab. See below for links and the abstract of my upcoming talk

Upcoming talk @ the London Knowledge Lab

“[Social networking sites] are devoid of cohesive narrative and long-term significance. As a consequence, the mid-21st century mind might almost be infantilized, characterized by short attention spans, sensationalism, inability to empathize and a shaky sense of identity”.
-The Baroness Greenfield

“Just as I might use pen and paper to freeze my own half-baked thoughts, turning them into stable objects for further thought and reflection, so we (as a society) learned to use the written word to power a process of collective thinking and critical reason. The tools of text thus allow us at multiple scales, to create new stable objects for critical activity with speech, text, and the tradition of using them as critical tools under our belts, humankind entered the first phase of its cyborg existence”
– Andy Clark on the 1st Technocognitive Revolution, Natural Born Cyborgs

While some present the dawn of the social web as a doomsday, we believe that social media technologies represent a secondary revolution to that described above by cyborg cognition theorist Andy Clark. Trapped within this debate lies the brain; recent advances in the neurosciences have thrown open our concept of the brain, revealing a neural substrate that is highly flexible and plastic (Green and Bavelier 2008). This phenomenal level of plasticity likely underpins much of what separates us from the animal kingdom, through a profound enhancement of our ability to use new technologies and their cultural co-products (Clark and Chalmers 1998; Schoenemann, et al. 2005; Shaw, et al. 2006). Yet many fear that this plasticity represents a precise threat to our cognitive stability in light of the technological invasion of Twitter-like websites. By investigating how the brain changes as we undergo profound self-alteration via digital mediation, we can begin to unravel the biological mysteries of plasticity that underpin a vast array of issues in the humanities and social sciences.

We propose to investigate functional and structural brain differences between high- and low-intensity users. Due to what we view as a primarily folk psychological or narratological nature of SNS usage, we will utilize classical Theory-of-Mind tasks within the functional MRI environment, coupled with exploratory structural and functional connectivity analyses. To characterize differences in social networking behavior, we will utilize cluster analysis and self-reported usage intensity scales. These will allow us to construct an fMRI task in which the mentalistic capacities for both real-world and Facebook-specific friends are compared and contrasted, illuminating the precise impact of digitally mediated interaction on existing theory of mind capacities. We hypothesize that SNS usage intensity will positively correlate with functional brain activity increases in areas associated with theory of mind (MPFC & TPJ). We further suspect that these measures will co-correlate with structural white matter increases within these regions and, collectively, with default mode network activity within high-intensity users. Such findings would indicate that digitally mediated social networking represents a novel form of targeted social-cognitive self stimulation.

Micah Allen (neuroconscience) is a PhD student at Aarhus University, where he is working in collaboration with Interacting Minds and the Danish Center For Functionally Integrative Neuroscience (CFIN). His PhD focus is within Cognitive Neuroscience, specifically on the topic of Cognitive Neuroplasticity or the study of how biological and cognitive adaptation relate to one another. His research examines high-level brain plasticity in response to spiritual, cultural and technological practices, organized under the concept of ‘neurological self stimulation’. This research includes longitudinal investigations of meditation, structural connectivity, and default mode brain activity. Micah’s research is informed by and integrated within philosophies of embodiment, social cognition, enactivism, and cyborg phenomenology.

The Interacting Minds (im.net) project at Aarhus University examines the links between the human capacity for minds to interact and the putative biological substrate, which enables this to happen. It is housed at the Danish National Research Foundation’s Center of Functionally Integrative Neuroscience (CFIN), a cross disciplinary brain research centre at Aarhus University and Aarhus University Hospital. CFIN does both basic research – e.g. on brain metabolism, neuroconnectivity and cognitive neuroscience and applied medical research of different neurological diseases, like Parkinson’s disease, dementia, stroke and depression.


Cognitive Science in Poznań, Poland

I recently had the pleasure of being invited as a guest speaker for the annual Poznan Cognition Forum, a Polish graduate conference in the cognitive sciences. Before I summarize the academic aspects of my trip, I think it’s worth sharing my experience exploring Poznan. As this post is a bit long I will split into two parts, the first relating my general experiences in Poland and the second summarizing my talk.

Part 1: Exploring Poznań


Before arriving in Poland, I did my best to educate myself with a brief trip to Wikipedia. Although I knew that the country had once held an impressive empire, and suffered greatly in the two World Wars, I was shocked to learn that they had been under Soviet-backed communism until 1989. I guess it says something about American education that I didn’t know this, and I was glad to enter the country slightly less ignorant than before. Overall, my trip was a lovely mixture of business and pleasure; my hosts were extremely gracious (more on them in a bit), and as the other talks were all in Polish, they were kind enough to show me around the city in my free time. Poznan is beautiful, a city rich in stunning architecture and cobble-stone city squares that left me breathless and curious to see more.

While it may have just been the abundant fog and my crash-course Wikipedia history lesson, the best way I can sum up my experience of Poznan is that she presents the viewer with an intriguing mixture of imperial and old world grandeur, laced with a quaint yet quietly stern specter of the former Soviet presence. Something about the ghostly imperial streets and plain stone architecture gives one the feeling that Poland is not wholly a western nation. Probing deeper, I found Renaissance-era castles and multicolored homes, interlaced with stunning baroque churches glittering with intricate gold adornments. It was my first taste of a culture that struck me as both curiously and charmingly alien.

While I love Denmark, Danish architecture can be a bit minimal and homogenous, so it was refreshing to be in a country with a diverse mix of architectural styles and historical backgrounds. Completing the trip were my wonderful hosts, the organizers and attendees of the 5th annual Poznan Cognition Forum.

As astonishing as the mix of old world and modern imperialist cultures I found in Poznan was, the group of dedicated young cognitive scientists seemed more impressive still. Here was a small group of perhaps 10 to 15 extremely dedicated, bright, and ambitious researchers who had taken up the charge of establishing one of Poland’s first and only cognitive science research centers. As they related their frustrations, I could not help but think of my own early experiences trying to break into cognitive science and being told I was chasing a fool’s errand that could never result in gainful employment.

From what they told me, Polish research politics remain highly conservative, nationally isolated, and disciplinary in nature. Bartoz, a charming researcher who seemed an everyman of practical and academic solutions (of which many were needed from him during my short stay), related to me how he and another dedicated researcher and organizer, Aga, had fought tooth and nail for the establishment of a cognitive science degree program- one that required little more than cooperation between the philosophy and psychology departments at Poznan University, which nonetheless remained hostile and unsupportive of their endeavors.

The research community I found in Poznan did not reflect a group down on its luck- these bright young minds reminded me more of the Rebel Alliance before the battle of Endor than any remember-the-Alamo martyrs. Confident in their cause and self-sufficient in its needs- in some cases even going so far as to go around the administration of their university to secure funds and equipment for a state-of-the-art eye-tracking research facility- these researchers seemed poised for success. Not only were they fully capable of dealing with these everyday issues, they were impressively contemporary in their mastery of cognitive science, demonstrating a familiarity with both phenomenological and empirical research that kept me on my toes throughout my stay. I can only hope to work with them again in the future, as they are both eager and fully capable of joining the global research community. If there is one thing Cognitive Science can’t have enough of, it’s the Poznan brand of genuine competence and sober passion.

Organizers!

Link to my Picasa Album of the trip:

Poznan Album
