31 December 2011

Neuro-Reality-Check: Brief Replies to Contentious Claims (Part IV)


Fourth Assertion: Attaching the prefix ‘neuro’ to a discipline with a long-standing history creates a new sub-discipline.


“It is plainly contrary to the law of nature…that the privileged few should gorge themselves with superfluities, while the starving multitude are in want of the common necessaries of life”
– Jean-Jacques Rousseau, Discourse on the Origins of Inequality


The word “neuro-historian” first appeared in the 1970 edition of Webb Haymaker and Francis Schiller’s The Founders of Neurology. Haymaker and Schiller applied this moniker to Jules Soury (1842-1915), who (along with Leonard Guthrie, Max Neuburger, and Fielding Garrison) was one of the first historians of the science and medicine of the nervous system. In this instance, Haymaker and Schiller used the word “neuro-historian” to demarcate an historian who had taken “neurology” as a special interest. It is not clear why they thought Soury the only historian deserving of this title. What is clear, however, is that Haymaker and Schiller were not seeking to carve out a new disciplinary space called “neuro-history.” 


Neuro-history, launched in the first decade of the 21st century by, among others, Daniel Lord Smail, Iain McGilchrist, and David Lewis-Williams, and subsequently lauded by the philosopher Steve Fuller (PDF), is a species of history-making that adopts an interdisciplinary style of historical narrative, one that assumes certain neurobiological or genetic facts are universal and true, and therefore as meaningful as archeological findings, anthropological observations, or historical sources. In this sense, neuro-history differs from Charles C. Mann’s 1491, which draws on a range of methodologies in the human sciences to reconstruct life in the Americas before Europeans arrived, because Mann does not assume the transcendence of biology and evolution.


Neuro-history's program is not to undo history per se; rather, the program claims that science has established certain facts that can be meaningful when retrospectively read back into the historical record to establish a deep history. Thus, as Roger Cooter has critically reviewed, McGilchrist’s The Master and His Emissary: The Divided Brain and the Making of the Western World claims that the origins of modernity lie in a dialectic of the brain's lateralization, in which the empathetic “master” hemisphere on the right competes with its logical “emissary” on the left. And McGilchrist’s monograph underscores the best critique of these attempts. As Neurophilosophy pointed out a while ago: 
the notion that someone is "left-brained" or "right-brained" is absolute nonsense. All complex behaviours and cognitive functions require the integrated actions of multiple brain regions in both hemispheres of the brain. All types of information are probably processed in both the left and right hemispheres (perhaps in different ways, so that the processing carried out on one side of the brain complements, rather than substitutes, that being carried out on the other).


The flaw in the neuro-history approach is that the science often changes more rapidly than the professional historians who seek to master it realize, and the historiography changes more rapidly than the professional physicians, scientists, and philosophers who write or argue for such histories bother to recognize. It would seem that neuro-history is doomed methodologically. But was it ever a discipline? For that matter, can we judge neuro-economics, neuro-aesthetics, neuro-philosophy, neuro-law, and the rest to be disciplines? (What follows is somewhat wonkish.)

29 December 2011

A Video Reminding Us Why We Love Nature


Murmuration from Sophie Windsor Clive on Vimeo.

We Must Stop Publication Monopolies

An article at Climate Shift (hat-tip Alice Bell) describes the many sides of science journalism today. One passage is particularly eye-catching:
...scientific publishers and societies, universities, science centers and museums, and interest groups are communicating directly with wider audiences, unmediated by journalists, often using narrative and presentation formats that were once the exclusive domain of news organizations, many even employing veteran science journalists as communication staffers. Scholars of science policy and communication, as well as critics and writers, are also producing science-related content directly online.
We need to develop an impact factor for blogs and media of this kind. I have in mind some measure, beyond ephemeral professional service, that would quantify the impact such media have in real dollars, knowledge translation among disciplines, citation, and exposure for the organizations listed above. This impact factor should be detailed enough to provide, for example, Britain's Research Assessment Exercise (RAE) or US tenure standards with metrics that can help universities assess the relative value of these contributions.
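To make the idea concrete, here is a minimal, purely illustrative sketch of what such a composite score might look like. Every weight, field name, and normalizing constant below is an assumption of mine, not part of the RAE, any tenure standard, or any existing altmetrics system; the point is only that once institutions agree on what to count, the arithmetic of a blog "impact factor" is trivial.

```python
# Hypothetical sketch of a composite "blog impact factor".
# All weights, field names, and normalizing constants are illustrative
# assumptions, not part of any existing assessment framework.

from dataclasses import dataclass


@dataclass
class BlogMetrics:
    annual_page_views: int      # exposure for the host institution
    scholarly_citations: int    # citations in journals, books, or syllabi
    cross_field_links: int      # reuse by other disciplines (knowledge translation)
    revenue_usd: float          # real dollars attributable to the blog


def impact_factor(m: BlogMetrics) -> float:
    """Return a single weighted score; higher means greater institutional value."""
    return (
        0.2 * (m.annual_page_views / 10_000)  # exposure, scaled per 10k views
        + 0.4 * m.scholarly_citations         # citation
        + 0.3 * m.cross_field_links           # knowledge translation
        + 0.1 * (m.revenue_usd / 1_000)       # real dollars, scaled per $1k
    )


if __name__ == "__main__":
    example = BlogMetrics(annual_page_views=85_000,
                          scholarly_citations=12,
                          cross_field_links=7,
                          revenue_usd=2_500.0)
    print(f"Illustrative impact factor: {impact_factor(example):.2f}")
```

A real exercise would obviously need audited inputs and discipline-specific weights, but nothing about the calculation is harder than what journal impact factors already demand.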


We have digital media. Why do we have analog gatekeepers?


All organizations like those listed above ought to move into the cheap pay-per-download publication game. Consider, for example, what Louis C.K. has accomplished in twelve days by publishing his own work and asking people to buy it directly. Via Andrew Sullivan:
...in twelve days thus far. He's quartering the money, using a quarter to pay for production costs, a quarter for large bonuses for his staff, a quarter for charity and he's keeping a quarter. To me this has been the most interesting online experiment since Radiohead's "In Rainbows".
Total revenue for two weeks: $1 million. Of course, most scholars in organizations such as those listed above do not have the media profile of megastars like Louis C.K. But many universities, museums, societies, and interest groups do: they can leverage their own institutional strengths, produce externally peer-reviewed research, and publish it directly themselves.


Such a project would not be an immediate high-revenue enterprise. But building a substantial database of desirable, high-quality publications would almost certainly produce long-term sources of revenue split between the author and the institutional host. For the humanities and social sciences in particular, such a model could easily bring in large sums of money. Fields like history and literature are intrinsically interesting. People would buy the books if they were not priced so high. As Kindles and other e-readers become more commonplace, the likelihood that people would buy $7.00 books grows higher and higher. Why should the old gatekeepers have a monopoly on that?

27 December 2011

Neurosurgery at Washington University: Notes on Robert L. Grubb's Excellent History

Robert L. Grubb, Neurosurgery at Washington University: A Century of Excellence (The Washington University, 2011).

Robert L. Grubb's study of the development of neurosurgery at Washington University is a truly excellent work. As I read it, I took fairly extensive notes, which may well interest regular readers. Where appropriate, I provide a bit of context. Those of you who are familiar with my interest in the historical work of Paul Forman and the advent of modern neuroscience will note that I have emphasized certain themes. My comments below are in bold italics.

Reality Imitates Aphorisms from the Neuro-Reality-Check

This post by Neuroskeptic, entitled "scanning the brain while looking at scans", reminded me of this little quip from two weeks ago.

25 December 2011

Now altruism is bad!

Happy Holidays! Such claims to knowledge of the definition of altruism make me more than a little uneasy.

23 December 2011

Sociobiology's Challenge to Neuroskeptics


From E. O. Wilson (1975) Sociobiology: The New Synthesis:

Camus said that the only serious philosophical question is suicide. That is wrong even in the strict sense intended. The biologist, who is concerned with questions of physiology and evolutionary history, realizes that self-knowledge is constrained and shaped by the emotional control centers in the hypothalamus and limbic system of the brain. These centers flood our consciousness with all the emotions - hate, love, guilt, fear, and others - that are consulted by ethical philosophers who wish to intuit the standards of good and evil. What, we are then compelled to ask, made the hypothalamus and limbic system? They evolved by natural selection. That simple biological statement must be pursued to explain ethics and ethical philosophers, if not epistemology and epistemologists, at all depths. Self-existence, or the suicide that terminates it, is not the central question of philosophy. The hypothalamic-limbic complex automatically denies such logical reductions by countering it with feelings of guilt and altruism. In this one way the philosopher's own emotional control centers are wiser than his solipsist consciousness, "knowing" in evolutionary time the individual organism counts for almost nothing.

22 December 2011

The Science of Mind-Reading

Further evidence that mind-reading may soon be possible. It would really help my research if people would read this article and then comment here about the medium-term implications of such research for politics, natural rights, and law. I keep imagining Philip K. Dick's Minority Report. Is it really that bad?

18 December 2011

Neurophilosophy sets off a great debate in blog comments

Mo Costandi (Neurophilosophy), over at The Guardian, describes an experiment that demonstrates (perhaps) that "Leaning to the left makes the Eiffel Tower seem smaller."

The really fascinating part, however, is the exchange in the comments section between Andrew D. Wilson and Rolf Zwaan. I also liked Neuroskeptic's comment:

A philosophical question for anti-representationalists: Do you think any entity could have representations? Could we build a robot with representations? Or could we discover aliens with representations? If not, it seems that you are saying that representations are just impossible a priori. Which is fine, but then it's not an empirical question, it's a philosophical one. On the other hand, if you think that having representations is possible, but it's just that humans don't as a matter of fact have them - how would we tell the difference between ourselves, and an alien or robot who did have representations? What would they be able to do that we couldn't?
That's quite smart, isn't it?

Darwin’s Other Bulldog: Charles Kingsley and the Popularisation of Evolution in Victorian England

Charles Kingsley (1819-1875)
Piers Hale, who blogs at Political Descent, has recently published a fascinating article (PDF here) on "Darwin's Other Bulldog." For those of you who enjoyed The Origin of Species, Voyage of the Beagle, or Descent of Man, Hale's article will provide you with much to contemplate. Here's a teaser:
Kingsley’s efforts to promote Darwin at Cambridge were not confined to rabble-rousing among the undergraduates, however, and he also took pains to foster an environment of intellectual inquiry regarding both the scientific evidence for and the theological implications of evolution. He debated with those he found incredulous of Darwin’s theory, finding, as he later wrote to Darwin, that those who opposed Darwinism most vocally were those who knew the least, including his friends at College, the Lowndean Chair of Astronomy and Geometry, John Couch Adams and Arthur Cayley, who would, from 1863, become the first Sadleirian Professor—both were eminent mathematician-astronomers.  
Hale, moreover, usefully draws our attention to "the hazy nature of the distinction historians have drawn between men of science and popularisers of science in the early years of this period, and of its gradual hardening by the end of Kingsley’s life." He adds:
In the 1860s it was clearly as non-controversial for Kingsley to be instrumental in the foundation of the ‘Thorough Club’ as it was for Sam Wilberforce to be an active participant in the British Association meetings, or indeed, for Darwin to urge Kingsley on to write up his own big book.
It is precisely that hazy nature which defined so much of Victorian psychology, psychiatry, physiology, and neurology as well. However, it is also worth being mindful that the distinction is only apparent to us now. Perhaps the most intriguing feature of knowledge in the 19th century was that it had not yet become so professionalized that the argots of science made the communication of science impossible. In that respect, at least, it was a Golden Age. The opportunity to contemplate this issue is one of the best reasons to read Hale's latest essay.

17 December 2011

A Brilliant Blog: The Dispersal of Darwin

The Neuro Times would like to give two thumbs up to Michael D. Barton's blog The Dispersal of Darwin. He also has a nice piece on history-of-science blogging coming out in Endeavour. Especially good is his recent post: "What are the most common misconceptions about evolution?"

My own favorite misconception:

Critiquing strands of evolutionary thinking as bad science equals critiquing the whole theory of evolution as bad science.

Many who have doubts or reservations about the extension of adaptation arguments to human behavior are arch-defenders of evolutionary science. We merely ask for a healthy amount of skepticism about inferences drawn from scientific findings that seek to naturalize capitalism, religion, Marxism, altruism, love, and so on. Let's not be historically or culturally naive.

16 December 2011

Blogging as Academic Tenure's Medusa: Kate Clancy Tells It Like It Is!


Caravaggio's Medusa (1597?)

It is stupid that we call blogging, blogging. It is dumb that Twitter is called Twitter. And, let's be frank, Facebook and Google sound like something a 13-year-old would care about. Academics could re-purpose those words, à la Orwell, with a kind of "Newspeak" for these activities. I suppose we could call blogging "Inquisitive-ing" and twittering "Tweeding": you get the idea. But no matter what we call them, these activities have great advantages and are meaningful.

Most of us who do these things don't aspire to become public intellectuals; we aspire to have conversations with our peers. We don't believe we will become Paul Krugman or Andrew Sullivan, but we may aspire to model their activities, because they do so well what most of us love about this work. Krugman and Sullivan show us curiosity in action. That is what most of us are doing when we blog and tweet, and that is what many of us feel the need to defend when our colleagues ask: do you really have time for that? It is in that spirit that Kate Clancy takes up the issue, and there are many of us who can relate to her excellent points:

I submitted the first round of my materials for my third year review recently. The third year review is the half-way point between one’s hire as a tenure-track professor and going up for tenure. You can be fired at this point. But the most common outcome is that you get a strongly worded letter from the college detailing what you’ve done and what will be necessary from here on out if you want tenure. If you then don’t do as they say (get a grant, increase the number of publications, improve your teaching) then they have the grounds to deny you tenure.


If you’re in a supportive department as I am, then your third year committee’s job is to make the best possible case for you for when your case goes before the college. They pore over your curriculum vitae (academic resume), your papers, your teaching evaluations and research program. They observe your teaching, read your grants, and try to figure out how to articulate just how important you are to the department.

However, the job of an academic, and our expectations, are largely increasing. More papers are expected, more grants, even while teaching and service loads are increasing. And what it means to be an academic is changing. More online instruction actually means that teaching is more time-intensive – it takes a lot longer to build a week of good online material than it does to write a few lectures. Students no longer wait to talk to you after class, they email you at all hours – and will resend their email repeatedly if you don’t answer within 8-12 hours. Being slightly removed from our students but supposedly available 24 hours a day makes for a demoralizing, full inbox each and every morning.

But there are many wonderful things about how our jobs are changing, too. As depressed as end of semester emails make me, I am thrilled the other fourteen weeks of the semester because I can identify the ways in which my students have grasped basic skills and concepts better in my current blended teaching style, compared to the passive lectures they once received. For some academics, blogs and social media serve as both public outreach and scholarly work; for others they are an important place to give and receive mentorship. And the shrinking of many PhD programs mean undergraduate research experiences are on the rise as some of us look for other students to mentor, and I find these experiences especially rewarding.

Aphorisms from the Neuro-Reality-Check

If my brain feels, then why is it insensitive?


13 December 2011

Aphorisms from the Neuro-Reality-Check

fMRI shows the area of your brain that is in a state of suspended disbelief when viewing fMRI images.

12 December 2011

Neuro-Reality-Check: Brief Replies to Contentious Claims (Part III)

Third Assertion: Neuroscientists are not involved or complicit in a vast neo-liberal conspiracy.

Of course they are not. Who is? It would be more accurate to say that neuroscientific knowledge, like most systems of knowledge, finds ready disciples in fields as far from its domain as management, marketing, and political science.

Obviously, neuroscientists are not responsible for the economic context of Western economies. They are perhaps responsible for sometimes naturalizing economic and political claims into the material stuff of the nervous system. Or, alternatively, they are responsible for not dismissing such claims as outlandish and as falling beyond the boundaries of knowledge that neuroscience can produce. But that is not the same thing as being at the forefront of some conspiracy – the very idea of which is absurd.

The issue, however, is that some evolutionary psychologists, cognitive neuroscientists, neuropsychiatrists, and neurologists (as well as their agents, publishers, and producers) have been ready to trade on utopian or dystopian visions about the promise and potential of brain science. At the very least, to paraphrase Steve Fuller, these claims are so naïve as to require some sort of caveat from the neuroscientific community. And if not from them, then from people like those attending the Neuro-Reality-Check. But don't ignore the neo-liberal underpinnings of neuroscience either, or the similar desire by neo-liberals to use the vanguard science for specific economic ends.

Consider, for example, the claims recently made by Matthew E. May. Calling change "never easy", May describes how managers can use recent neuroscience to create better, more productive employees and organizations. As May spells out the problem:
…at some point we all run into the issue of creating or managing change: markets change, customer requirements change, competitors sneak up on us. Given the pace of change today, we’ll soon find ourselves at the back of the pack if we can’t establish and maintain a position of primacy. Doing just that drops us on the doorstep of leadership. That’s where neuroscience can help leaders, though. It can help business people lead and influence … “mindful change,” meaning change that “takes into account the physiological nature of the brain, and the ways in which it predisposes people to resist some forms of leadership, and accept others.”
According to May, “Thanks to neuroscientific discoveries, we can now safely make several conclusions about human behavior change that just a few years ago would have been labeled incorrect.”

"Nature" applauds bloggers

Writers at The Neuro Times would like to thank Nature for recognizing the important function that blogs can serve in critiquing both science and the media.


But science has a way to respond that others do not. Through online forums, blogs and Twitter, a cottage industry has grown up around instant criticism of dodgy scientific claims and dubious findings. This parallel journalism is increasingly coming to the attention of the mainstream press — as demonstrated by the rising number of stories in the press that were first broken by blogs. It may seem thankless at times, but the army of online commentators who point out the errors, the inconsistencies and the confounding factors, and from time to time just scream 'bullshit', have the power to hold the press to account. This ongoing war of attrition against those who would put their own agendas above the facts cannot take away their platform, but it can chip away at something they prize even more: their relevance, and with it their pernicious influence.

Aphorisms from the Neuro-Reality-Check


If you think you are your brain, get food poisoning.

11 December 2011

Neuro-Reality-Check: Brief Replies to Contentious Claims (Part II)


Second Assertion: Neuroscience is a young science.

Francis O. Schmitt
(1903-1995)
Empirically there is little wrong with this assertion. If the origins of Francis Otto Schmitt’s Neurosciences Research Program can be taken as an index, then the domain of neuroscience and a community of scientists and physicians identified with neuroscience work took shape sometime after 1940; as the festschrift of Schmitt’s life puts it, events in the years 1966 and 1967 “served to crystallize the new field, the neurosciences” (Worden et al. 1975, p. xx). Insofar as this discussion of beginnings matters, it is empirically clear that neuroscience is of recent origins. To speak of neuroscience in the nineteenth century or earlier is to engage in a retrospective reconstruction.

However, this assertion of neuroscience’s comparative youth cannot be treated as wholly innocent. The words ‘youth’, ‘adolescence’, and ‘immaturity’, and the other synonyms that neuroscientists often deploy when speaking about new techniques, areas of work, divisions of labor, or the whole domain, imply that neuroscientists have some right to be naïve in their scientific methods and interpretations.

“Metaphors of growth”, as Roger Cooter has termed teleological language of this ilk (Cooter 1993), when applied to specialties of medicine or disciplines of science, speak to rhetorical or ideological assumptions about the way progress occurs in science. When used by scientists, physicians, policy makers, and university and hospital administrators, such organic metaphors are usually offered as an excuse for imprudent claims and over-reaching promises, or as a justification for further time, funding, or infrastructure. The view is that eventually neuroscience will ‘flower’ into a mature area of expertise.

Most typically, neuroscientists appeal to ‘youthfulness’ when discussing scientific conclusions or the applications of neuroscience beyond the laboratory. In other words, ‘immaturity’ is really ‘impetuosity’, and therefore it is no one’s fault that the young science’s practitioners sometimes reach rather deeply and naïvely into such terrains as philosophy, ontology, and history.


Yet this claim of youth is also somewhat curious when counterbalanced by alternative claims to a long history. With only a little effort, a long history for neuroscience can be reconstructed, either from the internet or in the historiography. Thus do I see, for example, on my own bookshelf a volume subtitled “Essays in eighteenth-century neuroscience.” The history of neuroscience in this literature runs from Aristotle, Plato, and Hippocrates, through Galen, Da Vinci, and Vesalius, to our modern day. Thus may we detect an alternative set of claims to neuroscience’s impetuous youth – its great wisdom derives from its many Ages.

The sleight of hand at work in making both short and long claims for neuroscience’s history comes from the way deep debates in philosophy, theology, and the human sciences are appropriated by neuroscience. Thus, while anyone can see that the nerves and brains to which, for example, Hobbes refers in Leviathan are most certainly not the nerves and brains of twenty-first-century neuroscience, Hobbes’s discussion of power, obedience, and authority is most certainly part of a longer conversation about political philosophy and critique which everyone, including neuroscientists, can claim as knowledge within the Western tradition and canon.
By collapsing together the short and the long history, neuroscientists who protest the youthfulness of their science can nevertheless claim legitimacy for their science’s answers to debates extant long before their science’s time. That the potency of those debates derives from conditions and eventualities far different from neuroscience’s discussions of nerves and brains is quite beside the point. It is in the transcendence of politics, philosophy, and history that many sciences establish their contemporary purchase.

The long tradition is invented to establish legitimacy. The short tradition is expedient for disclaiming responsibility. Such tricks have long been used in the history of specialization, discipline-formation, and profession-building. As a matter of making scientific communities, such strategies are invaluable. As a faithful record of history, they are inadequate. But as an excuse? When the object of the science’s claims is in part to know what is in our heads and minds, to tell us what in human history really matters, to show us that universal art is our brains, indeed to claim that humans are little more than their brains, it is an excuse that falls short.

10 December 2011

Neuro-Reality-Check: Brief Replies to Some Contentious Claims (Part I)

First Assertion: Historians can’t really talk about neuroscience because they are not neuroscientists.


It is tempting to reply to this specious claim that if historians can’t talk about neuroscience, then neuroscientists can’t talk about history, anthropology, sociology, political science, economics, or art. At the least, one would think that this rule can be stated reciprocally and maintained with the same righteous indignation as the assertion.

And let me tell you: the offensiveness of the neuroscientist’s position was felt as keenly as the offense we apparently gave by questioning neuroscience. Apparently, not only can neuroscientists tell humanists and social scientists what they may do; they can also talk about the humanities and social sciences all they want. Meanwhile, we are supposed to keep silent about neuroscientific claims to knowledge.

Let us reconstruct this thinking. Historians, so the logic of this argument goes, don’t possess any professional monopoly on history. After all, from the point of view of the neuroscientist, it is possible to be self-taught in biography, autobiography, French theory, and the like, and this knowledge is supposedly of the same order as that found in a dusty archive or through paleography, textual analysis, oral history, and prosopography. Notice that the whole understanding of historiography is absent here. The neuroscientist’s view is that history is social in its belonging to everyone, and “everyone”, of course, includes neuroscientists.