Locating the Moral Brain

 
One consequence of the Enlightenment is that human beings have become a subject of scientific scrutiny.  Another consequence is that the sciences are regarded as hierarchically arranged.  Officially, the hierarchy is mereological.  We move from the tiny particles of physics, up to chemistry and biology, then to psychology and on to economics, and so on.  But the hierarchy is also value-laden.  Some sciences are said to be hard, others soft, where degree of softness is correlated with kinship to the humanities.  Thus, there is an upward ascent from physics to sociology and cultural anthropology, which may be one small step away from poetry.  Physics is more respectable on this scheme.  Thus, in deciding how to study the mind, it is tempting to say we should reduce mentality to harder sciences, rather than ascending to softer ones.  Of course, it’s hard to shed any light on mentality by doing physics, though some have tried (witness the work on quantum approaches to consciousness and free will).  More promising is the reduction of mentality to biology.  Evolutionary approaches fit into this category, as do the ever-more-popular neurobiological approaches.

The effort to explain mind in terms of brain is premised on some remarkable successes.  We know, for example, that the brain is the physical structure that allows us to perceive, store memories, reason, and feel.  There is exquisitely detailed work linking neural processes to memory encoding, visual object recognition, attention, and the like.  In some sense, the mind just is the brain.  But this is a bit like saying a painting is just some pigmented oil on a canvas.  To study paintings, we should certainly look at painted canvases, but we must also look at the factors that lead to the creation of those canvases: the intentions of the artist and the social context.  Studying decontextualized brains is just as empty as studying decontextualized art.

To illustrate, consider morality.  Huge efforts have gone into studying what goes on in the brain when people make moral decisions.  For example, over the last decade, a couple dozen papers have been published with evidence linking moral judgment to emotions.  Such inferences require a lot of steps, but the basic methodology looks like this.  Put people in a scanner and ask them to make a moral judgment (for example, give people moral dilemmas to adjudicate or ask them whether a sentence expresses something that is permissible or wrong).  Then average the blood oxygen levels across these people and across the moral questions and compare those averages to blood oxygen levels during a “non-moral” comparison class.  The difference indicates which brain areas are metabolizing more during moral thought as compared to other kinds of thought.  These activations are then interpreted by considering what other tasks these brain areas show up in.  It turns out that the areas that distinguish moral cognition include areas that also show up when researchers compare emotional states to less-emotional states.  This suggests that morality involves the emotions.  (The converse inference, that emotions involve moral judgment, is also possible, but less likely, since studies that induce emotions often use methods that have no obvious moral content.)
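
In computational terms, the contrast just described amounts to a subtraction followed by a significance test.  The sketch below (in Python, using simulated data and hypothetical array dimensions, not any actual study’s analysis pipeline) illustrates only that averaging-and-comparison step.

```python
import numpy as np
from scipy import stats

# Hypothetical BOLD responses: (subjects, trials, voxels) per condition.
# In a real study these would come from preprocessed fMRI time series.
rng = np.random.default_rng(0)
moral = rng.normal(loc=0.1, scale=1.0, size=(20, 40, 5000))     # moral-judgment trials
nonmoral = rng.normal(loc=0.0, scale=1.0, size=(20, 40, 5000))  # "non-moral" comparison trials

# Average across trials, then across subjects, for each voxel.
moral_mean = moral.mean(axis=1).mean(axis=0)
nonmoral_mean = nonmoral.mean(axis=1).mean(axis=0)

# The contrast: voxels metabolizing more during moral thought than during the comparison task.
contrast = moral_mean - nonmoral_mean

# A paired t-test across subjects flags voxels where the difference is reliable.
t, p = stats.ttest_rel(moral.mean(axis=1), nonmoral.mean(axis=1), axis=0)
active_voxels = np.where(p < 0.001)[0]  # crude threshold; real pipelines correct for multiple comparisons
print(f"{active_voxels.size} voxels more active during moral judgment (uncorrected)")
```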

This bit of brain science is of limited value on its own.  We need to confirm that the emotions we observe are actually playing a causal role in moral decision-making, and that can be established only through behavioral psychology.  For example, studies show that we can affect people’s moral judgments by inducing (or reducing) emotions.

But there is a less obvious limitation as well.  Just as we need psychology to establish what moral emotions do, we need other branches of science to figure out where those emotions come from.  Evolutionary biology, a relatively hard science, is sometimes brought to bear on this task, but one strikingly obvious fact is that morals vary across cultural contexts.  We don’t need to leave our communities to see that liberals and conservatives have different values, that there is variation between the affluent and the poor, and that there is even variation between women and men (in part because women are more liberal and less wealthy than their male counterparts).  If we look across time and space, the differences are even more pronounced.

Consider, for example, the blood sports of ancient Rome.  What could explain the Roman tolerance for killing-as-entertainment?  Is it a natural blood-lust inscribed in the brain?  Perhaps biology has something to do with it, but, given cultural variation, it’s more illuminating to conjecture that Rome was expanding its empire through conquest and intent on cultivating the “virtue” of fearlessness in the face of death.  What explains the prevalence of polygyny outside the Western world?  Does it result from an exclusively male desire to have multiple sex partners, or is it rather a consequence of institutionalized male dominance?  The latter explanation is preferable, given that sexual arrangements vary cross-culturally, that gender boundaries are culturally fluid, and that male and female chimps are equally promiscuous.  What about distributive justice?  Is the brain wired to be fair?  Perhaps, but conceptions of fairness vary cross-culturally, with people in capitalist countries favoring distribution by “merit,” socialists preferring equal distribution, and people in impoverished countries preferring distribution by need.  Once we move beyond empty abstractions, such as fairness, we find spectacular variation, which can only be explained by factors such as demography, ecology, and, most importantly, social history.  Brain science is not going to explain why the values of American liberals and conservatives differ.  For that, we must understand sociological variables, such as urbanization and ethnicity, as well as historical events, such as the two great depressions that spawned liberal allegiance to welfarism, and the Cold War that infused otherwise libertarian conservatism with hawkish interventionism.  Morals are historical artifacts.

The upshot is that we cannot fully understand moral values by localizing morality in the brain.  Rather, we need to locate the brain.  We need to realize that the brain is situated in cultural settings, which have been historically shaped, and that an upward ascent, from hard science to soft, is required to fully explain what our values are and how they got that way.

Does this mean we should stop studying the moral brain?  Decidedly not.  Culture does not funnel into a blank slate.  Neuroscience can help us understand the psychological mechanisms by which norms are acquired, the way we compute costs and benefits, the emotions that ground values, and the separate steps that lead to moral judgments.  With a combination of neuroscience and psychology, we have learned that moral payoffs are computed algorithmically, that sexual norms and harm norms involve different emotions, and that attribution of intentions matters more for some norms than others.  Neuroscience has led to some surprises.  For example, Tania Singer found that women are less likely than men to show “reward activation” when wrongdoers are punished.  This gender difference is likely a result of social experience, but it was discovered in a brain scan.  Thus, neuroscience can guide social research.  But we need more of the converse: social inquiry guiding neuroscience.  For example, rather than localizing the moral brain, neuroscientists could study variation across moral brains, looking for differences across groups, and the mechanisms that allow such variation.

 

Jesse Prinz is Distinguished Professor of Philosophy at the Graduate Center, City University of New York.