A Basis for Morality
Daniel J. Burdick
In humanity's quest to distinguish itself from the rest of the animal world, a claim to morality has long been one of its greatest arguments. Perhaps second only to reason as a distinguishing characteristic, the ability to exhibit moral behavior instills in humans a distinct sense of natural order. Yet when anthropocentric egoism is dropped and we recognize that we are not, in fact, much different from other animals, our distinctive moral behavior demands explanation. What compels the majority of humans to behave morally? Why do some not? How should we respond to behaviors that are either immoral or amoral? Do we possess an innate sense of morality, determined by our neurobiology? Or, to put it another (more provocative) way, is morality absolute?
To begin understanding the causes of moral behavior, we must understand what we mean by morality. First, it must be emphasized that the specifics of morality - whether a given instance of behavior is moral - are not in question here. Abortion may or may not be moral; that is a very different question. At issue here is the question of why we choose one behavior over another on grounds of morality, and how it is possible that we can consider behaviors in terms of "right" and "wrong." This brings forth an assumption implicit in morality. In judging the morality or immorality of a behavior, we accept the assertion that the behavior is chosen by the actor. Any behavior that is not chosen is considered amoral, outside the realm of morality. Thus, free will is seen as a necessary predicate of morality. This assumption will be considered more thoroughly later.
Morality itself may be considered in two parts: behavior that directly affects other individuals and behavior that is purely individual yet is still evaluated on a moral level. By far, the first category is the more frequently considered and is what is most commonly meant by moral behavior; this is the "be kind to others" category. While specifics of morality vary across cultures, and often across individuals, some basic trends appear universally. Benevolence, fairness, and honesty - in short, those traits that make a social existence possible - are consistently named when people in many different cultures are asked to describe moral behavior (1). This is the first reason to think that a basic morality may be innate in humans and not merely a product of cultural education, although it is not in itself proof of that possibility. With this evidence, it is just as reasonable to hypothesize that the basic similarities in social structure are sufficient to give rise to similarities in acceptable behavior as it is to hypothesize that ideas of acceptable behavior are innate.
Setting the original source of morality aside for the moment, we can effectively consider the cause of moral behavior in individuals. Psychological literature tends to focus on the emotional basis of morality. One of the most important affective factors in determining moral behavior is empathy, an emotional response triggered by the apprehension of another's emotional state and characterized by having feelings similar to what the other is feeling (2). This ability to perceive another individual's emotions is crucial to our ability to make moral decisions - that is, to make decisions that positively influence others. The neurobiology underlying empathy relies on both targeted and generalized control. Recognition of another's emotional state, a targeted mechanism in that it activates a specific region of the nervous system, appears to rely heavily on right cortical activity. As might be expected, the visual cortex plays a role in recognizing facial expression of emotion, but researchers have also proposed the involvement of right-hemisphere somatosensory cortices, the region of the brain responsible for interpreting sensory input from one's own body. Adolphs et al. (3) found that subjects with lesions in this region of the brain were less able to recognize emotions, and hypothesized that to recognize an emotion in another individual, the brain simulates the response it would have if one's own face were exhibiting the features seen on the other person. To the extent that empathy drives moral behavior - or, in very basic terms, to the extent that we choose to act in ways that make others happy because we know how it feels - this somatosensory response may offer a partial neurobiological mechanism for moral behavior.
Emotional responses also rely on generalized mechanisms, those that have a broad-based effect on the nervous system. Empathy, as well as the negative motivators of guilt and shame, derive most directly from the balance of neurotransmitters. Empathy has been associated with increased levels of serotonin, an effect that can be seen in response to antidepressant medication and, even more dramatically, to the illicit drug Ecstasy (4). Similarly, guilt and shame are often accentuated in subjects found to have low levels of serotonin - those suffering from depression, for example (5). Both provide possible biological explanations of moral behavior.
Beyond the emotional aspect of morality, however, is a decision-making ability crucial to prosocial behavior. This points to a frontal lobe involvement. Indeed, the most famous case of antisocial behavior is that of Phineas Gage, a 19th-century railroad foreman who suffered severe frontal lobe damage as a result of physical trauma. An explosion caused a 3.5-foot iron rod to pass through the left ventromedial region of Gage's frontal lobe (6). As a result, the once social and placid Gage became impatient, unrestrained, and inconsiderate. He began exhibiting behavior that was, if not immoral, certainly not acceptable. The interesting implication of Gage's experience is that, rather than being actively promoted, moral behavior may be the result of cortical inhibition of certain behaviors. This is an attractive proposition because it places morality alongside many other nervous system functions; much of the job of the motor cortex, for example, is to inhibit action. Seen this way, morality becomes almost routine in terms of neurological function.
To the extent that morality describes behavior towards others, a sense of self - and a sense of others' selves - is also necessary. Without an understanding of our individuality and the effect that our behaviors have on other individuals, moral judgement would be impossible (2). With this in mind, we may propose an additional purpose for the so-called I-function, that ability of the neocortex to create awareness, experience, and self. The I-function may allow us to make moral decisions by creating awareness of emotion and enabling us to predict and project emotions. This may well be a secondary evolutionary benefit of the I-function, but in a social species such as humans, it can well be imagined that this could have conferred a survival advantage; ostracism for immoral behavior could well have meant death for early humans.
This may work well to explain social morality, but what of behaviors that affect only the individual? As an example, why do some people consider sexual fantasies immoral? As long as no action is taken, thoughts can have no impact on others, yet in the Western cultural tradition, impure thoughts are a significant moral issue. This is probably best described by a cognitive approach. A few schools of thought exist to explain the development of morals. Among cognitive psychologists, Lawrence Kohlberg's model, developed in the 1950s, is still the basis for understanding. In this scheme, moral development is described as a progression through three levels, each with two stages (1). Justification of moral behavior is seen to move from self-interest (punishment and reward) through social approval (being liked by others and following a prescribed social order) to abstract ideals (obligation of proper behavior and universal rights). This model, though, has been criticized for failing to predict people's actual behavior, despite its usefulness in assessing intellectual development (1).
Of course, even if it were entirely successful in predicting behavior, there is nothing magical about this cognitive theory of morality. In fact, this fits well with our current understanding of the function of the nervous system and brings morality further into the realm of behavior explicable from a biological standpoint. What Kohlberg describes as moral development is distinct from moral behavior itself. The explanation that an individual provides for why he won't perform some immoral act surely does change with time and maturity, but these reasons are self-reported. That is, it is the I-function - the conscious part of the nervous system - that offers the explanation. If, as has been proposed (7), the principal role of the I-function is to build models to understand the rest of the nervous system, then it stands to reason, and common experience confirms, that the model may change with new experiences. This does not, however, imply that the actual reason for moral behavior changes, only that our understanding of it does. This may very well mean that, if the understanding changes to incorporate abstract ideals, certain entirely private behaviors will be considered immoral. The underlying cause of moral behavior, though, may still be innate and unconsciously biological at its core.
It has also been proposed that morality is a learned behavior. Children, this model suggests, gain an understanding of right and wrong through "observation, imitation, and reward" (1). Moral behavior, in this case, depends on external instruction and the presence of a reward. It is without a doubt true that specific elements of a moral code are learned; it has never been the contention that all of morality is innate, only that the basic aspects of it are. That morality may be the result of a simple external-stimulus learning process raises some intriguing implications, though. In general, it means that moral behavior may not be as special as we think it is. It has been demonstrated that a simple algorithm can cause a computer to learn to distinguish between two categories (8). It is a small jump indeed, then, to say that a computer could be taught to distinguish between moral and immoral, given sufficient feedback. At best, this means moral behavior is merely a result of conditioning; at worst, it means free will should not be an issue in assessing whether an act is moral: if a computer, which does not have free will, can be taught to behave "morally," why should we make free will a condition for determining morality?
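The kind of feedback-driven category learning invoked here can be sketched very compactly. The following is a minimal illustrative example (not the applet cited in (8)) of a perceptron: it starts with no knowledge of the two categories and learns to separate them purely from being "punished" when it classifies an example incorrectly. The data and labels are toy values chosen for the sketch.

```python
# A minimal perceptron: learns to separate two categories purely from
# corrective feedback, with no built-in knowledge of what they "mean".
# Illustrative sketch only; not the specific network cited in (8).

def train(examples, labels, epochs=20, lr=0.1):
    """examples: list of feature tuples; labels: +1 or -1 for the two categories."""
    w = [0.0] * len(examples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(examples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if pred != y:  # feedback: adjust weights only when "punished"
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def classify(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

# Toy data: two linearly separable clusters standing in for the two categories.
examples = [(1.0, 1.0), (0.9, 1.2), (-1.0, -1.0), (-1.1, -0.8)]
labels = [1, 1, -1, -1]
w, b = train(examples, labels)
print([classify(w, b, x) for x in examples])  # [1, 1, -1, -1]
```

Nothing in the algorithm knows what the categories represent; "moral" and "immoral" would simply be labels attached to the feedback, which is exactly the point the paragraph above makes about conditioning.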
Behavior, though, is almost always a combination of innate properties and conditioning. If morality, then, is more than conditioning, what evidence do we have that it is innate? First, infants appear to exhibit empathic abilities, responding with tears when someone close is upset and with sounds of pleasure when someone close is happy (1). Again, empathy's role in moral behavior may be a secondary evolutionary development; the ability to perceive another's emotions is a useful survival tool, since the other individual may be aware of some threat that you aren't. This is primarily speculation, but it does offer a plausible justification for having empathy - and the moral awareness that stems from empathy - inborn. The other emotions associated with morality - shame, guilt, and indignation - also appear early in a child's development (1).
Indignation is of particular interest in considering a biological basis of morality. William Damon, in his article on childhood moral development, writes, "...young children can be outraged by the violation of social expectations, such as a breach in the rules of a favorite game...." He goes on to note that this sort of response appears in every culture (1). This points to two salient features of moral behavior. First, it demonstrates that morality is again consistent with our understanding of the function of the nervous system. The nervous system is constantly evaluating input and comparing it to expectations; generally, most input is ignored by the I-function, but when input doesn't match expectations, the nervous system alerts the conscious and some sort of response is triggered. It has been theorized that this is the root of discomfort and pain, and even of phantom-limb syndrome (7). Here, now, we see that moral indignation falls conveniently into that pattern.
This, in turn, suggests the second salient feature of moral behavior: that morality is a method of ordering the world outside the nervous system. One of the major functions of the nervous system in general and the I-function in particular is to build useful predictive models of the world and our response to it. Being able to predict events is clearly advantageous, in a social context as well as in a natural context - again, evidence and justification that a sense of moral order is innate.
Even evolutionary biology weighs in on the issue. Animals can display what we might call moral behavior, although the term in biology is "animal altruism." The fish that swim toward a predator, the birds that sound a warning call, the whales that help an injured member of the pod, all display behavior that is apparently beneficial to others but costly to themselves. Biologists explain this either in terms of game theory, in which it can be shown that cooperation leads to a higher total payoff in the long run than selfishness, or kin selection, in which the majority of the recipients are related to the altruist and thus pass along some part of the altruist's genome (9). Most humans would probably like to draw a distinction between the altruistic behavior displayed by animals and moral behavior displayed by humans, and certainly, the same sort of decision-making does not go into animal behavior as goes into human behavior. Ultimately, though, we are subject to the same evolutionary pressures, and it seems reasonable that human moral behavior is closely related to animal altruism. We may not think that we make moral decisions so as to receive a favor in kind, but the impulse to behave morally may well have evolved as a result of that type of pressure. After all, daily human behavior is rife with motivations that we don't actually understand.
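The game-theoretic claim above - that cooperation yields a higher total payoff than selfishness in the long run - can be made concrete with the iterated prisoner's dilemma. The sketch below uses the standard textbook payoff values (temptation 5, reward 3, punishment 1, sucker 0), which are illustrative choices, not numbers from the cited source, and compares mutual cooperation (via the tit-for-tat strategy) with mutual defection.

```python
# Iterated prisoner's dilemma with the standard textbook payoffs
# (T=5, R=3, P=1, S=0). Payoff values are illustrative assumptions.

PAYOFF = {  # (my move, their move) -> my score; 'C' cooperate, 'D' defect
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def play(strategy_a, strategy_b, rounds=100):
    """Total scores for two strategies over repeated rounds.
    A strategy maps the opponent's previous move (or None) to 'C'/'D'."""
    score_a = score_b = 0
    last_a = last_b = None
    for _ in range(rounds):
        move_a, move_b = strategy_a(last_b), strategy_b(last_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        last_a, last_b = move_a, move_b
    return score_a, score_b

always_defect = lambda prev: 'D'
tit_for_tat = lambda prev: 'C' if prev in (None, 'C') else 'D'

# Mutual cooperation outscores mutual defection over repeated interaction:
print(sum(play(tit_for_tat, tit_for_tat)))      # 600 (300 each)
print(sum(play(always_defect, always_defect)))  # 200 (100 each)
```

Over a single round defection is individually tempting, but over repeated interactions the cooperative population accumulates three times the payoff - the pressure under which, as the paragraph suggests, an impulse toward altruistic behavior could evolve.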
The observation that infants, animals, and computers can all display what looks like moral behavior forces us to revisit the question of whether free will is a necessary condition of moral judgement. This argument is not that free will does not exist; for a demonstration that we can choose behaviors, see Grobstein (10). Rather, it's that free will is not a sufficient criterion to distinguish between moral/immoral and amoral behaviors. Furthermore, it is impossible to determine by observation whether free will is being exercised - or in other terms, whether the I-function is being involved - even in other humans. It seems, then, that all behavior must be categorized either as moral/immoral or as amoral, outside the bounds of morality. This conclusion has profound implications for, as an example, the punishment of apparently immoral behavior by actors who cannot be determined to be capable of moral "decisions" - that is, the youthful or the criminally insane.
That humans, and possibly non-human animals, have an innate sense of morality seems clear. At the very least, the rudiments of the emotions and social understanding that underlie morality are inborn, being displayed from an early age and across cultural divisions. Moreover, morality is easily understood in the same terms as the rest of neurobiology: the attempt by the I-function to form useful models of the external world, the comparison of expectations to input, even the evolutionary advantage (both directly and indirectly through social interaction) that morality confers. This says nothing about whether morality is absolute, however. The distinction between the propensity for moral behavior and a codified morality must be well understood here. Specific behaviors are ultimately judged to be moral or immoral as a result of personal and cultural conditioning. Using this approach, then, it cannot be said that morality is absolute. Even more generally, it cannot be said that there exists a universal moral code, for that implies a morality inherent in nature. Even if we were to accept as absolute the propensity for altruistic morality, this cannot be extended to natural law. Indeed, it seems even less likely that morality exists throughout nature than that it exists throughout humanity. Life, as we all know, just is not fair.
1) William Damon, "The Moral Development of Children," Scientific American, August 1999.
2) Nancy Eisenberg, "Emotion, Regulation, and Moral Development," Annual Review of Psychology 51 (2000): 665-697.
3) R. Adolphs, H. Damasio, D. Tranel, G. Cooper, and A.R. Damasio, "A Role for Somatosensory Cortices in the Visual Recognition of Emotion as Revealed by Three-Dimensional Lesion Mapping," Journal of Neuroscience 20(7) (2000): 2683-2690.
4) Matthew Klam, "Experiencing Ecstasy," The New York Times Magazine, January 21, 2001.
5) "Depression," National Institute of Mental Health.
6) Julia Johnson, "No Longer Gage: A Glimpse into Sociability, Temperament, and the Brain," 1998.
7) Paul Grobstein, "Notes for Biology 202: Neurobiology and Behavior," 2001.
8) Paul Grobstein, "Simple Networks, Simple Rules: Learning and Creating Categories" (applet by Bogdan Butoi), 1998.
9) Joan Strassmann, "Reciprocal Altruism: Cooperation Among Animals," 2000.
10) Paul Grobstein, "Free Will?" 2000.