
The Moral Instinct (By Steven Pinker)


NYTimes - January 13, 2008




Which of the following people would you say is the most admirable: Mother Teresa, Bill Gates or Norman Borlaug? And which do you think is the least admirable? For most people, it’s an easy question. Mother Teresa, famous for ministering to the poor in Calcutta, has been beatified by the Vatican, awarded the Nobel Peace Prize and ranked in an American poll as the most admired person of the 20th century. Bill Gates, infamous for giving us the Microsoft dancing paper clip and the blue screen of death, has been decapitated in effigy in “I Hate Gates” Web sites and hit with a pie in the face. As for Norman Borlaug . . . who the heck is Norman Borlaug?




Yet a deeper look might lead you to rethink your answers. Borlaug, father of the “Green Revolution” that used agricultural science to reduce world hunger, has been credited with saving a billion lives, more than anyone else in history. Gates, in deciding what to do with his fortune, crunched the numbers and determined that he could alleviate the most misery by fighting everyday scourges in the developing world like malaria, diarrhea and parasites. Mother Teresa, for her part, extolled the virtue of suffering and ran her well-financed missions accordingly: their sick patrons were offered plenty of prayer but harsh conditions, few analgesics and dangerously primitive medical care.



It’s not hard to see why the moral reputations of this trio should be so out of line with the good they have done. Mother Teresa was the very embodiment of saintliness: white-clad, sad-eyed, ascetic and often photographed with the wretched of the earth. Gates is a nerd’s nerd and the world’s richest man, as likely to enter heaven as the proverbial camel squeezing through the needle’s eye. And Borlaug, now 93, is an agronomist who has spent his life in labs and nonprofits, seldom walking onto the media stage, and hence into our consciousness, at all.



I doubt these examples will persuade anyone to favor Bill Gates over Mother Teresa for sainthood. But they show that our heads can be turned by an aura of sanctity, distracting us from a more objective reckoning of the actions that make people suffer or flourish. It seems we may all be vulnerable to moral illusions, the ethical equivalent of the bending lines that trick the eye on cereal boxes and in psychology textbooks. Illusions are a favorite tool of perception scientists for exposing the workings of the five senses, and of philosophers for shaking people out of the naïve belief that our minds give us a transparent window onto the world (since if our eyes can be fooled by an illusion, why should we trust them at other times?). Today, a new field is using illusions to unmask a sixth sense, the moral sense. Moral intuitions are being drawn out of people in the lab, on Web sites and in brain scanners, and are being explained with tools from game theory, neuroscience and evolutionary biology.




“Two things fill the mind with ever new and increasing admiration and awe, the oftener and more steadily we reflect on them,” wrote Immanuel Kant, “the starry heavens above and the moral law within.” These days, the moral law within is being viewed with increasing awe, if not always admiration. The human moral sense turns out to be an organ of considerable complexity, with quirks that reflect its evolutionary history and its neurobiological foundations.

These quirks are bound to have implications for the human predicament. Morality is not just any old topic in psychology but close to our conception of the meaning of life. Moral goodness is what gives each of us the sense that we are worthy human beings. We seek it in our friends and mates, nurture it in our children, advance it in our politics and justify it with our religions. A disrespect for morality is blamed for everyday sins and history’s worst atrocities. To carry this weight, the concept of morality would have to be bigger than any of us and outside all of us.

So dissecting moral intuitions is no small matter. If morality is a mere trick of the brain, some may fear, our very grounds for being moral could be eroded. Yet as we shall see, the science of the moral sense can instead be seen as a way to strengthen those grounds, by clarifying what morality is and how it should steer our actions.

The Moralization Switch


The starting point for appreciating that there is a distinctive part of our psychology for morality is seeing how moral judgments differ from other kinds of opinions we have on how people ought to behave. Moralization is a psychological state that can be turned on and off like a switch, and when it is on, a distinctive mind-set commandeers our thinking. This is the mind-set that makes us deem actions immoral (“killing is wrong”), rather than merely disagreeable (“I hate brussels sprouts”), unfashionable (“bell-bottoms are out”) or imprudent (“don’t scratch mosquito bites”).


The first hallmark of moralization is that the rules it invokes are felt to be universal. Prohibitions of rape and murder, for example, are felt not to be matters of local custom but to be universally and objectively warranted. One can easily say, “I don’t like brussels sprouts, but I don’t care if you eat them,” but no one would say, “I don’t like killing, but I don’t care if you murder someone.”



The other hallmark is that people feel that those who commit immoral acts deserve to be punished. Not only is it allowable to inflict pain on a person who has broken a moral rule; it is wrong not to, to “let them get away with it.” People are thus untroubled in inviting divine retribution or the power of the state to harm other people they deem immoral. Bertrand Russell wrote, “The infliction of cruelty with a good conscience is a delight to moralists — that is why they invented hell.”




We all know what it feels like when the moralization switch flips inside us — the righteous glow, the burning dudgeon, the drive to recruit others to the cause. The psychologist Paul Rozin has studied the toggle switch by comparing two kinds of people who engage in the same behavior but with different switch settings. Health vegetarians avoid meat for practical reasons, like lowering cholesterol and avoiding toxins. Moral vegetarians avoid meat for ethical reasons: to avoid complicity in the suffering of animals. By investigating their feelings about meat-eating, Rozin showed that the moral motive sets off a cascade of opinions. Moral vegetarians are more likely to treat meat as a contaminant — they refuse, for example, to eat a bowl of soup into which a drop of beef broth has fallen. They are more likely to think that other people ought to be vegetarians, and are more likely to imbue their dietary habits with other virtues, like believing that meat avoidance makes people less aggressive and bestial.




Much of our recent social history, including the culture wars between liberals and conservatives, consists of the moralization or amoralization of particular kinds of behavior. Even when people agree that an outcome is desirable, they may disagree on whether it should be treated as a matter of preference and prudence or as a matter of sin and virtue. Rozin notes, for example, that smoking has lately been moralized. Until recently, it was understood that some people didn’t enjoy smoking or avoided it because it was hazardous to their health. But with the discovery of the harmful effects of secondhand smoke, smoking is now treated as immoral. Smokers are ostracized; images of people smoking are censored; and entities touched by smoke are felt to be contaminated (so hotels have not only nonsmoking rooms but nonsmoking floors). The desire for retribution has been visited on tobacco companies, who have been slapped with staggering “punitive damages.”



At the same time, many behaviors have been amoralized, switched from moral failings to lifestyle choices. They include divorce, illegitimacy, being a working mother, marijuana use and homosexuality. Many afflictions have been reassigned from payback for bad choices to unlucky misfortunes. There used to be people called “bums” and “tramps”; today they are “homeless.” Drug addiction is a “disease”; syphilis was rebranded from the price of wanton behavior to a “sexually transmitted disease” and more recently a “sexually transmitted infection.”




This wave of amoralization has led the cultural right to lament that morality itself is under assault, as we see in the group that anointed itself the Moral Majority. In fact there seems to be a Law of Conservation of Moralization, so that as old behaviors are taken out of the moralized column, new ones are added to it. Dozens of things that past generations treated as practical matters are now ethical battlegrounds, including disposable diapers, I.Q. tests, poultry farms, Barbie dolls and research on breast cancer. Food alone has become a minefield, with critics sermonizing about the size of sodas, the chemistry of fat, the freedom of chickens, the price of coffee beans, the species of fish and now the distance the food has traveled from farm to plate.




Many of these moralizations, like the assault on smoking, may be understood as practical tactics to reduce some recently identified harm. But whether an activity flips our mental switches to the “moral” setting isn’t just a matter of how much harm it does. We don’t show contempt to the man who fails to change the batteries in his smoke alarms or takes his family on a driving vacation, both of which multiply the risk they will die in an accident. Driving a gas-guzzling Hummer is reprehensible, but driving a gas-guzzling old Volvo is not; eating a Big Mac is unconscionable, but not imported cheese or crème brûlée. The reason for these double standards is obvious: people tend to align their moralization with their own lifestyles.




Reasoning and Rationalizing

It’s not just the content of our moral judgments that is often questionable, but the way we arrive at them. We like to think that when we have a conviction, there are good reasons that drove us to adopt it. That is why an older approach to moral psychology, led by Jean Piaget and Lawrence Kohlberg, tried to document the lines of reasoning that guided people to moral conclusions. But consider these situations, originally devised by the psychologist Jonathan Haidt:

Julie is traveling in France on summer vacation from college with her brother Mark. One night they decide that it would be interesting and fun if they tried making love. Julie was already taking birth-control pills, but Mark uses a condom, too, just to be safe. They both enjoy the sex but decide not to do it again. They keep the night as a special secret, which makes them feel closer to each other. What do you think about that — was it O.K. for them to make love?



A woman is cleaning out her closet and she finds her old American flag. She doesn’t want the flag anymore, so she cuts it up into pieces and uses the rags to clean her bathroom.

A family’s dog is killed by a car in front of their house. They heard that dog meat was delicious, so they cut up the dog’s body and cook it and eat it for dinner.

Most people immediately declare that these acts are wrong and then grope to justify why they are wrong. It’s not so easy. In the case of Julie and Mark, people raise the possibility of children with birth defects, but they are reminded that the couple were diligent about contraception. They suggest that the siblings will be emotionally hurt, but the story makes it clear that they weren’t. They submit that the act would offend the community, but then recall that it was kept a secret. Eventually many people admit, “I don’t know, I can’t explain it, I just know it’s wrong.” People don’t generally engage in moral reasoning, Haidt argues, but moral rationalization: they begin with the conclusion, coughed up by an unconscious emotion, and then work backward to a plausible justification.


The gap between people’s convictions and their justifications is also on display in the favorite new sandbox for moral psychologists, a thought experiment devised by the philosophers Philippa Foot and Judith Jarvis Thomson called the Trolley Problem. On your morning walk, you see a trolley car hurtling down the track, the conductor slumped over the controls. In the path of the trolley are five men working on the track, oblivious to the danger. You are standing at a fork in the track and can pull a lever that will divert the trolley onto a spur, saving the five men. Unfortunately, the trolley would then run over a single worker who is laboring on the spur. Is it permissible to throw the switch, killing one man to save five? Almost everyone says “yes.”



Consider now a different scene. You are on a bridge overlooking the tracks and have spotted the runaway trolley bearing down on the five workers. Now the only way to stop the trolley is to throw a heavy object in its path. And the only heavy object within reach is a fat man standing next to you. Should you throw the man off the bridge? Both dilemmas present you with the option of sacrificing one life to save five, and so, by the utilitarian standard of what would result in the greatest good for the greatest number, the two dilemmas are morally equivalent. But most people don’t see it that way: though they would pull the switch in the first dilemma, they would not heave the fat man in the second. When pressed for a reason, they can’t come up with anything coherent, though moral philosophers haven’t had an easy time coming up with a relevant difference, either.



When psychologists say “most people” they usually mean “most of the two dozen sophomores who filled out a questionnaire for beer money.” But in this case it means most of the 200,000 people from a hundred countries who shared their intuitions on a Web-based experiment conducted by the psychologists Fiery Cushman and Liane Young and the biologist Marc Hauser. A difference between the acceptability of switch-pulling and man-heaving, and an inability to justify the choice, was found in respondents from Europe, Asia and North and South America; among men and women, blacks and whites, teenagers and octogenarians, Hindus, Muslims, Buddhists, Christians, Jews and atheists; people with elementary-school educations and people with Ph.D.’s.

Joshua Greene, a philosopher and cognitive neuroscientist, suggests that evolution equipped people with a revulsion to manhandling an innocent person. This instinct, he suggests, tends to overwhelm any utilitarian calculus that would tot up the lives saved and lost. The impulse against roughing up a fellow human would explain other examples in which people abjure killing one to save many, like euthanizing a hospital patient to harvest his organs and save five dying patients in need of transplants, or throwing someone out of a crowded lifeboat to keep it afloat. By itself this would be no more than a plausible story, but Greene teamed up with the cognitive neuroscientist Jonathan Cohen and several Princeton colleagues to peer into people’s brains using functional M.R.I. They sought to find signs of a conflict between brain areas associated with emotion (the ones that recoil from harming someone) and areas dedicated to rational analysis (the ones that calculate lives lost and saved).


When people pondered the dilemmas that required killing someone with their bare hands, several networks in their brains lighted up. One, which included the medial (inward-facing) parts of the frontal lobes, has been implicated in emotions about other people. A second, the dorsolateral (upper and outer-facing) surface of the frontal lobes, has been implicated in ongoing mental computation (including nonmoral reasoning, like deciding whether to get somewhere by plane or train). And a third region, the anterior cingulate cortex (an evolutionarily ancient strip lying at the base of the inner surface of each cerebral hemisphere), registers a conflict between an urge coming from one part of the brain and an advisory coming from another.


But when the people were pondering a hands-off dilemma, like switching the trolley onto the spur with the single worker, the brain reacted differently: only the area involved in rational calculation stood out. Other studies have shown that neurological patients who have blunted emotions because of damage to the frontal lobes become utilitarians: they think it makes perfect sense to throw the fat man off the bridge. Together, the findings corroborate Greene’s theory that our nonutilitarian intuitions come from the victory of an emotional impulse over a cost-benefit analysis.



A Universal Morality?

The findings of trolleyology — complex, instinctive and worldwide moral intuitions — led Hauser and John Mikhail (a legal scholar) to revive an analogy from the philosopher John Rawls between the moral sense and language. According to Noam Chomsky, we are born with a “universal grammar” that forces us to analyze speech in terms of its grammatical structure, with no conscious awareness of the rules in play. By analogy, we are born with a universal moral grammar that forces us to analyze human action in terms of its moral structure, with just as little awareness.


The idea that the moral sense is an innate part of human nature is not far-fetched. A list of human universals collected by the anthropologist Donald E. Brown includes many moral concepts and emotions, including a distinction between right and wrong; empathy; fairness; admiration of generosity; rights and obligations; proscription of murder, rape and other forms of violence; redress of wrongs; sanctions for wrongs against the community; shame; and taboos.


The stirrings of morality emerge early in childhood. Toddlers spontaneously offer toys and help to others and try to comfort people they see in distress. And according to the psychologists Elliot Turiel and Judith Smetana, preschoolers have an inkling of the difference between societal conventions and moral principles. Four-year-olds say that it is not O.K. to wear pajamas to school (a convention) and also not O.K. to hit a little girl for no reason (a moral principle). But when asked whether these actions would be O.K. if the teacher allowed them, most of the children said that wearing pajamas would now be fine but that hitting a little girl would still not be.

Though no one has identified genes for morality, there is circumstantial evidence they exist. The character traits called “conscientiousness” and “agreeableness” are far more correlated in identical twins separated at birth (who share their genes but not their environment) than in adoptive siblings raised together (who share their environment but not their genes).

People given diagnoses of “antisocial personality disorder” or “psychopathy” show signs of morality blindness from the time they are children. They bully younger children, torture animals, habitually lie and seem incapable of empathy or remorse, often despite normal family backgrounds. Some of these children grow up into the monsters who bilk elderly people out of their savings, rape a succession of women or shoot convenience-store clerks lying on the floor during a robbery. Though psychopathy probably comes from a genetic predisposition, a milder version can be caused by damage to frontal regions of the brain (including the areas that inhibit intact people from throwing the hypothetical fat man off the bridge). The neuroscientists Hanna and Antonio Damasio and their colleagues found that some children who sustain severe injuries to their frontal lobes can grow up into callous and irresponsible adults, despite normal intelligence. They lie, steal, ignore punishment, endanger their own children and can’t think through even the simplest moral dilemmas, like what two people should do if they disagreed on which TV channel to watch or whether a man ought to steal a drug to save his dying wife.

The moral sense, then, may be rooted in the design of the normal human brain. Yet for all the awe that may fill our minds when we reflect on an innate moral law within, the idea is at best incomplete. Consider this moral dilemma: A runaway trolley is about to kill a schoolteacher. You can divert the trolley onto a sidetrack, but the trolley would trip a switch sending a signal to a class of 6-year-olds, giving them permission to name a teddy bear Muhammad. Is it permissible to pull the lever?

This is no joke. Last month a British woman teaching in a private school in Sudan allowed her class to name a teddy bear after the most popular boy in the class, who bore the name of the founder of Islam. She was jailed for blasphemy and threatened with a public flogging, while a mob outside the prison demanded her death. To the protesters, the woman’s life clearly had less value than maximizing the dignity of their religion, and their judgment on whether it is right to divert the hypothetical trolley would have differed from ours. Whatever grammar guides people’s moral judgments can’t be all that universal. Anyone who stayed awake through Anthropology 101 can offer many other examples.

Of course, languages vary, too. In Chomsky’s theory, languages conform to an abstract blueprint, like having phrases built out of verbs and objects, while the details vary, like whether the verb or the object comes first. Could we be wired with an abstract spec sheet that embraces all the strange ideas that people in different cultures moralize?


The Varieties of Moral Experience

When anthropologists like Richard Shweder and Alan Fiske survey moral concerns across the globe, they find that a few themes keep popping up from amid the diversity. People everywhere, at least in some circumstances and with certain other folks in mind, think it’s bad to harm others and good to help them. They have a sense of fairness: that one should reciprocate favors, reward benefactors and punish cheaters. They value loyalty to a group, sharing and solidarity among its members and conformity to its norms. They believe that it is right to defer to legitimate authorities and to respect people with high status. And they exalt purity, cleanliness and sanctity while loathing defilement, contamination and carnality.

The exact number of themes depends on whether you’re a lumper or a splitter, but Haidt counts five — harm, fairness, community (or group loyalty), authority and purity — and suggests that they are the primary colors of our moral sense. Not only do they keep reappearing in cross-cultural surveys, but each one tugs on the moral intuitions of people in our own culture. Haidt asks us to consider how much money someone would have to pay us to do hypothetical acts like the following:

Stick a pin into your palm.
Stick a pin into the palm of a child you don’t know. (Harm.)

Accept a wide-screen TV from a friend who received it at no charge because of a computer error.
Accept a wide-screen TV from a friend who received it from a thief who had stolen it from a wealthy family. (Fairness.)

Say something bad about your nation (which you don’t believe) on a talk-radio show in your nation.
Say something bad about your nation (which you don’t believe) on a talk-radio show in a foreign nation. (Community.)

Slap a friend in the face, with his permission, as part of a comedy skit.
Slap your minister in the face, with his permission, as part of a comedy skit. (Authority.)

Attend a performance-art piece in which the actors act like idiots for 30 minutes, including flubbing simple problems and falling down on stage.
Attend a performance-art piece in which the actors act like animals for 30 minutes, including crawling around naked and urinating on stage. (Purity.)

In each pair, the second action feels far more repugnant. Most of the moral illusions we have visited come from an unwarranted intrusion of one of the moral spheres into our judgments. A violation of community led people to frown on using an old flag to clean a bathroom. Violations of purity repelled the people who judged the morality of consensual incest and prevented the moral vegetarians and nonsmokers from tolerating the slightest trace of a vile contaminant. At the other end of the scale, displays of extreme purity lead people to venerate religious leaders who dress in white and affect an aura of chastity and asceticism.


The Genealogy of Morals

The five spheres are good candidates for a periodic table of the moral sense not only because they are ubiquitous but also because they appear to have deep evolutionary roots. The impulse to avoid harm, which gives trolley ponderers the willies when they consider throwing a man off a bridge, can also be found in rhesus monkeys, who go hungry rather than pull a chain that delivers food to them and a shock to another monkey. Respect for authority is clearly related to the pecking orders of dominance and appeasement that are widespread in the animal kingdom. The purity-defilement contrast taps the emotion of disgust that is triggered by potential disease vectors like bodily effluvia, decaying flesh and unconventional forms of meat, and by risky sexual practices like incest.

The other two moralized spheres match up with the classic examples of how altruism can evolve that were worked out by sociobiologists in the 1960s and 1970s and made famous by Richard Dawkins in his book “The Selfish Gene.” Fairness is very close to what scientists call reciprocal altruism, where a willingness to be nice to others can evolve as long as the favor helps the recipient more than it costs the giver and the recipient returns the favor when fortunes reverse. The analysis makes it sound as if reciprocal altruism comes out of a robotlike calculation, but in fact Robert Trivers, the biologist who devised the theory, argued that it is implemented in the brain as a suite of moral emotions. Sympathy prompts a person to offer the first favor, particularly to someone in need for whom it would go the furthest.

Anger protects a person against cheaters who accept a favor without reciprocating, by impelling him to punish the ingrate or sever the relationship. Gratitude impels a beneficiary to reward those who helped him in the past. Guilt prompts a cheater in danger of being found out to repair the relationship by redressing the misdeed and advertising that he will behave better in the future (consistent with Mencken’s definition of conscience as “the inner voice which warns us that someone might be looking”). Many experiments on who helps whom, who likes whom, who punishes whom and who feels guilty about what have confirmed these predictions.

Community, the very different emotion that prompts people to share and sacrifice without an expectation of payback, may be rooted in nepotistic altruism, the empathy and solidarity we feel toward our relatives (and which evolved because any gene that pushed an organism to aid a relative would have helped copies of itself sitting inside that relative). In humans, of course, communal feelings can be lavished on nonrelatives as well. Sometimes it pays people (in an evolutionary sense) to love their companions because their interests are yoked, like spouses with common children, in-laws with common relatives, friends with common tastes or allies with common enemies. And sometimes it doesn’t pay them at all, but their kinship detectors have been tricked into treating their groupmates as if they were relatives by tactics like kinship metaphors (blood brothers, fraternities, the fatherland), origin myths, communal meals and other bonding rituals.


Juggling the Spheres


All this brings us to a theory of how the moral sense can be universal and variable at the same time. The five moral spheres are universal, a legacy of evolution. But how they are ranked in importance, and which is brought in to moralize which area of social life — sex, government, commerce, religion, diet and so on — depends on the culture. Many of the flabbergasting practices in faraway places become more intelligible when you recognize that the same moralizing impulse that Western elites channel toward violations of harm and fairness (our moral obsessions) is channeled elsewhere to violations in the other spheres. Think of the Japanese fear of nonconformity (community), the holy ablutions and dietary restrictions of Hindus and Orthodox Jews (purity), the outrage at insulting the Prophet among Muslims (authority). In the West, we believe that in business and government, fairness should trump community and try to root out nepotism and cronyism. In other parts of the world this is incomprehensible — what heartless creep would favor a perfect stranger over his own brother?

The ranking and placement of moral spheres also divides the cultures of liberals and conservatives in the United States. Many bones of contention, like homosexuality, atheism and one-parent families from the right, or racial imbalances, sweatshops and executive pay from the left, reflect different weightings of the spheres. In a large Web survey, Haidt found that liberals put a lopsided moral weight on harm and fairness while playing down group loyalty, authority and purity. Conservatives instead place a moderately high weight on all five. It’s not surprising that each side thinks it is driven by lofty ethical values and that the other side is base and unprincipled.

Reassigning an activity to a different sphere, or taking it out of the moral spheres altogether, isn’t easy. People think that a behavior belongs in its sphere as a matter of sacred necessity and that the very act of questioning an assignment is a moral outrage. The psychologist Philip Tetlock has shown that the mentality of taboo — a conviction that some thoughts are sinful to think — is not just a superstition of Polynesians but a mind-set that can easily be triggered in college-educated Americans. Just ask them to think about applying the sphere of reciprocity to relationships customarily governed by community or authority. When Tetlock asked subjects for their opinions on whether adoption agencies should place children with the couples willing to pay the most, whether people should have the right to sell their organs and whether they should be able to buy their way out of jury duty, the subjects not only disagreed but felt personally insulted and were outraged that anyone would raise the question.

The institutions of modernity often question and experiment with the way activities are assigned to moral spheres. Market economies tend to put everything up for sale. Science amoralizes the world by seeking to understand phenomena rather than pass judgment on them. Secular philosophy is in the business of scrutinizing all beliefs, including those entrenched by authority and tradition. It’s not surprising that these institutions are often seen to be morally corrosive.


Is Nothing Sacred?

And “morally corrosive” is exactly the term that some critics would apply to the new science of the moral sense. The attempt to dissect our moral intuitions can look like an attempt to debunk them. Evolutionary psychologists seem to want to unmask our noblest motives as ultimately self-interested — to show that our love for children, compassion for the unfortunate and sense of justice are just tactics in a Darwinian struggle to perpetuate our genes. The explanation of how different cultures appeal to different spheres could lead to a spineless relativism, in which we would never have grounds to criticize the practice of another culture, no matter how barbaric, because “we have our kind of morality and they have theirs.” And the whole enterprise seems to be dragging us to an amoral nihilism, in which morality itself would be demoted from a transcendent principle to a figment of our neural circuitry.

In reality, none of these fears are warranted, and it’s important to see why not. The first misunderstanding involves the logic of evolutionary explanations. Evolutionary biologists sometimes anthropomorphize DNA for the same reason that science teachers find it useful to have their students imagine the world from the viewpoint of a molecule or a beam of light. One shortcut to understanding the theory of selection without working through the math is to imagine that the genes are little agents that try to make copies of themselves. Unfortunately, the meme of the selfish gene escaped from popular biology books and mutated into the idea that organisms (including people) are ruthlessly self-serving. And this doesn’t follow. Genes are not a reservoir of our dark unconscious wishes. “Selfish” genes are perfectly compatible with selfless organisms, because a gene’s metaphorical goal of selfishly replicating itself can be implemented by wiring up the brain of the organism to do unselfish things, like being nice to relatives or doing good deeds for needy strangers. When a mother stays up all night comforting a sick child, the genes that endowed her with that tenderness were “selfish” in a metaphorical sense, but by no stretch of the imagination is she being selfish. Nor does reciprocal altruism — the evolutionary rationale behind fairness — imply that people do good deeds in the cynical expectation of repayment down the line. We all know of unrequited good deeds, like tipping a waitress in a city you will never visit again and falling on a grenade to save platoonmates. These bursts of goodness are not as anomalous to a biologist as they might appear.

In his classic 1971 article, Trivers, the biologist, showed how natural selection could push in the direction of true selflessness. The emergence of tit-for-tat reciprocity, which lets organisms trade favors without being cheated, is just a first step. A favor-giver not only has to avoid blatant cheaters (those who would accept a favor but not return it) but also prefer generous reciprocators (those who return the biggest favor they can afford) over stingy ones (those who return the smallest favor they can get away with). Since it’s good to be chosen as a recipient of favors, a competition arises to be the most generous partner around. More accurately, a competition arises to appear to be the most generous partner around, since the favor-giver can’t literally read minds or see into the future. A reputation for fairness and generosity becomes an asset.
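
Trivers’s logic was later explored in computer tournaments of repeated exchange, most famously Robert Axelrod’s. As a minimal sketch of that logic (in Python, with invented payoff numbers rather than anything from the article: here a favor costs the giver 1 unit and is worth 3 to the recipient), a toy simulation shows why tit-for-tat pays:

    # A minimal sketch of tit-for-tat reciprocity in an iterated exchange of
    # favors. Payoffs are invented for illustration: a favor costs the giver
    # 1 unit and benefits the recipient 3, so trading favors is profitable.

    def tit_for_tat(partner_history):
        """Give a favor first; afterward, copy the partner's previous move."""
        return partner_history[-1] if partner_history else "give"

    def always_cheat(partner_history):
        """Accept favors, never return them."""
        return "keep"

    def play(strategy_a, strategy_b, rounds=10):
        history_a, history_b = [], []  # each player's record of moves
        score_a = score_b = 0
        for _ in range(rounds):
            move_a = strategy_a(history_b)  # each reacts to the other's record
            move_b = strategy_b(history_a)
            if move_a == "give":
                score_a -= 1; score_b += 3
            if move_b == "give":
                score_b -= 1; score_a += 3
            history_a.append(move_a); history_b.append(move_b)
        return score_a, score_b

    print(play(tit_for_tat, tit_for_tat))   # (20, 20): favors flow both ways
    print(play(tit_for_tat, always_cheat))  # (-1, 3): one favor lost, then cut off

Against a fellow reciprocator, favors flow indefinitely and both players end up far ahead; against a cheater, tit-for-tat loses a single favor before severing the exchange. That asymmetry is why a reputation for reciprocity is worth competing for.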

Now this just sets up a competition for potential beneficiaries to inflate their reputations without making the sacrifices to back them up. But it also pressures the favor-giver to develop ever-more-sensitive radar to distinguish the genuinely generous partners from the hypocrites. This arms race will eventually reach a logical conclusion. The most effective way to seem generous and fair, under harsh scrutiny, is to be generous and fair. In the long run, then, reputation can be secured only by commitment. At least some agents evolve to be genuinely high-minded and self-sacrificing — they are moral not because of what it brings them but because that’s the kind of people they are.

Of course, a theory that predicted that everyone always sacrificed themselves for another’s good would be as preposterous as a theory that predicted that no one ever did. Alongside the niches for saints there are niches for more grudging reciprocators, who attract fewer and poorer partners but don’t make the sacrifices necessary for a sterling reputation. And both may coexist with outright cheaters, who exploit the unwary in one-shot encounters. An ecosystem of niches, each with a distinct strategy, can evolve when the payoff of each strategy depends on how many players are playing the other strategies. The human social environment does have its share of generous, grudging and crooked characters, and the genetic variation in personality seems to bear the fingerprints of this evolutionary process.



Is Morality a Figment?

So a biological understanding of the moral sense does not entail that people are calculating maximizers of their genes or self-interest. But where does it leave the concept of morality itself? Here is the worry. The scientific outlook has taught us that some parts of our subjective experience are products of our biological makeup and have no objective counterpart in the world. The qualitative difference between red and green, the tastiness of fruit and foulness of carrion, the scariness of heights and prettiness of flowers are design features of our common nervous system, and if our species had evolved in a different ecosystem or if we were missing a few genes, our reactions could go the other way. Now, if the distinction between right and wrong is also a product of brain wiring, why should we believe it is any more real than the distinction between red and green? And if it is just a collective hallucination, how could we argue that evils like genocide and slavery are wrong for everyone, rather than just distasteful to us?

Putting God in charge of morality is one way to solve the problem, of course, but Plato made short work of it 2,400 years ago. Does God have a good reason for designating certain acts as moral and others as immoral? If not — if his dictates are divine whims — why should we take them seriously? Suppose that God commanded us to torture a child. Would that make it all right, or would some other standard give us reasons to resist? And if, on the other hand, God was forced by moral reasons to issue some dictates and not others — if a command to torture a child was never an option — then why not appeal to those reasons directly?

This throws us back to wondering where those reasons could come from, if they are more than just figments of our brains. They certainly aren’t in the physical world like wavelength or mass. The only other option is that moral truths exist in some abstract Platonic realm, there for us to discover, perhaps in the same way that mathematical truths (according to most mathematicians) are there for us to discover. On this analogy, we are born with a rudimentary concept of number, but as soon as we build on it with formal mathematical reasoning, the nature of mathematical reality forces us to discover some truths and not others. (No one who understands the concept of two, the concept of four and the concept of addition can come to any conclusion but that 2 + 2 = 4.) Perhaps we are born with a rudimentary moral sense, and as soon as we build on it with moral reasoning, the nature of moral reality forces us to some conclusions but not others.
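
The mathematical side of the analogy can be made concrete with a proof assistant. The snippet below (in Lean; a minimal illustration of mine, not anything from the article) shows what “forces” means here: once the concepts of two, four and addition are fixed, the proof goes through by mere computation, and no analogous proof of 2 + 2 = 5 can be constructed:

    -- Once 2, 4 and + are defined, the identity is forced: both sides
    -- reduce to the same numeral, so reflexivity proves it.
    example : 2 + 2 = 4 := rfl

The moral realist’s conjecture is that reasoning built on a rudimentary moral sense would be constrained in the same way.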


Moral realism, as this idea is called, is too rich for many philosophers’ blood. Yet a diluted version of the idea — if not a list of cosmically inscribed Thou-Shalts, then at least a few If-Thens — is not crazy. Two features of reality point any rational, self-preserving social agent in a moral direction. And they could provide a benchmark for determining when the judgments of our moral sense are aligned with morality itself.

One is the prevalence of nonzero-sum games. In many arenas of life, two parties are objectively better off if they both act in a nonselfish way than if each of them acts selfishly. You and I are both better off if we share our surpluses, rescue each other’s children in danger and refrain from shooting at each other, compared with hoarding our surpluses while they rot, letting the other’s child drown while we file our nails or feuding like the Hatfields and McCoys. Granted, I might be a bit better off if I acted selfishly at your expense and you played the sucker, but the same is true for you with me, so if each of us tried for these advantages, we’d both end up worse off. Any neutral observer, and you and I if we could talk it over rationally, would have to conclude that the state we should aim for is the one in which we both are unselfish. These spreadsheet projections are not quirks of brain wiring, nor are they dictated by a supernatural power; they are in the nature of things.
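
The arithmetic of such a game is easy to lay out. Here is a toy payoff table in Python for the hoard-or-share situation just described; the numbers are invented for illustration, and only their ordering matters:

    # A toy payoff table for a nonzero-sum game: two neighbors decide
    # whether to share their surpluses ("unselfish") or hoard them
    # ("selfish"). Payoff numbers are invented; only the ordering matters.
    PAYOFFS = {
        # (my move, your move): (my payoff, your payoff)
        ("unselfish", "unselfish"): (3, 3),  # we share: both well off
        ("unselfish", "selfish"):   (0, 4),  # I play the sucker, you free-ride
        ("selfish",   "unselfish"): (4, 0),  # I free-ride on you
        ("selfish",   "selfish"):   (1, 1),  # we both hoard while surpluses rot
    }

    for (mine, yours), (pay_me, pay_you) in PAYOFFS.items():
        print(f"{mine:>9} / {yours:<9} -> me: {pay_me}, you: {pay_you}")

    # Each player is privately tempted to defect (4 beats 3), but if both
    # give in they land on (1, 1), jointly worse than the (3, 3) of mutual
    # unselfishness. The gap is in the nature of things, not in our wiring.

Being exploited is worst, mutual hoarding is bad, mutual sharing is good and exploiting is privately best; that ordering, not the particular numbers, is what makes a neutral observer favor the state in which both parties are unselfish.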

The other external support for morality is a feature of rationality itself: that it cannot depend on the egocentric vantage point of the reasoner. If I appeal to you to do anything that affects me — to get off my foot, or tell me the time or not run me over with your car — then I can’t do it in a way that privileges my interests over yours (say, retaining my right to run you over with my car) if I want you to take me seriously. Unless I am Galactic Overlord, I have to state my case in a way that would force me to treat you in kind. I can’t act as if my interests are special just because I’m me and you’re not, any more than I can persuade you that the spot I am standing on is a special place in the universe just because I happen to be standing on it.

Not coincidentally, the core of this idea — the interchangeability of perspectives — keeps reappearing in history’s best-thought-through moral philosophies, including the Golden Rule (itself discovered many times); Spinoza’s Viewpoint of Eternity; the Social Contract of Hobbes, Rousseau and Locke; Kant’s Categorical Imperative; and Rawls’s Veil of Ignorance. It also underlies Peter Singer’s theory of the Expanding Circle — the optimistic proposal that our moral sense, though shaped by evolution to overvalue self, kin and clan, can propel us on a path of moral progress, as our reasoning forces us to generalize it to larger and larger circles of sentient beings.

Doing Better by Knowing Ourselves



Morality, then, is still something larger than our inherited moral sense, and the new science of the moral sense does not make moral reasoning and conviction obsolete. At the same time, its implications for our moral universe are profound.

At the very least, the science tells us that even when our adversaries’ agenda is most baffling, they may not be amoral psychopaths but in the throes of a moral mind-set that appears to them to be every bit as mandatory and universal as ours does to us. Of course, some adversaries really are psychopaths, and others are so poisoned by a punitive moralization that they are beyond the pale of reason. (The actor Will Smith had many historians on his side when he recently speculated to the press that Hitler thought he was acting morally.) But in any conflict in which a meeting of the minds is not completely hopeless, a recognition that the other guy is acting from moral rather than venal reasons can be a first patch of common ground. One side can acknowledge the other’s concern for community or stability or fairness or dignity, even while arguing that some other value should trump it in that instance. With affirmative action, for example, the opponents can be seen as arguing from a sense of fairness, not racism, and the defenders can be seen as acting from a concern with community, not bureaucratic power. Liberals can ratify conservatives’ concern with families while noting that gay marriage is perfectly consistent with that concern.

The science of the moral sense also alerts us to ways in which our psychological makeup can get in the way of our arriving at the most defensible moral conclusions. The moral sense, we are learning, is as vulnerable to illusions as the other senses. It is apt to confuse morality per se with purity, status and conformity. It tends to reframe practical problems as moral crusades and thus see their solution in punitive aggression. It imposes taboos that make certain ideas indiscussible. And it has the nasty habit of always putting the self on the side of the angels.

Though wise people have long reflected on how we can be blinded by our own sanctimony, our public discourse still fails to discount it appropriately. In the worst cases, the thoughtlessness of our brute intuitions can be celebrated as a virtue. In his influential essay “The Wisdom of Repugnance,” Leon Kass, former chair of the President’s Council on Bioethics, argued that we should disregard reason when it comes to cloning and other biomedical technologies and go with our gut: “We are repelled by the prospect of cloning human beings . . . because we intuit and feel, immediately and without argument, the violation of things that we rightfully hold dear. . . . In this age in which everything is held to be permissible so long as it is freely done . . . repugnance may be the only voice left that speaks up to defend the central core of our humanity. Shallow are the souls that have forgotten how to shudder.”

There are, of course, good reasons to regulate human cloning, but the shudder test is not one of them. People have shuddered at all kinds of morally irrelevant violations of purity in their culture: touching an untouchable, drinking from the same water fountain as a Negro, allowing Jewish blood to mix with Aryan blood, tolerating sodomy between consenting men. And if our ancestors’ repugnance had carried the day, we never would have had autopsies, vaccinations, blood transfusions, artificial insemination, organ transplants and in vitro fertilization, all of which were denounced as immoral when they were new.

There are many other issues for which we are too quick to hit the moralization button and look for villains rather than bug fixes. What should we do when a hospital patient is killed by a nurse who administers the wrong drug in a patient’s intravenous line? Should we make it easier to sue the hospital for damages? Or should we redesign the IV fittings so that it’s physically impossible to connect the wrong bottle to the line?


And nowhere is moralization more of a hazard than in our greatest global challenge. The threat of human-induced climate change has become the occasion for a moralistic revival meeting. In many discussions, the cause of climate change is overindulgence (too many S.U.V.’s) and defilement (sullying the atmosphere), and the solution is temperance (conservation) and expiation (buying carbon offset coupons). Yet the experts agree that these numbers don’t add up: even if every last American became conscientious about his or her carbon emissions, the effects on climate change would be trifling, if for no other reason than that two billion Indians and Chinese are unlikely to copy our born-again abstemiousness. Though voluntary conservation may be one wedge in an effective carbon-reduction pie, the other wedges will have to be morally boring, like a carbon tax and new energy technologies, or even taboo, like nuclear power and deliberate manipulation of the ocean and atmosphere. Our habit of moralizing problems, merging them with intuitions of purity and contamination, and resting content when we feel the right feelings, can get in the way of doing the right thing.

Far from debunking morality, then, the science of the moral sense can advance it, by allowing us to see through the illusions that evolution and culture have saddled us with and to focus on goals we can share and defend. As Anton Chekhov wrote, “Man will become better when you show him what he is like.”


Steven Pinker is the Johnstone Family Professor of Psychology at Harvard University and the author of “The Language Instinct” and “The Stuff of Thought: Language as a Window Into Human Nature.”





Copyright 2008 The New York Times Company

The Moral Life of Babies (by Paul Bloom)



Not long ago, a team of researchers watched a 1-year-old boy take justice into his own hands.

The boy had just seen a puppet show in which one puppet played with a ball while interacting with two other puppets. The center puppet would slide the ball to the puppet on the right, who would pass it back. And the center puppet would slide the ball to the puppet on the left . . . who would run away with it. Then the two puppets on the ends were brought down from the stage and set before the toddler. Each was placed next to a pile of treats. At this point, the toddler was asked to take a treat away from one puppet. Like most children in this situation, the boy took it from the pile of the “naughty” one. But this punishment wasn’t enough — he then leaned over and smacked the puppet in the head.

This incident occurred in one of several psychology studies that I have been involved with at the Infant Cognition Center at Yale University in collaboration with my colleague (and wife), Karen Wynn, who runs the lab, and a graduate student, Kiley Hamlin, who is the lead author of the studies. We are one of a handful of research teams around the world exploring the moral life of babies.

Like many scientists and humanists, I have long been fascinated by the capacities and inclinations of babies and children. The mental life of young humans not only is an interesting topic in its own right; it also raises — and can help answer — fundamental questions of philosophy and psychology, including how biological evolution and cultural experience conspire to shape human nature. In graduate school, I studied early language development and later moved on to fairly traditional topics in cognitive development, like how we come to understand the minds of other people — what they know, want and experience.

But the current work I’m involved in, on baby morality, might seem like a perverse and misguided next step. Why would anyone even entertain the thought of babies as moral beings? From Sigmund Freud to Jean Piaget to Lawrence Kohlberg, psychologists have long argued that we begin life as amoral animals. One important task of society, particularly of parents, is to turn babies into civilized beings — social creatures who can experience empathy, guilt and shame; who can override selfish impulses in the name of higher principles; and who will respond with outrage to unfairness and injustice. Many parents and educators would endorse a view of infants and toddlers close to that of a recent Onion headline: “New Study Reveals Most Children Unrepentant Sociopaths.” If children enter the world already equipped with moral notions, why is it that we have to work so hard to humanize them?

A growing body of evidence, though, suggests that humans do have a rudimentary moral sense from the very start of life. With the help of well-designed experiments, you can see glimmers of moral thought, moral judgment and moral feeling even in the first year of life. Some sense of good and evil seems to be bred in the bone. Which is not to say that parents are wrong to concern themselves with moral development or that their interactions with their children are a waste of time. Socialization is critically important. But this is not because babies and young children lack a sense of right and wrong; it’s because the sense of right and wrong that they naturally possess diverges in important ways from what we adults would want it to be.

Smart Babies


Babies seem spastic in their actions, undisciplined in their attention. In 1762, Jean-Jacques Rousseau called the baby “a perfect idiot,” and in 1890 William James famously described a baby’s mental life as “one great blooming, buzzing confusion.” A sympathetic parent might see the spark of consciousness in a baby’s large eyes and eagerly accept the popular claim that babies are wonderful learners, but it is hard to avoid the impression that they begin as ignorant as bread loaves. Many developmental psychologists will tell you that the ignorance of human babies extends well into childhood. For many years the conventional view was that young humans take a surprisingly long time to learn basic facts about the physical world (like that objects continue to exist once they are out of sight) and basic facts about people (like that they have beliefs and desires and goals) — let alone how long it takes them to learn about morality.

I am admittedly biased, but I think one of the great discoveries in modern psychology is that this view of babies is mistaken.

A reason this view has persisted is that, for many years, scientists weren’t sure how to go about studying the mental life of babies. It’s a challenge to study the cognitive abilities of any creature that lacks language, but human babies present an additional difficulty, because, even compared to rats or birds, they are behaviorally limited: they can’t run mazes or peck at levers. In the 1980s, however, psychologists interested in exploring how much babies know began making use of one of the few behaviors that young babies can control: the movement of their eyes. The eyes are a window to the baby’s soul. As adults do, when babies see something that they find interesting or surprising, they tend to look at it longer than they would at something they find uninteresting or expected. And when given a choice between two things to look at, babies usually opt to look at the more pleasing thing. You can use “looking time,” then, as a rough but reliable proxy for what captures babies’ attention: what babies are surprised by or what babies like.

The studies in the 1980s that made use of this methodology were able to discover surprising things about what babies know about the nature and workings of physical objects — a baby’s “naïve physics.” Psychologists — most notably Elizabeth Spelke and Renée Baillargeon — conducted studies that essentially involved showing babies magic tricks, events that seemed to violate some law of the universe: you remove the supports from beneath a block and it floats in midair, unsupported; an object disappears and then reappears in another location; a box is placed behind a screen, the screen falls backward into empty space. Like adults, babies tend to linger on such scenes — they look longer at them than at scenes that are identical in all regards except that they don’t violate physical laws. This suggests that babies have expectations about how objects should behave. A vast body of research now suggests that — contrary to what was taught for decades to legions of psychology undergraduates — babies think of objects largely as adults do, as connected masses that move as units, that are solid and subject to gravity and that move in continuous paths through space and time.

Other studies, starting with a 1992 paper by my wife, Karen, have found that babies can do rudimentary math with objects. The demonstration is simple. Show a baby an empty stage. Raise a screen to obscure part of the stage. In view of the baby, put a Mickey Mouse doll behind the screen. Then put another Mickey Mouse doll behind the screen. Now drop the screen. Adults expect two dolls — and so do 5-month-olds: if the screen drops to reveal one or three dolls, the babies look longer, in surprise, than they do if the screen drops to reveal two.

A second wave of studies used looking-time methods to explore what babies know about the minds of others — a baby’s “naïve psychology.” Psychologists had known for a while that even the youngest of babies treat people differently from inanimate objects. Babies like to look at faces; they mimic them, they smile at them. They expect engagement: if a moving object becomes still, they merely lose interest; if a person’s face becomes still, however, they become distressed.

But the new studies found that babies have an actual understanding of mental life: they have some grasp of how people think and why they act as they do. The studies showed that, though babies expect inanimate objects to move as the result of push-pull interactions, they expect people to move rationally in accordance with their beliefs and desires: babies show surprise when someone takes a roundabout path to something he wants. They expect someone who reaches for an object to reach for the same object later, even if its location has changed. And well before their 2nd birthdays, babies are sharp enough to know that other people can have false beliefs. The psychologists Kristine Onishi and Renée Baillargeon have found that 15-month-olds expect that if a person sees an object in one box, and then the object is moved to another box when the person isn’t looking, the person will later reach into the box where he first saw the object, not the box where it actually is. That is, toddlers have a mental model not merely of the world but of the world as understood by someone else.

These discoveries inevitably raise a question: If babies have such a rich understanding of objects and people so early in life, why do they seem so ignorant and helpless? Why don’t they put their knowledge to more active use? One possible answer is that these capacities are the psychological equivalent of physical traits like testicles or ovaries, which are formed in infancy and then sit around, useless, for years and years. Another possibility is that babies do, in fact, use their knowledge from Day 1, not for action but for learning. One lesson from the study of artificial intelligence (and from cognitive science more generally) is that an empty head learns nothing: a system that is capable of rapidly absorbing information needs to have some prewired understanding of what to pay attention to and what generalizations to make. Babies might start off smart, then, because it enables them to get smarter.

Nice Babies


Psychologists like myself who are interested in the cognitive capacities of babies and toddlers are now turning our attention to whether babies have a “naïve morality.” But there is reason to proceed with caution. Morality, after all, is a different sort of affair than physics or psychology. The truths of physics and psychology are universal: objects obey the same physical laws everywhere; and people everywhere have minds, goals, desires and beliefs. But the existence of a universal moral code is a highly controversial claim; there is considerable evidence for wide variation from society to society.

In the journal Science a couple of months ago, the psychologist Joseph Henrich and several of his colleagues reported a cross-cultural study of 15 diverse populations and found that people’s propensities to behave kindly to strangers and to punish unfairness are strongest in large-scale communities with market economies, where such norms are essential to the smooth functioning of trade. Henrich and his colleagues concluded that much of the morality that humans possess is a consequence of the culture in which they are raised, not their innate capacities.

At the same time, though, people everywhere have some sense of right and wrong. You won’t find a society where people don’t have some notion of fairness, don’t put some value on loyalty and kindness, don’t distinguish between acts of cruelty and innocent mistakes, don’t categorize people as nasty or nice. These universals make evolutionary sense. Since natural selection works, at least in part, at a genetic level, there is a logic to being instinctively kind to our kin, whose survival and well-being promote the spread of our genes. More than that, it is often beneficial for humans to work together with other humans, which means that it would have been adaptive to evaluate the niceness and nastiness of other individuals. All this is reason to consider the innateness of at least basic moral concepts.


In addition, scientists know that certain compassionate feelings and impulses emerge early and apparently universally in human development. These are not moral concepts, exactly, but they seem closely related. One example is feeling pain at the pain of others. In his book “The Expression of the Emotions in Man and Animals,” Charles Darwin, a keen observer of human nature, tells the story of how his first son, William, was fooled by his nurse into expressing sympathy at a very young age: “When a few days over 6 months old, his nurse pretended to cry, and I saw that his face instantly assumed a melancholy expression, with the corners of his mouth strongly depressed.”


There seems to be something evolutionarily ancient to this empathetic response. If you want to cause a rat distress, you can expose it to the screams of other rats. Human babies, notably, cry more to the cries of other babies than to tape recordings of their own crying, suggesting that they are responding to their awareness of someone else’s pain, not merely to a certain pitch of sound. Babies also seem to want to assuage the pain of others: once they have enough physical competence (starting at about 1 year old), they soothe others in distress by stroking and touching or by handing over a bottle or toy. There are individual differences, to be sure, in the intensity of response: some babies are great soothers; others don’t care as much. But the basic impulse seems common to all. (Some other primates behave similarly: the primatologist Frans de Waal reports that chimpanzees “will approach a victim of attack, put an arm around her and gently pat her back or groom her.” Monkeys, on the other hand, tend to shun victims of aggression.)


Some recent studies have explored the existence of behavior in toddlers that is “altruistic” in an even stronger sense — as when they give up their time and energy to help a stranger accomplish a difficult task. The psychologists Felix Warneken and Michael Tomasello have put toddlers in situations in which an adult is struggling to get something done, like opening a cabinet door with his hands full or trying to retrieve an object that is out of reach. The toddlers tend to help spontaneously, without any prompting, encouragement or reward.


Is any of the above behavior recognizable as moral conduct? Not obviously so. Moral ideas seem to involve much more than mere compassion. Morality, for instance, is closely related to notions of praise and blame: we want to reward what we see as good and punish what we see as bad. Morality is also closely connected to the ideal of impartiality — if it’s immoral for you to do something to me, then, all else being equal, it is immoral for me to do the same thing to you. In addition, moral principles are different from other types of rules or laws: they cannot, for instance, be overruled solely by virtue of authority. (Even a 4-year-old knows not only that unprovoked hitting is wrong but also that it would continue to be wrong even if a teacher said that it was O.K.) And we tend to associate morality with the possibility of free and rational choice; people choose to do good or evil. To hold someone responsible for an act means that we believe that he could have chosen to act otherwise.

Moral-Baby Experiments


So what do babies really understand about morality? Our first experiments exploring this question were done in collaboration with a postdoctoral researcher named Valerie Kuhlmeier (who is now an associate professor of psychology at Queen’s University in Ontario). Building on previous work by the psychologists David and Ann Premack, we began by investigating what babies think about two particular kinds of action: helping and hindering.

Our experiments involved having children watch animated movies of geometrical characters with faces. In one, a red ball would try to go up a hill. On some attempts, a yellow square got behind the ball and gently nudged it upward; in others, a green triangle got in front of it and pushed it down. We were interested in babies’ expectations about the ball’s attitudes — what would the baby expect the ball to make of the character who helped it and the one who hindered it? To find out, we then showed the babies additional movies in which the ball either approached the square or the triangle. When the ball approached the triangle (the hinderer), both 9- and 12-month-olds looked longer than they did when the ball approached the square (the helper). This was consistent with the interpretation that the former action surprised them; they expected the ball to approach the helper. A later study, using somewhat different stimuli, replicated the finding with 10-month-olds, but found that 6-month-olds seem to have no expectations at all. (This effect is robust only when the animated characters have faces; when they are simple faceless figures, it is apparently harder for babies to interpret what they are seeing as a social interaction.)

This experiment was designed to explore babies’ expectations about social interactions, not their moral capacities per se. But if you look at the movies, it’s clear that, at least to adult eyes, there is some latent moral content to the situation: the triangle is kind of a jerk; the square is a sweetheart. So we set out to investigate whether babies make the same judgments about the characters that adults do. Forget about how babies expect the ball to act toward the other characters; what do babies themselves think about the square and the triangle? Do they prefer the good guy and dislike the bad guy?

Here we began our more focused investigations into baby morality. For these studies, parents took their babies to the Infant Cognition Center, which is within one of the Yale psychology buildings. (The center is just a couple of blocks away from where Stanley Milgram did his famous experiments on obedience in the early 1960s, tricking New Haven residents into believing that they had severely harmed or even killed strangers with electrical shocks.) The parents were told about what was going to happen and filled out consent forms, which described the study, the risks to the baby (minimal) and the benefits to the baby (minimal, though it is a nice-enough experience). Parents often asked, reasonably enough, if they would learn how their baby did, and the answer was no. This sort of study provides no clinical or educational feedback about individual babies; the findings make sense only when aggregated across a group.

For the experiment proper, a parent will carry his or her baby into a small testing room. A typical experiment takes about 15 minutes. Usually, the parent sits on a chair, with the baby on his or her lap, though for some studies, the baby is strapped into a high chair with the parent standing behind. At this point, some of the babies are either sleeping or too fussy to continue; there will then be a short break for the baby to wake up or calm down, but on average this kind of study ends up losing about a quarter of the subjects. Just as critics describe much of experimental psychology as the study of the American college undergraduate who wants to make some extra money or needs to fulfill an Intro Psych requirement, there’s some truth to the claim that this developmental work is a science of the interested and alert baby.

In one of our first studies of moral evaluation, we decided not to use two-dimensional animated movies but rather a three-dimensional display in which real geometrical objects, manipulated like puppets, acted out the helping/hindering situations: a yellow square would help the circle up the hill; a red triangle would push it down. After showing the babies the scene, the experimenter placed the helper and the hinderer on a tray and brought them to the child. In this instance, we opted to record not the babies’ looking time but rather which character they reached for, on the theory that what a baby reaches for is a reliable indicator of what a baby wants. In the end, we found that 6- and 10-month-old infants overwhelmingly preferred the helpful individual to the hindering individual. This wasn’t a subtle statistical trend; just about all the babies reached for the good guy. (Experimental minutiae: What if babies simply like the color red or prefer squares or something like that? To control for this, half the babies got the yellow square as the helper; half got it as the hinderer. What about problems of unconscious cueing and unconscious bias? To avoid this, at the moment when the two characters were offered on the tray, the parent had his or her eyes closed, and the experimenter holding out the characters and recording the responses hadn’t seen the puppet show, so he or she didn’t know who was the good guy and who the bad guy.)
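
(More experimental minutiae, in code this time: because each baby makes a single reach, data like these reduce to a simple count, and the statistics fit in a couple of lines. The sample size and count below are hypothetical, chosen only to illustrate what “just about all the babies” means against a 50/50 chance baseline.)

    # Sketch of the choice analysis (Python); the counts are invented.
    from scipy.stats import binomtest

    n_babies = 16        # hypothetical sample
    chose_helper = 15    # hypothetical: nearly every baby reached for the helper

    # One-sided exact test against the 50/50 baseline expected if babies
    # reached at random between the two characters.
    result = binomtest(chose_helper, n_babies, p=0.5, alternative="greater")
    print(f"{chose_helper}/{n_babies} chose the helper, p = {result.pvalue:.5f}")

    # Counterbalancing lives in the design, not the analysis: half the babies
    # see the yellow square as helper, half as hinderer, so a raw preference
    # for a color or shape cannot masquerade as a social preference.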

One question that arose with these experiments was how to understand the babies’ preference: did they act as they did because they were attracted to the helpful individual or because they were repelled by the hinderer, or was it both? We explored this question in a further series of studies that introduced a neutral character, one that neither helps nor hinders. We found that, given a choice, infants prefer a helpful character to a neutral one and prefer a neutral character to one who hinders. This finding indicates that both inclinations are at work — babies are drawn to the nice guy and repelled by the mean guy. Again, these results were not subtle; babies almost always showed this pattern of response. Does our research show that babies believe that the helpful character is good and the hindering character is bad? Not necessarily. All that we can safely infer from what the babies reached for is that babies prefer the good guy and show an aversion to the bad guy. But what’s exciting here is that these preferences are based on how one individual treated another, on whether one individual was helping another individual achieve its goals or hindering it. This is preference of a very special sort; babies were responding to behaviors that adults would describe as nice or mean. When we showed these scenes to much older kids — 18-month-olds — and asked them, “Who was nice? Who was good?” and “Who was mean? Who was bad?” they responded as adults would, identifying the helper as nice and the hinderer as mean.

To increase our confidence that the babies we studied were really responding to niceness and naughtiness, Karen Wynn and Kiley Hamlin, in a separate series of studies, created different sets of one-act morality plays to show the babies. In one, an individual struggled to open a box; the lid would be partly opened but then fall back down. Then, on alternating trials, one puppet would grab the lid and open it all the way, and another puppet would jump on the box and slam it shut. In another study (the one I mentioned at the beginning of this article), a puppet would play with a ball. The puppet would roll the ball to another puppet, who would roll it back, and the first puppet would roll the ball to a different puppet who would run away with it. In both studies, 5-month-olds preferred the good guy — the one who helped to open the box; the one who rolled the ball back — to the bad guy. This all suggests that the babies we studied have a general appreciation of good and bad behavior, one that spans a range of actions.

A further question that arises is whether babies possess more subtle moral capacities than preferring good and avoiding bad. Part and parcel of adult morality, for instance, is the idea that good acts should meet with a positive response and bad acts with a negative response — justice demands that the good be rewarded and the bad punished. For our next studies, we turned our attention back to the older babies and toddlers and tried to explore whether the preferences that we were finding had anything to do with moral judgment in this mature sense. In collaboration with Neha Mahajan, a psychology graduate student at Yale, Hamlin, Wynn and I exposed 21-month-olds to the good guy/bad guy situations described above, and we gave them the opportunity to reward or punish either by giving a treat to, or taking a treat from, one of the characters. We found that when asked to give, they tended to choose the positive character; when asked to take, they tended to choose the negative one.

Dispensing justice like this is a more elaborate conceptual operation than merely preferring good to bad, but there are still-more-elaborate moral calculations that adults, at least, can easily make. For example: Which individual would you prefer — someone who rewarded good guys and punished bad guys or someone who punished good guys and rewarded bad guys? The same amount of rewarding and punishing is going on in both cases, but by adult lights, one individual is acting justly and the other isn’t. Can babies see this, too? To find out, we tested 8-month-olds by first showing them a character who acted as a helper (for instance, helping a puppet trying to open a box) and then presenting a scene in which this helper was the target of a good action by one puppet and a bad action by another puppet. Then we got the babies to choose between these two puppets. That is, they had to choose between a puppet who rewarded a good guy and a puppet who punished a good guy. Likewise, we showed them a character who acted as a hinderer (for example, keeping a puppet from opening a box) and then had them choose between a puppet who rewarded the bad guy and one who punished the bad guy.
 
The results were striking. When the target of the action was itself a good guy, babies preferred the puppet who was nice to it. This alone wasn’t very surprising, given that the other studies found an overall preference among babies for those who act nicely. What was more interesting was what happened when they watched the bad guy being rewarded or punished. Here they chose the punisher. Despite their overall preference for good actors over bad, then, babies are drawn to bad actors when those actors are punishing bad behavior.
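
(The two-step design is easier to see laid out explicitly. The sketch below is purely schematic — the labels are mine, not the ones used in the studies — and the final mapping simply restates the reported pattern of results.)

    # Schematic of the second-order evaluation design (Python).
    first_order = ["helper", "hinderer"]      # character the baby watches first
    second_order = ["rewarder", "punisher"]   # puppets that then act on that character

    # Each baby sees one first-order character, then chooses between the
    # puppet that rewarded it and the puppet that punished it.
    for target in first_order:
        print(f"target = {target}: choose between its rewarder and its punisher")

    # Reported pattern: babies prefer those who are nice to good guys,
    # but prefer punishers when the target is a bad guy.
    observed_choice = {"helper": "rewarder", "hinderer": "punisher"}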

All of this research, taken together, supports a general picture of baby morality. It’s even possible, as a thought experiment, to ask what it would be like to see the world in the moral terms that a baby does. Babies probably have no conscious access to moral notions, no idea why certain acts are good or bad. They respond on a gut level. Indeed, if you watch the older babies during the experiments, they don’t act like impassive judges — they tend to smile and clap during good events and frown, shake their heads and look sad during the naughty events (remember the toddler who smacked the bad puppet). The babies’ experiences might be cognitively empty but emotionally intense, replete with strong feelings and strong desires.

But this shouldn’t strike you as an altogether alien experience: while we adults possess the additional critical capacity of being able to consciously reason about morality, we’re not otherwise that different from babies — our moral feelings are often instinctive. In fact, one discovery of contemporary research in social psychology and social neuroscience is the powerful emotional underpinning of what we once thought of as cool, untroubled, mature moral deliberation.

Is This the Morality We’re Looking For?


What do these findings about babies’ moral notions tell us about adult morality? Some scholars think that the very existence of an innate moral sense has profound implications. In 1869, Alfred Russel Wallace, who along with Darwin discovered natural selection, wrote that certain human capacities — including “the higher moral faculties” — are richer than what you could expect from a product of biological evolution. He concluded that some sort of godly force must intervene to create these capacities. (Darwin was horrified at this suggestion, writing to Wallace, “I hope you have not murdered too completely your own and my child.”)

A few years ago, in his book “What’s So Great About Christianity,” the social and cultural critic Dinesh D’Souza revived this argument. He conceded that evolution can explain our niceness in instances like kindness to kin, where the niceness has a clear genetic payoff, but he drew the line at “high altruism,” acts of entirely disinterested kindness. For D’Souza, “there is no Darwinian rationale” for why you would give up your seat for an old lady on a bus, an act of nice-guyness that does nothing for your genes. And what about those who donate blood to strangers or sacrifice their lives for a worthy cause? D’Souza reasoned that these stirrings of conscience are best explained not by evolution or psychology but by “the voice of God within our souls.”

The evolutionary psychologist has a quick response to this: To say that a biological trait evolves for a purpose doesn’t mean that it always functions, in the here and now, for that purpose. Sexual arousal, for instance, presumably evolved because of its connection to making babies; but of course we can get aroused in all sorts of situations in which babymaking isn’t an option — for instance, while looking at pornography. Similarly, our impulse to help others has likely evolved because of the reproductive benefit that it gives us in certain contexts — and it’s not a problem for this argument that some acts of niceness that people perform don’t provide this sort of benefit. (And for what it’s worth, giving up a bus seat for an old lady, although the motives might be psychologically pure, turns out to be a coldbloodedly smart move from a Darwinian standpoint, an easy way to show off yourself as an attractively good person.)

The general argument that critics like Wallace and D’Souza put forward, however, still needs to be taken seriously. The morality of contemporary humans really does outstrip what evolution could possibly have endowed us with; moral actions are often of a sort that have no plausible relation to our reproductive success and don’t appear to be accidental byproducts of evolved adaptations. Many of us care about strangers in faraway lands, sometimes to the extent that we give up resources that could be used for our friends and family; many of us care about the fates of nonhuman animals, so much so that we deprive ourselves of pleasures like rib-eye steak and veal scaloppine. We possess abstract moral notions of equality and freedom for all; we see racism and sexism as evil; we reject slavery and genocide; we try to love our enemies. Of course, our actions typically fall short, often far short, of our moral principles, but these principles do shape, in a substantial way, the world that we live in. It makes sense, then, to marvel at the extent of our moral insight and to reject the notion that it can be explained in the language of natural selection. If this higher morality or higher altruism were found in babies, the case for divine creation would get just a bit stronger.

But it is not present in babies. In fact, our initial moral sense appears to be biased toward our own kind. There’s plenty of research showing that babies have within-group preferences: 3-month-olds prefer the faces of the race that is most familiar to them to those of other races; 11-month-olds prefer individuals who share their own taste in food and expect these individuals to be nicer than those with different tastes; 12-month-olds prefer to learn from someone who speaks their own language over someone who speaks a foreign language. And studies with young children have found that once they are segregated into different groups — even under the most arbitrary of schemes, like wearing different colored T-shirts — they eagerly favor their own groups in their attitudes and their actions.

The notion at the core of any mature morality is that of impartiality. If you are asked to justify your actions, and you say, “Because I wanted to,” this is just an expression of selfish desire. But explanations like “It was my turn” or “It’s my fair share” are potentially moral, because they imply that anyone else in the same situation could have done the same. This is the sort of argument that could be convincing to a neutral observer and is at the foundation of standards of justice and law. The philosopher Peter Singer has pointed out that this notion of impartiality can be found in religious and philosophical systems of morality, from the golden rule in Christianity to the teachings of Confucius to the political philosopher John Rawls’s landmark theory of justice. This is an insight that emerges within communities of intelligent, deliberating and negotiating beings, and it can override our parochial impulses. The aspect of morality that we truly marvel at — its generality and universality — is the product of culture, not of biology. There is no need to posit divine intervention. A fully developed morality is the product of cultural development, of the accumulation of rational insight and hard-earned innovations. The morality we start off with is primitive, not merely in the obvious sense that it’s incomplete, but in the deeper sense that when individuals and societies aspire toward an enlightened morality — one in which all beings capable of reason and suffering are on an equal footing, where all people are equal — they are fighting with what children have from the get-go. The biologist Richard Dawkins was right, then, when he said at the start of his book “The Selfish Gene,” “Be warned that if you wish, as I do, to build a society in which individuals cooperate generously and unselfishly toward a common good, you can expect little help from biological nature.” Or as a character in the Kingsley Amis novel “One Fat Englishman” puts it, “It was no wonder that people were so horrible when they started life as children.”

Morality, then, is a synthesis of the biological and the cultural, of the unlearned, the discovered and the invented. Babies possess certain moral foundations — the capacity and willingness to judge the actions of others, some sense of justice, gut responses to altruism and nastiness. Regardless of how smart we are, if we didn’t start with this basic apparatus, we would be nothing more than amoral agents, ruthlessly driven to pursue our self-interest. But our capacities as babies are sharply limited. It is the insights of rational individuals that make a truly universal and unselfish morality something that our species can aspire to.

Paul Bloom is a professor of psychology at Yale. His new book, “How Pleasure Works,” will be published next month.