Book: The Agile Gene


Why wrestle with Kant’s God, Freedom, and Immortality when it is only a matter of time before neuroscience, probably through brain imaging, reveals the actual physical mechanism that fabricates these mental constructs, these illusions?

Tom Wolfe

When genes were discovered, late in the second millennium of the Christian era, they found a place already prepared for them at the table of philosophy. They were the fates of ancient myth, the entrails of oracular prediction, the coincidences of astrology. They were destiny and determination, the enemies of choice. They were constraints on human freedom. They were the gods.

No wonder so many people were against them. Genes got stuck with the label “first cause.” Now that the genome is available for inspection, and genes can be seen at work, a much less terrifying picture is emerging. There are morals to be drawn from the nature–nurture debate, and in this chapter I intend to draw a few. They are mostly reassuring.


The first and most general moral is that genes are enablers, not constrainers. They create new possibilities for the organism; they do not reduce its options. Oxytocin receptor genes allow pair-bonding; without them the prairie vole would not have the option of forming a pair bond. CREB genes allow memory; without those genes, it would be impossible to learn and recall. BDNF allows the calibration of binocular vision through experience; without it, you could not so easily judge depth and see the world as three-dimensional. FOXP2 mysteriously allows human beings to acquire the language of their people; without it, you cannot learn to speak. And so on. These new possibilities are open to experience, not scripted in advance. Genes no more constrain human nature than extra programs constrain a computer. A computer with Word, PowerPoint, Acrobat, Internet Explorer, Photoshop, and the like not only can do more than a computer without these programs but can also get more from the outside world. It can open more files, find more websites, and accept more e-mail.

Genes, unlike gods, are conditional. They are exquisitely good at simple if–then logic: if in a certain environment, then develop in a certain way. If the nearest moving object is a bearded professor, then that is what mothers look like. If reared in famine conditions, then develop a different body type. Girls reared in fatherless households experience earlier puberty—an effect that is made possible by some still mysterious set of genes. I suspect that science has so far greatly underestimated the number of gene sets which act in this way—conditioning their output to external conditions.

So here is the first moral of the tale: Don’t be frightened of genes. They are not gods; they are cogs.


In 1960 a graduate student at Harvard received a letter from George A. Miller, head of the department of psychology, dismissing her from the Ph.D. program because she was not up to the mark. Remember that name. Much later, stuck at home with chronic health problems, Judith Rich Harris took up writing psychology textbooks, books in which she faithfully relayed the dominant paradigm of psychology—that personality and much else was acquired from the environment. Then, 35 years after leaving Harvard, as an unemployed grandmother, having happily escaped academic indoctrination, she sat down and wrote an article, which she submitted to the prestigious Psychological Review. It was published to sensational acclaim. She was deluged with inquiries as to who she was. In 1997, on the strength of the article alone, she was given one of the top awards in psychology: the George A. Miller Award.

The opening words of Harris’s article were:

Do parents have any important long-term effects on the development of their child’s personality? This article examines the evidence and concludes that the answer is no.

From about 1950 onward psychologists had studied what they called the socialization of children. Although they were initially disappointed to find few clear-cut correlations between parenting style and a child’s personality, they clung to the behaviorist assumption that parents were training their children’s characters by reward and punishment, and the Freudian assumption that many people’s psychological problems had been created by their parents. This assumption became so automatic that to this day no biography is complete without a passing reference to the parental causes of the subject’s quirks. (“It is probable that this wrenching separation from his mother was one of the prime sources of his mental instability,” says a recent author, referring to Isaac Newton.)

To be fair, socialization theory was more than an assumption. It did produce evidence, reams of it, all showing that children end up like their parents. Abusive parents produce abusive children, neurotic parents produce neurotic children, phlegmatic parents produce phlegmatic children, bookish parents produce bookish children, and so on.

All this proves precisely nothing, said Harris. Of course, children resemble their parents: they share many of the same genes. Once the studies of twins raised apart started coming out, proving dramatically high heritability for personality, you could no longer ignore the possibility that parents had put their children’s character in place at the moment of conception, not during the long years of childhood. The similarity between parents and children could be nature, not nurture. Indeed, given that the twin studies could find almost no effect of shared environment on personality, the genetic hypothesis should actually be the null hypothesis: the burden of proof was on nurture. If a socialization study did not control for genes, it proved nothing at all. Yet socialization researchers went on year after year publishing these correlations without even paying lip service to the alternative genetic theory.

It was true that socialization theorists used another argument as well: that different parenting styles coincide with different children’s personalities. A calm home contains happy children; children who are hugged a lot are nice; children who are beaten a lot are hostile; and so on. But this could be confusing cause and effect. You could just as plausibly argue that happy children make a calm home; children who are nice get hugged a lot; children who are hostile get beaten a lot. Old joke: Johnny comes from a broken home; I’m not surprised—Johnny could break any home. Sociologists are fond of saying that a good relationship with parents “has a protective effect” in keeping children off drugs. They are much less fond of saying that kids who do drugs do not get on with their parents.

The correlation of good parenting with certain personalities is worthless as proof that parents shape personality, because correlation cannot distinguish cause from effect. According to Harris, it is patent that socialization is not something parents do to children; it is something children do to themselves. There is increasing evidence that what socialization theorists have assumed were parent-to-child effects are often actually child-to-parent effects. Parents treat their children very differently according to the personalities of the children.

Nowhere is this more obvious than in the troubled matter of gender. Parents who have children of different sexes will know that they treat these children differently. Such parents do not have to be told about the experiments in which adults rough-and-tumbled baby girls disguised in blue and cuddled baby boys disguised in pink. But most such parents will also hotly protest that the chief reason they treat their boys differently from their girls is because the boys and girls are different. They fill the boy’s cupboard with dinosaurs and swords, and the girl’s with dolls and dresses, because they know this is the way to please each child. That is what the children keep asking for when in a shop. Parents may reinforce nature with nurture, but they do not create the difference. They do not force gender stereotypes down unwilling throats; they react to preexisting prejudices. Those prejudices are not in one sense innate—there is no “doll gene”—but dolls and many other toys are designed to appeal to underlying predispositions, just as food is designed to appeal to human tastes. Besides, the parental reaction itself is just as likely to be innate: parents could be genetically predisposed to perpetuate rather than fight gender stereotypes.

Once again, evidence for nurture is not evidence against nature, nor is the converse true. I just listened to a radio program about whether boys were better at soccer than girls or whether their parents just pushed them that way. The proponents of each view seemed to agree implicitly that their explanations were mutually exclusive. Nobody even suggested that both could be true at the same time.

Criminal parents produce criminal children—yes, but not if they adopt the children. In a large study in Denmark, being adopted from an honest family into an honest family produced a child with a 13.5 percent probability of getting into trouble with the law; that figure increased only marginally, to 14.7 percent, if the adopting family included criminals. Being adopted from criminal parents into an honest family, however, caused the probability to jump to 20 percent. Where both adopting and biological parents were criminals, the rate was even higher—24.5 percent. Genetic factors predispose the way people react to “criminogenic” environments.

Likewise, the children of divorced parents are more likely to divorce—yes, but only if they are biological children. Children whose adoptive parents divorce show no such tendency to follow suit. Twin studies reveal no role at all for the family environment in divorce. A fraternal twin has a 30 percent probability of getting divorced if his or her twin gets divorced, about the same correlation as with a parent. An identical twin has a 45 percent probability of divorce if his or her twin gets divorced. About half your probability of divorce is in the genes; the rest is circumstance.

Rarely has an emperor seemed so naked as after Harris was finished with socialization theory. None of this will come as a surprise to people who have more than one child. Parenting is a revelation to most people. Having assumed you would now be the chief coach and sculptor of a human personality, you find yourself reduced to the role of little more than a helpless spectator cum chauffeur. Children compartmentalize their lives. Learning is not a backpack they carry from one environment to another; it is specific to the context. This is not a license for parents to make their children unhappy—making another person suffer is wrong, whether it alters the person’s personality or not. In the words of Sandra Scarr, the veteran champion of the idea that people pick the environments to suit their characters, “Parents’ most important job, therefore, is to provide support and opportunities, not to try to shape children’s enduring characteristics.” Truly terrible parenting can still warp somebody’s personality. But it seems likely that (I repeat) parenting is like vitamin C; as long as it is adequate, a little bit more or less has no discernible long-term effect.

Harris got brickbats as well as bouquets. In a long response, the authors of which included the doyenne of socialization theory, Eleanor Maccoby, her critics surveyed studies supporting the notion that parents do after all affect personality. They conceded that early socialization theorists had exaggerated parental determinism, that twin studies needed to be considered, and that a parent’s behavior is caused as much by the child’s behavior as vice versa. They emphasized that a criminal personality, even if partly genetic, is much more likely to be expressed in a criminal environment. And they drew attention to a series of studies demonstrating how drastically bad parenting could permanently affect a child. Romanian orphans adopted after the age of six months, for example, retain high levels of the stress hormone cortisol throughout their lives.

They also drew attention to the work of Stephen Suomi on rhesus monkeys. Suomi was a student of Harry Harlow who went on to build his own monkey laboratory at the National Institutes of Health in Maryland to continue Harlow’s investigation of mother love. Suomi first selectively bred monkeys to be high-strung. He then cross-fostered young monkeys to adoptive mothers for the first six months of their lives and studied their temperament and social life. A genetically nervous baby reared by a genetically nervous foster mother turned into a socially incompetent adult, vulnerable to stress and itself a bad parent. But the same genetically jittery infant reared by a calm foster mother—a “supermom”—became quite normal, even rather good at rising to the top of the social hierarchy by making friends (sorry: “recruiting social support”) and evading stress. Despite its genetically nervous nature, such a monkey could become a calm and competent mother. Mothering style, in other words, is copied from the parent rather than inherited.

Suomi’s colleagues have since gone on to study the serotonin transporter gene in monkeys. One version of the gene produces a powerful and long-lasting reaction to maternal deprivation, whereas the other version is immune to it. Since this gene also varies in human beings and the variation correlates with personality differences, this is a big finding. Translated into human terms it would imply that some children can be virtually orphaned and be none the worse for it; others need to be very well nurtured by their parents to turn out normal—the difference lies in the genes. Did we ever expect anything else?

By citing Suomi’s studies, Harris’s critics show that they have already taken her lessons to heart: they are looking for how parents react to a child’s innate personality and how parents respond to genes. In their own words, they no longer see parents as “molding or determining” children. It is the nurturists who are calling for moderation now. Gone is the triumphalism of Freud, Skinner, and Watson. (Remember this? “Give me a dozen healthy infants, well-formed, and my own specified world to bring them up in and I’ll guarantee to take any one of them at random and train him to become any type of specialist I might select—doctor, lawyer, artist, merchant-chief, and yes, even beggar-man and thief, regardless of his talents, penchants, tendencies, abilities, vocations and race of his ancestors.”)

Moral: Being a good parent still matters.


Harris’s demolition of parental determinism is accompanied by the construction of an alternative theory. She believes that the environment, as well as the genome, has an enormous influence on the personality of a child, but mainly through the child’s peer group. Children do not see themselves as apprentice adults. They are trying to be good at being children, which means finding a niche within groups of peers—conforming, but also differentiating themselves; competing, but also collaborating. They get their language and their accents largely from their peers, not their parents. Harris, like the anthropologist Sarah Hrdy, believes that ancestral human beings reared their children in groups, with women engaged in what zoologists call cooperative breeding. The natural habitat of the child was therefore a mixed nursery of children of all ages—almost certainly self-segregated by sex for much of the time. It is here, not in the nuclear family or the relation with parents, that we should look for the environmental causes of personality.

Most people think of peer pressure as pushing the young toward conformity. Seen from the balcony of middle age, teenagers seem obsessed with uniformity. Whether it be baggy many-pocketed trousers, giant sneakers, bare midriffs, or baseball caps worn backward, teenagers prostrate themselves before the tyrant of fashion in the most craven way. Eccentrics are mocked; nonconformists are ostracized. The code must be obeyed.

Conformity is indeed a feature of human society, at all ages. The more rivalry there is between groups, the more people will conform to the norms of their own group. But there is something else going on beneath the surface. Under the superficial conformity in tribal costumes lies an almost frantic search for individual differentiation. Examine any group of young people, and you will find each playing a consistently different role: a tough, a wit, a brain, a leader, a schemer, a beauty. These roles are created, of course, by nature via nurture. Each child soon realizes what he or she is good at and bad at—compared with the others in the group. The child then trains for that role and not for others, acting in character, developing still further the talent he has and neglecting the talent that is lacking. The tough gets tougher, the wit gets funnier, and so on. When a child specializes in a chosen role, that role becomes what he is good at. According to Harris this tendency to differentiate first emerges at about the age of eight. Until that point, if a group of children is asked “Who is the toughest boy here?” all will jump up crying “Me!” After that age, they will start to say “Him.”

This is true within families as well as in school classes and street gangs. The evolutionary psychologist Frank Sulloway sees each child within the family as selecting a vacant niche. If the eldest child is responsible and cautious, the second child will often become rebellious and carefree. Small differences in innate character are exaggerated by practice, not ironed out. This happens even among identical twins. If one twin is more extroverted than the other, they will gradually exaggerate this difference. Indeed, with regard to extroversion psychologists find less correlation between fraternal twins than between siblings of different ages: the very closeness in age causes these twins to exaggerate their differences in personality. They are less alike than they would be if they were two years apart. This is also true of other measures of personality, and it seems to indicate a tendency for human beings to differentiate themselves from their closest companions by building on their innate propensities. If others are practical, then it pays to be cerebral.

I call this the Asterix theory of human personality. In Goscinny and Uderzo’s cartoons about a defiant Gaulish village resisting the might of the Roman empire, there is a very neatly drawn division of labor. The village contains a strong man (Obelix), a chief (Vitalstatistix), a druid (Getafix), a bard (Cacophonix), a blacksmith (Fulliautomatix), a fishmonger (Unhygienix), and a man with bright ideas (Asterix). The harmony of the village owes something to the fact that each man respects the others’ talents—with the exception of Cacophonix, the bard, whose songs are universally dreaded.

The first person to draw attention to this human tendency to specialize was probably Plato, but it was the economist Adam Smith who put the idea into circulation, and it was upon this observation that Smith built his theory of the division of labor—that the secret of human economic productivity is to divide labor among specialists and exchange the results. Smith thought that human beings were unusual among animals in this respect. Other animals are generalists doing everything for themselves. Though rabbits live in social groups, there is no specialization of function among them. No human being is truly a jack-of-all-trades in the same way. Said Smith:

In almost every other race of animals, each individual, when it is grown up to maturity, is entirely independent, and in its natural state has occasion for the assistance of no other living creature…. Each animal is still obliged to support and defend itself, separately and independently, and derives no sort of advantage from that variety of talents with which nature has distinguished its fellows.

But as Smith quickly went on to point out, specialization is useless without exchange.

Man has almost constant occasion for the help of his brethren, and it is in vain for him to expect it from their benevolence only. He will be more likely to prevail if he can interest their self-love in his favour, and show them that it is for their own advantage to do for him what he requires of them…. It is not from the benevolence of the butcher, the brewer, or the baker, that we expect our dinner, but from their regard to their own interest. We address ourselves not to their humanity, but to their self-love, and never talk to them of our own necessities but of their own advantages. Nobody but a beggar chooses to depend chiefly upon the benevolence of his fellow-citizens.

In this, Smith was supported by Émile Durkheim, who considered the division of labor not just the source of social harmony but the foundation of the moral order as well:

But if the division of labour produces solidarity, it is not only because it makes each individual an exchangist, as the economists say; it is because it creates among men an entire system of rights and duties which link them together in a durable way.

I am intrigued by a coincidence: human adults are specialists, and human adolescents seem to have a natural tendency to differentiate themselves. Could it be that these two facts are connected? In Smith’s world, your adult specialty is a matter of chance and opportunity. You inherit the family business, perhaps, or you answer a want ad. You may be lucky and find a job that suits your temperament and talent, but most people just accept that they must learn to do the job they have. The role they played in an adolescent gang—as clown, raconteur, leader, tough—is long forgotten. Butchers, bakers, and candlestick makers are made, not born. Or as Smith put it, “The difference between the most dissimilar characters, between a philosopher and a street porter, for example, seems to arise not so much from nature, as from habit, custom and education.”

But human minds were designed for the Pleistocene savanna, not the urban jungle. And in that much more egalitarian world, where the same opportunities were open to all, talent may have determined your job. Imagine a band of hunter-gatherers. In the gang of youngsters playing around the camp fire are four adolescents. Og has just begun to notice that he has leadership qualities—he seems to be respected when he suggests a new game. Iz, on the other hand, has noticed that she can make the others laugh when she tells a story. Ob is hopeless with words, but when it comes to making a bark-strip net to catch rabbits he seems to have a natural talent. Ik, by contrast, is already a superb naturalist and the others are beginning to trust her to identify plants and animals. Over the next few years, each individual reinforces nature with nurture, specializing in one peculiar talent until it becomes a self-fulfilling prophecy. By the time they reach adulthood, Og no longer relies on natural talent for leadership; he has learned it as a trade. Iz has practiced the role of tribal bard so well it is second nature. Ob is even worse at making conversation, but he can now craft almost any tool. And Ik is a guru of lore and science.

The original genetic differences in talent may be very slight indeed. Practice has done the rest. But that practice may itself depend upon a sort of instinct. It is, I suggest, an instinct peculiar to human beings, deposited in the adolescent human brain by natural selection over tens of thousands of years, and it simply whispers in the ear of the juvenile: Enjoy doing what you are good at; dislike doing what you are bad at. Children seem to have this rule firmly in mind at all times. I am suggesting that the appetite for nurturing a talent might itself be an instinct. Having certain genes gives you certain appetites; finding yourself better at something than your peers sharpens your appetite for that thing; practice makes perfect, and soon you have carved yourself a niche within the tribe as a specialist. Nurture reinforces nature.

Is musical or athletic ability nature or nurture? It is both, of course. Endless hours of practice are what it takes to play tennis or the violin well, but the people who put in those endless hours are the ones whose slight aptitude has whetted their appetite for practice. I recently had a conversation with the parents of a tennis prodigy. Had she always been good at tennis? Not especially, but she was always eager to play, determined to join her elder siblings and badgering her parents for tennis lessons.

Moral: Individuality is a product of aptitude reinforced by appetite.


As the last candidate left the room, the chairman of the committee cleared his throat.

“Well, esteemed colleagues, we must choose one of those three people for the job of financial controller of the company: which is it to be?”

“Easy,” said the red-haired woman. “The first one.”

“Why?”

“Because she is a qualified woman and this company needs more women.”

“Nonsense,” said the portly man. “The best candidate was the second one. He has the best education. You can’t beat Harvard’s business school. Besides, I knew his father at college. And he goes to church.”

“Pah,” scoffed the young woman with the thick glasses. “When I asked him what seven times eight is, he said 54! And he kept missing the point of my questions. What use is a good education if you haven’t got a brain? I think the last candidate was by far the best. He was smooth, articulate, open, and quick. He didn’t go to college, true, but he’s got a natural grasp of numbers. Besides, he’s got a real personality and the chemistry’s right.”

“Maybe,” said the chairman. “But he’s black.”

Question: Who in this scene is guilty of genetic discrimination? The chairman, the red-haired woman, the portly man, or the woman with glasses? Answer: All except the portly man. Only he is prepared to discriminate on the grounds of nurture. He is a true blank-slater, believing firmly that all human beings are born equal and stamped with their character by their upbringing. He is prepared to put his faith in the church, Harvard, and his college friend to create the right character whatever the raw material. The chairman’s racism is based on the genetics of skin color. The red-haired woman’s adherence to affirmative action for women is discrimination against people with Y chromosomes. The young woman in the glasses prefers to ignore qualifications and look for intrinsic talent and personality. Her discrimination is more subtle, but it is certainly genetic, at least in part: personality is strongly inherited, and her dismissal of the candidate from Harvard is based on the fact that his “nurture genes” have failed to take advantage of his education. She does not believe he is redeemable. I suggest that she is just as much of a genetic determinist as the chairman and the red-haired woman—and of course I hope her candidate got the job.

Every job interview is about genetic discrimination. Even if the interviewers correctly ignore race, sex, disability, and physical appearance and discriminate on the grounds of ability alone, they are still discriminating, and unless they are prepared to decide on the basis of qualifications and background alone—in which case, why hold an interview?—then they are looking for some intrinsic, rather than acquired, talent. The more they are prepared to make allowances for a deprived background, the more they are genetic determinists. Besides, the other point of the interview is to take personality into consideration, and remember the lesson of twin studies: personality is even more strongly heritable in this society than intelligence.

Do not misunderstand me. I am not saying that it is wrong to interview people to try to ascertain their personality and their innate ability. Nor am I saying that it is right to discriminate on the grounds of race or genetic disability. Some forms of genetic discrimination are clearly more acceptable than others: personality is fine; race is not fine. I am saying that if you want to live in a meritocracy, then you had better not believe in nurture alone, or you will give all the top jobs to those who went to the top schools. Meritocracy means that universities and employers should select the best candidate despite—not because of—his or her background. And that means they must believe in inherited factors of mind.

Consider the question of beauty. You do not need a scientific study to tell you that some people are born more beautiful than others. Beauty runs in families; it depends on face shape, figure, nose size, and so on: all features that are mostly genetic. Beauty is nature. But it is also nurture. Diet, exercise, hygiene, and accidents can all affect physical attractiveness, as can a haircut, makeup, or cosmetic surgery. With plenty of money, luxury, and help, even quite ugly people can make themselves attractive, as Hollywood proves regularly, and even beautiful people can ruin their looks through poverty, carelessness, and stress. Some aspects of beauty, notably thinness and fatness, show considerable cultural plasticity. In poor countries—and in the West in the past, when it was poorer than it is now—to be plump was to be beautiful and to be skinny was to be ugly; today, in the West, those statements have been at least partly reversed. Other aspects of beauty are less variable. If people from different cultures are asked to judge the beauty of women from photographs of the women’s faces, a surprising degree of consensus emerges: Americans pick the same Chinese faces as Chinese people do; and Chinese pick the same American faces as Americans do.

Yet it would be absurd even to ask which aspects of beauty were nature and which nurture. Which bits of Britney Spears are genetically attractive and which are cosmetically attractive? That is a meaningless question, precisely because her nurture has enhanced rather than opposed her nature: her hairdresser has enhanced her hair, but it probably started out as quite nice hair. It is a fair bet, too, that her hair will be less attractive when she is 80 than when she is 20, owing to—well, owing to what? I was about to write some cliché like the ravages of the environment, and then I recalled that aging is a largely genetic process, a process mediated by genes in the same way that learning is. The age-related decay of beauty that occurs in everybody after reaching adulthood is a process of nature via nurture.

Ironically, the more egalitarian a society is, the more innate factors will matter. In a world where everybody gets the same food, the heritability of height and weight will be high; in a world where some live in luxury and others starve, the heritability of weight will be low. Likewise, in a world where everybody gets the same education, the best jobs will go to those with the most native talent. That’s what the word “meritocracy” means.

Is the world more fair when all bright kids, even those from the slums, get places at the best universities, and so get the best jobs? Is that fair to the stupid ones who are left behind? The message of the notorious book The Bell Curve was exactly this: that a meritocracy is not fair. Society stratified by wealth is unfair, because the rich can buy comforts and privileges. But society stratified by intelligence is also unfair, because the clever can buy comforts and privileges. Fortunately, the meritocracy is continually undermined by another, even more human force: lust. If clever men get to the top, it is a reasonable bet that they will use their privileges to seek out pretty women (and probably vice versa), just as the rich did before them. Pretty women are not necessarily stupid, but nor are they necessarily brilliant. Beauty will put a brake on stratification by brains.

Moral: Egalitarians should emphasize nature; snobs should emphasize nurture.


Seen from outside the species, human races look remarkably similar. To a chimpanzee or a Martian, the different human ethnic groups would barely deserve classification as separate races at all. There are no sharp geographical boundaries where one race begins and another ends, and the genetic variation between races is small compared with the genetic variation among individuals of the same race, reflecting the recent common ancestor of all human beings alive today—little more than 3,000 generations have passed since that common ancestor lived.

But seen from inside one race, other human races look extremely different. White Victorians were ready to elevate (or relegate) Africans to a different species, and even in the twentieth century hereditarians frequently sought to prove that the differences between blacks and whites were deeper than skin and were manifest in the mind as well as the body. In 1972 Richard Lewontin disposed of most serious scientific racism by showing that genetic differences between individuals swamp differences between races. Though a few cranks still believe they will find a justification for racial prejudice in the genes, the truth is that science has done far more to explode than to foster the myth of racial stereotypes.

Yet racism has if anything moved up the political agenda even as racial prejudice and scientific justifications for it have faded. By the end of the twentieth century, sociologists were gingerly hinting at a new and disturbing idea—that however unjustified the science of race might be, racism itself might be in the genes. There might be an inevitable human tendency to be prejudiced against people of a different ethnic origin. Racism might be an instinct.

Ask Americans to describe another person they have only briefly met, and they will mention many features, perhaps including body weight, personality, or hobbies. But three salient features will almost certainly be mentioned: age, sex, and race. “My new neighbor is a young white woman.” It is almost as if race is one of the human mind’s natural classifiers. The depressing conclusion is that if people are so naturally race-conscious, then maybe they are naturally racist.

John Tooby and Leda Cosmides have refused to believe this. As the founders of evolutionary psychology, they are apt to think in terms of how instincts got started. They reason that during the African Stone Age race would have been useless as an identifier, because most people would never have met anyone of a different race. Noticing people’s sex and age, on the other hand, would make good sense: these were reliable if approximate predictors of behavior. So evolutionary pressures may well have built into the human mind an instinct—suitably transacted through nurture, of course—to notice sex and age, but not race. To Tooby and Cosmides, it was a puzzle that race should therefore keep appearing as a natural classifier.

Perhaps, they then reasoned, race is merely a proxy for something else. In the Stone Age—and before—one vital thing to know about a stranger is “Whose side is he on?” Human society, like ape society, is riddled with factions—from tribes and bands to temporary coalitions of friends. Perhaps race is a proxy for membership in coalitions. In other words, in modern America, people pay so much attention to race because they instinctively identify people of other races as being members of other tribes or coalitions.

Tooby and Cosmides asked their colleague Robert Kurzban to test this evolutionary theory with a simple experiment. The subjects sat down at a computer and were shown a series of eight pictures, each associated with a sentence putatively spoken by the person in the picture. At the end, they saw all eight pictures and all eight sentences, and they had to match each statement to the right picture. If the subjects matched everything correctly, Kurzban got no data: he was interested only in their mistakes. The mistakes told him something about how the subjects had mentally classified people. For example, age, sex, and race were, as expected, strong clues: the subjects would attribute a statement made by one old person to another old person, or a statement made by one black person to another black person.

Now Kurzban introduced another possible classifier: coalition membership. This was revealed purely through the statements made by the people depicted, who were taking two sides of an argument. Quickly the subjects began to confuse two members of the same side more often than two members of different sides. Astonishingly, this largely replaced the tendency to make mistakes by race, though it had virtually no effect on the tendency to make mistakes by sex. Within four minutes, the evolutionary psychologists had done what social science had failed to do in decades: make people ignore race. The way to do it is to give them another, stronger clue to coalition membership. Sports fans are well aware of the phenomenon: white fans cheer a black player on “their” team as he beats a white player on the opposing team.

This study has immense implications for social policy. It suggests that categorizing individuals by race is not inevitable, that racism can be easily defeated if coalition clues cut across races, and that there is nothing intractable about racist attitudes. It also suggests that the more people of different races seem to act or be treated as members of a rival coalition, the more racist instincts they risk evoking. On the other hand, it suggests that sexism is a harder nut to crack because people will continue to stereotype men as men and women as women, even when they also see them as colleagues or friends.

Moral: The more we understand both our genes and our instincts, the less inevitable they seem.


I would hate to leave the reader feeling too comfortable. The discovery and dissection of genetic individuality will not make the life of politicians easier. Ignorance was once bliss; now they look back nostalgically to the time when they could treat everybody the same. In 2002 that innocence was lost with the publication of an extraordinary study of more than 400 young men.

These men were all born in 1972–1973 in the city of Dunedin, on the South Island of New Zealand. Those born in that place and at that time were selected to be studied at regular intervals as they grew to adulthood. Of the 1,037 people in the cohort, Terrie Moffitt and Avshalom Caspi selected 442 boys who had four white grandparents. These children—all white and with little variation in class or wealth—included 8 percent who were severely maltreated between the ages of 3 and 11 and 28 percent who were probably maltreated in some way. As expected, many of the maltreated children have themselves turned out violent or criminal, getting into trouble at school or with the law and showing antisocial and violent dispositions. The way to look at this in terms of nature versus nurture would be to see whether the outcome was because of the abusive treatment the subjects received from their parents or because of the genes they received. But Moffitt and Caspi were interested in a different approach: nature via nurture. They tested the male children for differences in one particular gene called monoamine oxidase A, or MAOA, and then compared it with upbringing.

Upstream of the MAOA gene lies a promoter containing a 30-letter phrase repeated 3, 3½, 4, or 5 times. Versions with 3 or 5 repetitions are much less active than those with 3½ or 4. So Moffitt and Caspi divided the young men into those with high-active MAOA genes and those with low-active MAOA genes. Remarkably, the men with high-active MAOA genes were virtually immune to the effect of maltreatment. They did not get into trouble much even if they had been maltreated as youngsters. Those with the low-active genes were much more antisocial if maltreated, and if anything slightly less antisocial than the average if not maltreated. The low-active, maltreated men committed four times their share of rapes, robberies, and assaults.
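The genotype rule just described is simple enough to state as code. Below is a minimal sketch, mine rather than anything from the Dunedin study, of the promoter classification and of the gene-environment interaction the study found:

```python
# Hypothetical illustration of the classification described in the text:
# promoter repeat counts of 3 or 5 mark a low-activity MAOA gene,
# counts of 3.5 or 4 mark a high-activity one.
LOW_ACTIVITY_REPEATS = {3, 5}
HIGH_ACTIVITY_REPEATS = {3.5, 4}

def maoa_activity(repeats: float) -> str:
    """Classify a MAOA promoter allele by its 30-letter repeat count."""
    if repeats in LOW_ACTIVITY_REPEATS:
        return "low"
    if repeats in HIGH_ACTIVITY_REPEATS:
        return "high"
    raise ValueError(f"unrecognized repeat count: {repeats}")

def at_elevated_risk(repeats: float, maltreated: bool) -> bool:
    """The study's key interaction: antisocial outcomes required BOTH
    the low-activity genotype and childhood maltreatment; neither
    factor alone was enough."""
    return maoa_activity(repeats) == "low" and maltreated
```

The point of the conjunction in `at_elevated_risk` is the whole moral of the study: each factor on its own predicts little.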

In other words, it seems that it is not enough to experience maltreatment; you must also have the low-active gene—or it is not enough to have the low-active gene; you must also be maltreated. The involvement of the MAOA gene comes as no great surprise. Knocking the gene out in a mouse causes aggressive behavior, and restoring the gene reduces aggression. In a large Dutch family with a history of criminality over several generations, the MAOA gene was found to be broken altogether in the criminal family members but not in their law-abiding relatives. However, this mutation is very rare and cannot explain much crime. The low-active, nurture-dependent mutations are much commoner (being found in about 37 percent of men).

The MAOA gene is on the X chromosome, of which males have only one copy. Women, having two copies, are correspondingly less vulnerable to the effect of the low-active gene, because most of them possess at least one copy of the high-active version as well. But 12 percent of the girls in the New Zealand cohort did have two low-active genes, and these girls were significantly more likely to be diagnosed with conduct disorder as adolescents—if they had been maltreated as youngsters.

Moffitt points out that reducing child abuse is a worthy goal whether it affects adult personality or not, so she sees no implications for policy in this work. But it does not take much to imagine results like this opening the door to better intervention in the lives of troubled youngsters. The study makes clear that a “bad” genotype is not a sentence; for ill effects to occur, a bad environment is also required. Likewise, a “bad” environment is not a sentence; it also requires a “bad” genotype if it is to produce ill effects. For most people, the finding is therefore liberating. But for a few it seems to slam the prison door of fate.

Imagine that you are a youngster rescued too late by social services from an abusive family. Just one diagnostic test, of the promoter length in this one gene, will allow a physician to predict, with some confidence, whether you are likely to be antisocial and probably criminal. How will you, your doctor, your social worker, and your elected representative handle this knowledge? The chances are that talk therapy would be useless, but that a drug to alter your mental neurochemistry would be useful: many drugs for mental conditions alter monoamine oxidase activity. But the drug could be risky, or it might fail altogether.

Politicians are going to have to decide who should have the power to authorize such a test and such a treatment, in the interests not only of the individual but of his or her potential victims. Now that science knows the connection between gene and environment, ignorance is no longer morally neutral. Is it more moral to insist that all vulnerable people take such a test, to save them from future imprisonment, or that nobody be offered such a test? Welcome to the first of many Promethean dilemmas for the new century. Moffitt has already found another example of a genetic mutation in the serotonin system that responds to environmental factors. Watch this space.

Moral: Social policy must adapt to a world in which everybody is different.


When William James brought his considerable brain power to bear on the problem of free will in the 1880s, it was already a venerable conundrum. For all the efforts of Spinoza, Descartes, Hume, Kant, Mill, and Darwin, he insisted that some juice still remained to be pressed from the free will controversy. Yet even James was lamely reduced to the following disclaimer:

I thus disclaim openly on the threshold all pretension to prove to you that the freedom of the will is true. The most I hope is to induce some of you to follow my example in assuming it is true.

More than a century later, the same statement still applies. For all the efforts of philosophers to impress upon the world that free will is neither an illusion nor an impossibility, the man and woman in the street are to all intents and purposes stuck where they were before. They can see the conundrum easily enough, but they cannot see the solution. To the extent that science posits a cause of someone’s behavior, it seems inevitably to take away freedom of self-expression. Yet we feel we are free to choose our next act, in which case our behavior is unpredictable. The behavior is not random, though, so it must have a cause. And if behavior has a cause, then it is not free. As a practical matter, philosophers have failed to solve this problem in a way they can explain to the ordinary mortal. Spinoza said that the only difference between a human being and a stone rolling down a hill is that the human being thinks he is in charge of his own destiny. Some help. Kant thought it inevitable that pure reason entangles itself in insoluble contradictions when trying to understand causality, and that escape lies in positing two different worlds, one run by the laws of nature and the other by intelligible agents. Locke said that it was as nonsensical to ask “whether a man’s will be free as to ask whether his sleep be swift or his virtue square.” Hume said that either our actions are determined, in which case there is nothing we can do about them; or our actions are random, in which case there is nothing we can do about them. Are we clear yet?

I hope I have done enough in this book to convince you that appealing to nurture is no way out of this dilemma. If personality is created by parents, peers, or society at large, then it is still determined; it is not free. The philosopher Henrik Walter points out that an animal determined 99 percent by genes and 1 percent by its own agency has more free will than one determined 1 percent by genes and 99 percent by nurture. I hope, too, I have done enough to convince you that nature, in the shape of genes which influence behavior, is no special or peculiar threat to free will. In some ways the news that our genes are important contributors to our personality should be reassuring: the imperviousness of individual human nature to outside influences provides a bulwark against brainwashing. At least we are determined by our own intrinsic forces rather than somebody else’s. As Isaiah Berlin put it almost in the form of a catechism:

I wish my life and my decisions to depend on myself, not on external forces of whatever kind. I wish to be the instrument of my own, not of other men’s acts of will. I wish to be a subject, not an object.

Incidentally, it is much bruited about that the discovery of genes influencing behavior will lead to an epidemic of lawyers trying to excuse their clients on the grounds that it was their genetic fate to commit crimes, not their choice. It was not his fault, your honor; it was in his genes. In practice, this defense has been tried in very few cases so far, and though it is bound to increase in frequency, I see no earth-shattering revolution in criminal justice if it does. For a start, the courts are already used to deterministic excuses. Lawyers often argue for diminished responsibility on the grounds of insanity, or on the grounds that the defendant was driven to crime by a spouse, or on the grounds that the defendant was abused as a child and therefore could not help himself or herself. Hamlet used the insanity defense in explaining to Laertes why he had killed Laertes’s father, Polonius:

What I have done,
That might your nature, honour and exception
Roughly awake, I here proclaim was madness.
Was’t Hamlet wrong’d Laertes? Never Hamlet:
If Hamlet from himself be ta’en away,
And when he’s not himself does wrong Laertes,
Then Hamlet does it not, Hamlet denies it.
Who does it, then? His madness: if’t be so,
Hamlet is of the faction that is wrong’d;
His madness is poor Hamlet’s enemy.

Genes will be just another excuse to join the list. Besides, as Steven Pinker has pointed out, excusing criminals on the grounds of diminished responsibility has nothing to do with deciding whether they had free will to choose to behave as they did; it is merely about how to deter them from doing it again. But for me the chief reason the gene defense is still a rarity is that it is a rather useless defense. In trying to disprove his guilt, a criminal who admits to a natural inclination to crime is hardly likely to win over the jury. And when being sentenced, if he claims it is in his nature to murder, he is unlikely to persuade the judge to set him free to kill again. About the only reason for using the gene defense would be to avoid the death penalty after admitting guilt. The first case in which a genetic defense was used was indeed that of a murderer, Stephen Mobley in Atlanta, who was appealing against the death penalty.

I am now going to attempt something much more ambitious: to convince you, as James perhaps could not, that freedom of the will is true—despite nature and despite nurture. This is not to denigrate the great philosophers. Free will was, I believe, a genuinely insoluble problem until recent empirical discoveries, just as the nature of life was a genuinely insoluble problem until the discovery of the structure of DNA. The problem could not have been cracked by thought alone. It is probably still premature to tackle free will until we understand the brain better, but I believe we can now glimpse the beginning of a solution because of our understanding of what genes do in a working brain.

Here goes. My starting point is the work of a visionary Californian neuroscientist with the appropriate name of Walter Freeman. He argues:

The denial of free will, then, comes from viewing a brain as being embedded in a linear causal chain…. Free will and universal determinism are irreconcilable boxes to which linear causality leads.

The key word is “linear,” by which Freeman essentially means one-way. Gravity influences a falling cannonball but not vice versa. Attributing all action to linear causality is a habit to which the human mind is peculiarly addicted. It is the source of many mistakes. I am not so concerned about the mistake of attributing cause where none exists, such as the belief that thunder is Thor hammering, the search for someone to blame for an accidental event, or the determinist obsession with horoscopes. My concern here is with another kind of mistake: the belief that intentional behavior must have a linear cause. This is simply an illusion, a mental mirage, a misfiring instinct. It is quite a useful instinct, just as useful as the illusion that a two-dimensional image on a television screen is actually a three-dimensional scene. Natural selection has given the human mind a capacity for detecting intentionality in others, the better to predict their actions. We are fond of the metaphor of cause and effect as a means of understanding volition. But it is an illusion all the same. The cause of behavior lies in a circular, not a linear, system.

This is not to deny volition. The capacity to act intentionally is a real phenomenon, and it can be located in the brain. It lies in the limbic system, as the following simple experiment demonstrates: an animal with any part of its forebrain cut off will lose a specific function. It will be blind, deaf, or paralyzed. But it will still be unmistakably intentional. An animal with its limbic system at the base of the brain excised is still perfectly capable of hearing, seeing, and moving. If fed, it will swallow. But it initiates no action. It has lost its volition.

William James once wrote about lying in bed in the morning telling himself to get up. At first nothing happened; then, without noticing exactly how or when, he found himself getting up. He suspected that consciousness was somehow reporting the effects of the will but was not the will itself. Since the limbic system is, roughly speaking, an unconscious area, this makes good sense. The decision to do something is made by your brain before you are aware of it. Benjamin Libet’s controversial experiments with conscious epileptics seem to support the idea. Libet stimulated the brains of epileptics while they were under a local anesthetic. By stimulating the area of the left brain that receives sensory input from the right hand, he could make the patients consciously perceive a touch to the right hand, but only after half a second’s delay. Then, by stimulating the left hand itself, he could get the same result plus an immediate, unconscious response in the appropriate part of the right brain, which had received its stimulus from the hand by a more direct, faster nerve. Apparently the brain can receive and start acting upon the sensation in real time before the inevitable delay required to process the sensation into awareness. This suggests that volition is unconscious.

For Freeman, the alternative to linear causality is circular causality, in which an effect influences its own cause. This removes the agency from the action, because a circle has no beginning. Imagine a flock of birds twisting and turning as it flies along the seashore. Each bird is an individual taking its own decisions. There is no leader. Yet the birds seem to turn in unison as if linked with one another. What is the cause of each twist and turn? Put yourself in the position of a single bird. You turn left, and this causes your neighbor to bank to the left almost instantaneously. But you turned because your other neighbor turned, and he turned because he thought you were turning before you were. This time the maneuver peters out because all three of you correct your path on seeing what the rest of the flock is doing, but next time perhaps the entire flock may catch the habit and swerve left. The point is that you will search in vain for a linear sequence of cause and effect, because the first cause (your appearing to turn) is then dramatically influenced by the effect (the neighbor’s turning). Causes can still only go forward in time, but they can then influence themselves. Human beings are so obsessed by linear causes that they find it almost impossible to escape the idea. We invent absurd myths, like the flap of a butterfly’s wing starting a hurricane, in a vain attempt to preserve linear causality in such systems.
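The flock can be caricatured in a few lines of code. The sketch below is my illustration, not Freeman's model: each bird on a ring updates its heading from its neighbors' current headings plus a little noise, so every turn is both cause and effect of the turns around it, and the flock settles into a shared heading that no single bird chose.

```python
import random

def step(headings, weight=0.5, noise=0.05, rng=None):
    """One update of the flock: each bird blends its own heading with
    the mean of its two neighbors' current headings, plus a little
    noise. No bird leads; each turn is cause and effect at once."""
    rng = rng or random.Random(0)
    n = len(headings)
    return [
        (1 - weight) * headings[i]
        + weight * (headings[(i - 1) % n] + headings[(i + 1) % n]) / 2
        + rng.uniform(-noise, noise)
        for i in range(n)
    ]

rng = random.Random(1)
flock = [rng.uniform(-1.0, 1.0) for _ in range(12)]  # initial headings
for _ in range(50):
    flock = step(flock, rng=rng)

# The headings converge toward a shared direction; the remaining
# spread is just residual noise, not any bird's decision.
spread = max(flock) - min(flock)
```

Searching the final state for a first cause is futile: the coordination emerges from the feedback loop itself, which is exactly Freeman's point about circular causality.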

Freeman is not the only one to champion nonlinear causality as the source of free will. The German philosopher Henrik Walter believes that the full ideal of free will is genuinely an illusion, but that people do possess a lesser form of it, which he calls natural autonomy and which derives from the feedback loops within the brain, where the results of one process become the next starting conditions. Neurons in the brain are hearing back from the recipient even before they have finished sending messages. The response alters the message they send, which in turn alters the response, and so on. This idea is fundamental to many theories of consciousness. Now imagine this in a parallel system with many thousands of neurons communicating at once. You will not get chaos, just as you do not get chaos in the flock of birds, but you will get sudden transitions from one dominant pattern to another. You are lying awake in bed, and the brain is freewheeling from one idea to another in a rather pleasant way. Each idea comes unbidden because of its associations with the preceding idea, as a new pattern of neuronal activity comes to dominate consciousness; then suddenly a sensory pattern intervenes—the alarm clock. Another pattern takes over (I must get up), then another (Maybe a few minutes more). Then before you know it a decision is taken somewhere in the brain and you become aware that you are getting up. This is plainly a volitional act, yet it is in some sense determined by the alarm clock. To try to find the first cause of the actual moment of rising would be impossible, because it is buried in a circular process in which thoughts and experiences feed off each other.

Even the genes themselves are steeped in circular causality. By far the most important discovery of recent years in brain science is that genes are at the mercy of actions as well as vice versa. The CREB genes that run learning and memory are not just the cause of behavior; they are also the consequence. They are cogs responding to experience as mediated through the senses. Their promoters are designed to be switched on and off by events. And what are their products? Transcription factors—devices for switching on the promoters of other genes. Those genes alter the synaptic connections between neurons; this in turn alters the neural circuitry, which in turn alters the expression of the CREB genes by absorbing outside experience, and so on around the circle. This is memory, but other systems in the brain are going to prove to be similarly circular. Senses, memory, and action all influence each other through genetic mechanisms. These genes are not just units of heredity—that description misses the point altogether. They are themselves exquisite mechanisms for translating experience into action.

I cannot pretend I have given a fine-grained description of free will, because I think none can yet exist. It is the sum and product of circular influences within varying networks of neurons, immanent in a circular relationship between genes. In Freeman’s words, “each of us is a source of meaning, a wellspring for the flow of fresh constructions within our brains and bodies.”

There is no “me” inside my brain; there is only an ever-changing set of brain states, a distillation of history, emotion, instinct, experience, and the influence of other people—not to mention chance.

Moral: Free will is entirely compatible with a brain exquisitely prespecified by, and run by, genes.
