Book: The Agile Gene


Some men by the unalterable frame of their constitutions, are stout, others timorous, some confident, others modest, tractable, or obstinate, curious or careless, quick or slow.

John Locke

A child who comes into the world today inherits a set of genes and learns many lessons from experience. But she acquires something else, too: the words, the thoughts, and the tools that were invented by other people far away or long ago. The reason the human species dominates the planet and gorillas are in danger of extinction lies not in our 5 percent of special DNA or in our ability to learn associations, or even in our ability to act culturally, but in our ability to accumulate culture and transmit information, across the seas and across the generations.

The word “culture” means at least two different things. It means high art, discernment, and taste: opera, for instance. It also means ritual, tradition, and ethnicity: such as dancing around a campfire with a bone through your nose. But these two meanings converge: sitting in a black tie listening to La Traviata is merely a western version of dancing around a campfire with a bone through your nose. The first meaning of culture came out of the French Enlightenment. La culture meant civilization—a cosmopolitan measure of progress. The second meaning came out of the German Romantic movement: die Kultur was the peculiar ethnic strain of Germanness that distinguished it from other cultures, the primeval essence of Teutonism. In England, meanwhile, arising out of the evangelical movement and its reaction to Darwinism, culture came to mean the opposite of human nature—the elixir that elevated man above the ape.

Franz Boas, he of the magnificent mustaches in my imaginary photograph, brought the German usage to America and transmuted it into a discipline: cultural anthropology. His influence upon the nature–nurture debate during the ensuing century can hardly be exaggerated. By stressing the plasticity of human culture, he expanded human nature into an infinity of possibilities rather than a prison of constraints. It was he who most forcibly planted the idea that culture is what sets people free from their nature.

Boas’s epiphany came on the shores of Cumberland Sound, a bay on the coast of Baffin Island in the Canadian Arctic. It was January 1884. Boas was 25 years old, and he was mapping the coast to try to understand the migrations and the ecology of the Inuit people. He had recently switched his interest from physics (his thesis was on the color of water) to geography and anthropology. That winter, accompanied by only one European (his servant), he effectively became an Inuit: he lived with the Baffin Islanders in their tents and igloos, ate seal meat, and traveled by dogsled. The experience was a humbling one. Boas began to appreciate not just the technical skills of his hosts but the sophistication of their songs, the richness of their traditions, and the complexity of their customs. He also saw their dignity and stoicism in the face of tragedy: that winter many Inuit died of diphtheria and influenza; their dogs, too, died by the score from a new disease. Boas knew the people blamed him for this epidemic. Not for the last time, an anthropologist would be left wondering if he had brought death to his subjects. As Boas lay in a cramped igloo listening to “the shouting of the Eskimos, the howling of the dogs, the crying of the children,” he confided to his diary: “These are the ‘savages’ whose lives are supposed to be worth nothing compared with a civilized European. I do not believe that we, if living under the same conditions, would be so willing to work or be so cheerful and happy!”

In truth, he was well prepared for the lesson of cultural equality. He was the son of proudly freethinking Jewish parents in the Rhineland town of Minden. His mother, a teacher, steeped him in “the spirit of 1848,” the year of Germany’s failed revolution. At his university he fought a duel to avenge an anti-Semitic slur, and he bore the scars on his face for the rest of his life. “What I want, what I will live and die for, is equal rights for all,” he wrote to his fiancée from Baffin Island. Boas was a fervent adherent of Theodor Waitz, who had argued for the unity of mankind: that all the races of the world descended from a recent common ancestor—a belief that split conservatives. It appealed to readers of Genesis disturbed by Darwin, but not to practitioners of slavery and racial segregation. Boas was also much influenced by the Berlin school of liberal anthropology of Rudolf von Virchow and Adolf Bastian, with its emphasis on cultural as opposed to racial determinism. So it was hardly a surprise when Boas concluded of his Inuit friends that “the mind of the savage is sensible to the beauties of poetry and music, and that it is only to the superficial observer that he appears stupid and unfeeling.”

Boas emigrated to the United States in 1887 and set about laying the foundations of modern anthropology as the study of culture, not race. He wanted to establish that the “mind of primitive man” (the title of his most influential book) was every bit the equal of the mind of civilized man, and at the same time that the cultures of other people were deeply different from each other and from civilized culture. The origin of ethnic differences therefore lay in history, experience, and circumstance, not in physiology or psychology. He first tried to prove that even the shapes of people’s heads changed in the generation after they migrated to the United States:

The east European Hebrew, who has a very round head, becomes long-headed; the south Italian, who in Italy has an exceedingly long head, becomes more short-headed; so that in this country both approach a more uniform style.

If the shape of the head—long a staple of racial taxonomy—was affected by the environment, then “the fundamental traits of mind” could be, too. Unfortunately, a recent reanalysis of Boas’s own data on skull shape suggests that it shows no such thing. Ethnic groups do retain distinct skull shapes even after assimilation into a new country. Boas’s interpretation was influenced by wishful thinking.

Though he stressed the influence of the environment, Boas was no extreme blank-slater. He made the crucial distinction between the individual and the race. It was precisely because he recognized profound innate differences in personality between individuals that he discounted innate differences between races, a perspective that was later proved genetically correct by Richard Lewontin. The genetic differences between two individuals chosen at random from one race are far greater than the average differences between races. Indeed, Boas sounds thoroughly modern in almost every way. His fervent antiracism, his belief that culture determined rather than reflected ethnic idiosyncrasy, and his passion for equality of opportunity for all would come to be hallmarks of political virtue in the second half of the century, although Boas himself was dead by then.
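Lewontin’s statistical point can be made concrete with a toy simulation. The numbers below are invented for illustration (they are not his 1972 blood-group data): give two populations slightly different allele frequencies at many loci, then ask what fraction of the total genetic diversity lies between the groups rather than within them.

```python
import random
random.seed(0)

# Toy model (invented numbers): two populations whose allele
# frequencies differ only modestly at each of many loci.
LOCI = 1000
freqs_a = [random.uniform(0.3, 0.7) for _ in range(LOCI)]
freqs_b = [min(1.0, max(0.0, p + random.gauss(0, 0.05))) for p in freqs_a]

def het(p):
    """Expected heterozygosity at one locus: 2p(1 - p)."""
    return 2 * p * (1 - p)

# Average diversity within each population...
h_within = sum(het(p) + het(q) for p, q in zip(freqs_a, freqs_b)) / (2 * LOCI)
# ...versus total diversity when the two populations are pooled.
pooled = [(p + q) / 2 for p, q in zip(freqs_a, freqs_b)]
h_total = sum(het(p) for p in pooled) / LOCI

fst = (h_total - h_within) / h_total  # fraction of variation BETWEEN groups
print(f"between-group share of genetic variation: {fst:.1%}")
```

With a modest between-group frequency difference, the between-group share comes out well under 1 percent in this sketch; Lewontin’s own analysis attributed roughly 85 percent of human genetic variation to differences among individuals within populations.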

As usual, some of Boas’s followers went too far. They gradually abandoned his belief in individual differences and his recognition of universal features of human nature. They made the usual mistake of equating the truth of one proposition with the falsehood of another. Because culture influenced behavior, innateness could not do so. Margaret Mead was initially the most egregious in this respect. Her studies of the sexual mores of Samoans purported to show how ethnocentric, and therefore “cultural,” was the western practice of premarital celibacy, with the associated inhibitions about sex. In fact, it is now known that she had been duped by a handful of prank-playing young women during her all too brief visit to the island, and that Samoa in the 1920s was if anything slightly more censorious about sex than America. The damage had been done, though, and anthropology, like psychology under Watson and Skinner, became devoted to the blank slate—to the notion that all of human behavior was a product of the social environment alone.

In parallel with Boas’s reformation of anthropology, the same theme was coming to dominate the new science of sociology. Boas’s exact contemporary, and his match in the mustache department, Émile Durkheim, made an even stronger statement of social causation: social phenomena could be explained by social facts alone, not by anything biological. Omnia cultura ex cultura: all culture comes from culture. Durkheim was born in Lorraine, just across the French border from Boas’s birthplace, also to Jewish parents. Unlike Boas, however, Durkheim was the son of a rabbi, descended from a long line of rabbis, and his youth was spent in the study of the Talmud. After flirting with Catholicism, he entered the elite École Normale Supérieure in Paris. Whereas Boas would wander around the world, live in igloos, befriend Native Americans, and emigrate, Durkheim did little except study, write, and argue. Aside from a brief period of study in Germany, he remained in the ivory tower of French universities all his life, first in Bordeaux and later in Paris. He is a biographical desert.

Yet Durkheim’s influence upon the nascent school of sociology was immense. It was he who predicated the study of sociology on the notion of the blank slate. The causes of human behavior—from sexual jealousy to mass hysteria—are outside the individual. Social phenomena are real, repeatable, definable, and scientific (Durkheim envied the physicists their hard facts—physics envy is a well-known condition in the softer sciences), but they are not reducible to biology. Human nature is the consequence, not the cause, of social forces.

The general characteristics of human nature participate in the work of elaboration from which social life results. But they are not the cause of it, nor do they give it its special form; they only make it possible. Collective representations, emotions, and tendencies are caused not by certain states of the consciousnesses of individuals but by the conditions in which the social group, in its totality, is placed…. Individual natures are merely the indeterminate material that the social factor molds and transforms.

Boas and Durkheim, with Watson in psychology, represent the zenith of the blank-slate argument for the perfect malleability of human psychology by outside forces. As a negative statement rejecting all innateness, it is an argument that has been so demolished by Steven Pinker in his recent book The Blank Slate as to leave little to say. But as a positive statement of the degree to which human beings are influenced by social factors, it is undeniable. The brick that Durkheim helped Boas put into the wall of human nature was a vital one—the brick called culture. Boas disposed of the notion that all human societies consisted of more or less well trained apprentices aspiring to be English gentlemen, that there was a ladder of stages through which cultures must pass on the way to civilization. In its place, he posited a universal human nature refracted by different traditions into separate cultures. The behavior of a human being owes much to his nature; but it also owes much to the rituals and habits of his fellows. He seems to absorb something from the tribe.

Boas posed, and still poses, a paradox. If human abilities are the same everywhere, and Germans and Inuit have equal minds, then why are cultures diverse at all? Why is there not a single human culture common to Baffinland and the Rhineland? Alternatively, if culture, not nature, is responsible for creating different societies, then how can they be regarded as equal? The very fact of cultural change implies that some cultures can advance more than others, and if culture influences the mind, then some cultures must produce superior minds. Boas’s intellectual descendants, such as Clifford Geertz, have addressed the paradox by asserting that the universals must be trivial; there is no “mind for all cultures,” no common core to the human psyche at all save the obvious senses. Anthropology must concern itself with difference, not similarity.

This answer I find deeply unsatisfying, not least because of its obvious political dangers—without Boas’s conclusion of mental equality, in by the back door comes prejudice. That would be to commit the naturalistic fallacy—deriving morals from facts, or “ought” from “is”—which the GOD forbid. It also commits the fallacy of determinism, ignoring the lessons of chaos theory: set rules need not produce a set result. With the sparse rules of chess, you can produce trillions of different games within just a few moves.
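The chess claim is simple arithmetic. Assuming an average of about 30 legal moves per position (a commonly cited rough figure, not an exact count), the number of distinct move sequences grows exponentially with depth:

```python
# Rough combinatorics behind the chess analogy. A "move" by each
# player is two plies; with ~30 legal choices per position (an
# assumed average), sequences multiply by ~30 at every ply.
AVG_MOVES = 30
for moves_each in (2, 3, 4, 5):
    plies = 2 * moves_each
    games = AVG_MOVES ** plies
    print(f"after {moves_each} moves each: ~{games:.2e} possible games")
```

Five moves by each player already allows roughly 30¹⁰, about 6 × 10¹⁴ sequences, comfortably in the trillions: set rules, unbounded variety.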

I do not believe Boas ever put it like this, but the logical conclusion from his position is that there is a great contrast between technological advance and mental stasis. Boas’s own culture had steamships, telegraphs, and literature; but it produced no discernible superiority in spirit and sensibility over the illiterate Inuit hunter-gatherers. This was a theme that ran through the work of Boas’s contemporary, the novelist Joseph Conrad. Progress, for Conrad, was a delusion. Human nature never progressed but was doomed to repeat the same atavisms in each generation. There is a universal human nature, retreading the triumphs and disasters of its ancestors. Technology and tradition merely refract this nature into the local culture: bow ties and violins in one place, nasal ornaments and tribal dancing in another. But the bow ties and the dances do not shape the mind—they express it.

When watching a Shakespeare play, I am often struck by the sophistication of his understanding of personality. There is nothing naive or primitive about the way his characters scheme or woo; they are world-weary, jaded, postmodernist, or self-aware. Think of the cynicism of Beatrice, Iago, Edmund, or Jaques. I cannot help thinking, for a split second, that this seems odd. The weapons they fight with are primitive, their methods of travel cumbersome, their plumbing antediluvian. Yet they speak to us of love and despair and anger and betrayal in voices of modern complexity and subtlety. How can this be? Their author had such cultural disadvantages. He had not read Jane Austen or Dostoyevsky; or watched Woody Allen; or seen a Picasso; or listened to Mozart; or heard of relativity; or flown in an airplane; or surfed the Net.

Far from proving the plasticity of human nature, Boas’s very argument for the equality of cultures depends upon accepting an unchanging, universal nature. Culture can determine itself, but it cannot determine human nature. Ironically, it was Margaret Mead who proved this most clearly. To find a society in which young girls were sexually uninhibited, she had to visit a land of the imagination. Like Rousseau before her, she sought something “primitive” about human nature in the South Seas. But there is no primitive human nature. Her failure to discover the cultural determinism of human nature is the dog that failed to bark.

So turn the determinism around and ask why human nature seems to be universally capable of producing culture—of generating cumulative, technological, heritable traditions. Equipped with just snow, dogs, and dead seals, human beings will gradually invent a lifestyle complete with songs and gods as well as sleds and igloos. What is it inside the human brain that enables it to achieve this feat, and when did this talent appear?

Notice, first, that the generation of culture is a social activity. A solitary human mind cannot secrete culture. The precocious Russian psychologist Lev Semenovich Vygotsky pointed out in the 1920s that to describe an isolated human mind is to miss the point. Human minds are never isolated. More than those of any other species, they swim in a sea called culture. They learn languages, they use technologies, they observe rituals, they share beliefs, they acquire skills. They have a collective as well as an individual experience; they even share collective intentionality. Vygotsky, who died at the age of 37 in 1934 after publishing his ideas only in Russian, remained largely unknown in the West until much later. He has recently become a fashionable figure in educational psychology and some corners of anthropology. For my purposes, however, his most important insight is his insistence on a link between the use of tools and language.

If I am to sustain my argument that genes are at the root of nurture as well as nature, then I must somehow explain how genes make culture possible. Once again, I intend to do so, not by proposing “genes for” cultural practice, but by proposing the existence of genes that respond to the environment—of genes as mechanisms, not causes. This is a tall order, and I may as well admit, right now, that I will fail. I believe that the human capacity for culture comes not from some genes that co-evolved with human culture, but from a fortuitous set of preadaptations that suddenly endowed the human mind with an almost limitless capacity to accumulate and transmit ideas. Those preadaptations are underpinned by genes.


The discovery that human beings are 95 percent chimpanzee at the genetic level exacerbates my problem. In describing the genes involved in learning, instinct, imprinting, and development, I had no difficulty calling on animals as examples, for the difference between human and animal psychology in these respects is a difference of degree. But culture is different. The cultural gap between a human being and even the brightest ape or dolphin is a gulf. Turning an ancestral ape’s brain into a human brain plainly took just a handful of minor adjustments to the recipe: all the same ingredients, just a little longer in the oven. Yet these minor changes had far-reaching consequences: people have nuclear weapons and money, gods and poetry, philosophy and fire. They got all these things through culture, through their ability to accumulate ideas and inventions generation by generation, transmit them to others, and thereby pool the cognitive resources of many individuals alive and dead.

Ordinary modern businesspeople, for instance, could not do without the help of Phoenician phonetic script, Chinese printing, Arabic algebra, Indian numerals, Italian double-entry bookkeeping, Dutch merchant law, Californian integrated circuits, and a host of other inventions spread over continents and centuries. What is it that makes people, and not chimps, capable of this feat of accumulation?

After all, there seems little doubt that chimpanzees are capable of culture. They show strong local traditions in feeding behavior, which are then passed on by social learning. Some populations crack nuts using stones; others use sticks. In west Africa, chimps eat ants by dipping a short stick into an ants’ nest and putting each ant to the mouth one by one; in east Africa, they dip a long stick into an ants’ nest, collect many ants on it, and strip the ants off the stick into the hand and from there to the mouth. There are more than 50 known cultural traditions of this kind across Africa, and each is learned by careful observation by youngsters (adult immigrants to a troop find it harder to learn local customs). These traditions are vital to their lives. Frans de Waal goes so far as to say that “chimps are completely dependent on culture for survival.” Like human beings, they cannot get through life without learned traditions.

Nor are chimpanzees alone in this. The moment when animal culture was first discovered was in September 1953, on the tiny island of Koshima, off the coast of Japan. A young woman named Satsue Mito had for five years been feeding the monkeys on the islet with wheat and sweet potatoes to habituate them to human observers. That month she first saw a young monkey called Imo wash the sand off a sweet potato. Within three months two of Imo’s playmates and her mother had adopted the practice, and within five years most younger monkeys in the troop had joined them. Only the older males failed to take up the custom. Imo soon learned to separate wheat from sand by putting it in water and letting the sand sink.

Culture abounds in large-brained species. Killer whales have traditional, and learned, feeding techniques that are peculiar to each population: beaching themselves to grab sea lions is a speciality of south Atlantic orcas, for instance, and a trick that requires much practice to perfect. So human beings are definitely not unique in being able to pass on traditional customs by social learning. But this only makes the question more baffling. If chimpanzees, monkeys, and orcas have culture, why have they not had a cultural take-off? There is no ferment of continuous, cumulative innovation and change. There is, in a word, no “progress.”

Rephrase the question, then. How did human beings get cultural progress? How did we happen on cumulative culture? This is a question that has elicited a torrent of theoretical speculation in recent years, but very little in the way of empirical data. The scientist who has tried hardest to pin down an answer is Michael Tomasello of the Max Planck Institute for Evolutionary Anthropology in Leipzig. He has done a long series of experiments on adult chimpanzees and young human beings, from which he concludes that “only human beings understand [other human beings] as intentional agents like the self and so only human beings can engage in cultural learning.” This difference emerges at nine months of age—Tomasello calls it the nine-month revolution. At this point human beings leave apes behind in the development of certain social skills. For instance, human beings will now point at an object for the sole purpose of sharing attention with another person. They will look in the direction somebody points in, and they will follow the gaze of another person. Apes never do this; nor (until much later) do autistic children, who seem to have trouble with understanding that other people are intentional agents with minds of their own. According to Tomasello, no ape or monkey has ever shown the ability to attribute a false belief to another individual, something that comes naturally to most four-year-old human beings. From this, Tomasello infers that human beings, uniquely, can place themselves in others’ mental shoes.

This argument teeters on the brink of the human exceptionalism that so irritated Darwin. Like all such claims, it is vulnerable to the first definitive discovery of an ape that acts on what it believes another ape is thinking. Many primatologists, not least Frans de Waal, feel they have already seen such behavior in the wild and in captivity. Tomasello will have none of it. Other apes can understand social relationships between third parties (something that is probably beyond most mammals) and they can learn by emulation. If shown that turning over a log reveals insects beneath, they will learn that insects can be found beneath logs. But they cannot, says Tomasello, understand the goals of other animals’ behavior. This limits their ability to learn, and in particular it limits their ability to learn by imitation.

I am not sure I buy Tomasello’s full argument. I am influenced by Susan Mineka’s monkeys, which are undoubtedly capable of social learning at least in the narrowly prepared case of fearing snakes. Learning is not a general mechanism; it is specially shaped for each kind of input, and there may be inputs for which learning by imitation is possible even in chimps. And even if Tomasello manages to explain away imitation in the cultural traditions of primates—the monkeys that learned to wash sand off potatoes, the chimps that learn from each other how to crack nuts—he will surely have trouble proving that dolphins cannot think their way into each other’s thoughts. There is undoubtedly something uniquely human about the degree of our ability to empathize and imitate, just as there is something uniquely human about the degree of our ability to communicate symbolically—but it is a difference of degree, not kind.

Nevertheless, a difference of degree can still become a gulf in the context of culture. Grant Tomasello his point that imitation becomes something more profound when the imitator has gotten inside the head of the model—when he or she has a theory of mind. Grant, too, that in some sense miming an idea to oneself creates representation, which in turn can become symbolism. Perhaps that is what enables young human beings to acquire much more culture than chimpanzees do. Imitation therefore becomes the first potential part of what Robin Fox and Lionel Tiger called the culture acquisition device. There are two other promising candidates: language and manual dexterity. And all three seem to come together in one part of the brain.

In July 1991, Giacomo Rizzolatti made a remarkable discovery in his laboratory in Parma. He was recording from single neurons inside the brains of monkeys, trying to work out what causes a neuron to fire. Normally this is done in highly controlled conditions using largely immobile monkeys doing invented tasks. Dissatisfied with these artificial conditions, Rizzolatti wanted to record from monkeys leading almost normal lives. He began with feeding, trying to correlate each action with each neuronal response. He began to suspect that some neurons recorded the goal of the action, not the action itself, but his fellow scientists were dismissive: the evidence was too anecdotal.

So Rizzolatti put his monkeys back in a more controlled apparatus. From time to time each monkey was handed some food, and Rizzolatti and his colleagues noticed that some “motor” neurons seemed to respond to the sight of a person grasping a piece of food. For a long time they thought this was a coincidence and the monkey must be moving at the same time, but one day they were recording from a neuron that fired whenever the experimenter grasped a piece of food in a certain way; the monkey was completely still. The food was then handed to the monkey, and as it grasped the food in the same way, once again the neuron fired. “That day I became convinced that the phenomenon was real,” says Rizzolatti. “We were very excited.” These researchers had found a part of the brain that represented both an action and a vision of the action. Rizzolatti called it a “mirror neuron” because of its unusual ability to mirror both perception and motor control. He later found more mirror neurons, each active during the observation and imitation of a highly specific action, such as grasping between finger and thumb. He concluded that this part of the brain could match a perceived hand movement to an achieved hand movement. He believed he was looking at the “evolutionary precursor of the human mechanism for imitation.”

Rizzolatti and his colleagues have since repeated the experiment with human beings in brain scanners. Three bits of the brain lit up when the volunteers both observed and imitated finger movements: again, this was the phenomenon of “mirror” activity. One of those areas was the superior temporal sulcus (STS), which lies in a sensory area concerned with perception. It is no surprise to find a sensory area lighting up when the volunteer observes an action, but it is surprising to find the area active when the volunteer later executes the imitated action. A curiosity of human imitation is that if a person is asked to imitate a right-handed action, he or she will often imitate it with the left hand, and vice versa. (Try telling somebody “There is something on your cheek” and touch your own right cheek at the same time. Chances are, the person will touch her left cheek in response.) Consistent with this, in Rizzolatti’s experiments, the STS was more active when the volunteer imitated a left-handed action with the right hand than when the volunteer imitated a left-handed action with the left hand. Rizzolatti concludes that the STS “perceives” the subject’s own action and matches it to its memory of the observed action.

Recently, Rizzolatti’s team has discovered a still stranger neuron, which fires not only when a certain motion is enacted and observed but also when the same action is heard. For example, the researchers found a neuron that responded to the sight and sound of a peanut being broken open, but not to the sound of tearing paper. The neuron responded to the sound of a breaking peanut alone, but not to the sight alone. Sound is important in telling an animal that it has successfully broken a nut, so this makes sense. But so exquisitely sensitive are these neurons that they can “represent” certain actions from the sounds alone. This is getting remarkably close to finding the neuronal manifestation of a mental representation: the noun phrase “breaking peanut.”

Rizzolatti’s experiments bring us close to describing, albeit in the crudest terms, a neuroscience of culture—a set of tools that between them make up at least part of the culture acquisition device. Will there be found a set of genes underlying the design of this “organ”? In one sense, yes, for the content-specific design of brain circuits is undoubtedly inherited through DNA. The genes’ products may not be unique to this part of the brain; the uniqueness comes in the combination of genes used for the design rather than the genes themselves. This combination will create the capacity to absorb culture. But that is only one interpretation of the phrase “culture genes”; a completely different set of genes from the designing genes will be found at work in everyday life. The axon-guidance genes that built the device will be long silenced. In their place will be genes that operate and modify synapses, secrete and absorb neurotransmitters, and so on. Those will not be a unique set either. But they will in a true sense be the devices that transmit the culture from the outside world into and through the brain. They will be indispensable to the culture itself.

Recently Anthony Monaco and his student Cecilia Lai discovered a genetic mutation apparently responsible for a speech and language disorder. It is the first candidate for a gene that may improve cultural learning through language. “Severe language impairment” has long been known to run in families, to have little to do with general intelligence, and to affect not just the ability to speak, but the ability to generalize grammatical rules in written language and perhaps even to hear or interpret speech as well. When the heritability of this trait was first discovered, it was dubbed the “grammar gene,” much to the fury of those who saw such a description as deterministic. But it now turns out that there is indeed a gene on chromosome 7, responsible for this disorder in one large pedigree and in another, smaller one. The gene is necessary for the development of normal grammatical and speaking ability in human beings, including fine motor control of the larynx. Known as forkhead box P2, or FOXP2, it is a gene whose job is to switch on other genes—a transcription factor. When it is broken, the person never develops full language.

Chimpanzees also have FOXP2; so do monkeys and mice. Therefore, merely possessing the gene does not make speech possible. In fact, the gene is unusually similar in all mammals. Svante Pääbo has discovered that in all the thousands of generations of mice, monkeys, orangutans, gorillas, and chimpanzees since they all had a common ancestor, there have been only two changes in the FOXP2 gene that alter its protein product—one in the ancestors of mice and one in the ancestors of orangutans. But perhaps having the peculiar human form of the gene is a prerequisite of speech. In human beings, since the split with chimpanzees (merely yesterday) there have already been two other changes that alter the protein. And ingenious evidence from the paucity of silent mutations suggests that these changes happened very recently and were the subject of a “selective sweep.” This is the technical term for elbowing all other versions of the gene aside in short order. Sometime after 200,000 years ago, a mutant form of FOXP2 appeared in the human race, with one or both of the key changes, and that mutant form was so successful in helping its owner reproduce that his or her descendants now dominate the species to the utter exclusion of all previous versions of the gene.
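The distinction doing the work in that “silent mutations” argument can be sketched with a toy classifier. A mutation is silent (synonymous) when the altered codon still codes for the same amino acid, and so is invisible to selection on the protein. The codons below are hypothetical examples drawn from the standard genetic code, not the actual FOXP2 sequence:

```python
# A small excerpt of the standard genetic code (codon -> amino acid).
CODON_TABLE = {
    "AGC": "Ser", "AGT": "Ser", "AAC": "Asn", "AAT": "Asn",
    "CGA": "Arg", "CGG": "Arg", "GCA": "Ala", "GCC": "Ala",
}

def classify(codon_a, codon_b):
    """Classify the difference between two aligned codons."""
    if codon_a == codon_b:
        return "identical"
    if CODON_TABLE[codon_a] == CODON_TABLE[codon_b]:
        return "silent"       # same amino acid: no change to the protein
    return "replacement"      # alters the protein, so selection can see it

print(classify("AAT", "AAC"))  # silent: both codons encode asparagine
print(classify("AAC", "AGC"))  # replacement: asparagine becomes serine
```

Here AAT → AAC is silent (both encode asparagine), while AAC → AGC swaps asparagine for serine and so alters the protein; a recent selective sweep leaves protein-altering changes standing amid conspicuously few silent ones.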

At least one of the two changes, which substitutes a serine molecule for an arginine at the 325th position (out of 715) in the construction of the protein, almost certainly alters the switching on and off of the gene. It might, for instance, allow the gene to be switched on in a certain part of the brain for the first time. This might, in turn, allow FOXP2 to do something new. Remember that animals seem to evolve by giving the same genes new jobs, rather than by inventing new genes. Admittedly, nobody knows exactly what FOXP2 does, or how it enables language to come into existence, so I am already speculating. It remains possible that rather than FOXP2 allowing people to speak, the invention of speech put pressure on the GOD to mutate FOXP2 for some unknown reason: that the mutation is consequence, not cause.

But since I am already beyond the perimeter of the known world, let me lay out my best guess for how FOXP2 enables people to speak. I suspect that in chimpanzees the gene helps to connect the part of the brain responsible for fine motor control of the hand to various perceptual parts of the brain. In human beings, its extra (or longer?) period of activity allows it to connect to other parts of the brain including the region responsible for motor control of the mouth and larynx.

I think this because there may be a link between FOXP2 and Rizzolatti’s mirror neurons. One of the parts of the brain active in the volunteers during Rizzolatti’s grasping experiment, known as area 44, corresponds to the area where the mirror neurons were found in the monkey brain. This is part of what is sometimes called Broca’s area, and that fact thickens the plot considerably, because Broca’s area is a vital part of the human brain’s “language organ.” In both monkeys and people, this part of the brain is responsible for moving the tongue, mouth, and larynx (which is why a stroke in this area disables speech), but also for moving the hands and fingers. Broca’s area does both speech and gesture.

Herein lies a vital clue to the origin of language itself. A truly extraordinary idea has begun to take shape in the minds of several different scientists in recent years. They are beginning to suspect that human language was originally transmitted by gesture, not speech.

The evidence for this guess comes from many directions. First there is the fact that to produce “calls” monkeys and people both use a completely different part of the brain from that which human beings use to produce language. The vocal repertoire of the average monkey or ape consists of several tens of different noises, some of which express emotions, some of which refer to specific predators, and so on. All are directed by a region of the brain lying near the midline. This same region of the brain directs human exclamations: the scream of terror, the laugh of joy, the gasp of surprise, the involuntary curse. Somebody can be rendered speechless by a stroke in the temporal lobe and still exclaim fluently. Indeed, some aphasics continue to be able to swear with gusto but find arm movements impossible.

Second, the “language organ,” by contrast, sits on the (left) side of the brain, straddling the great rift valley between the temporal and frontal lobes—the Sylvian fissure. This is a motor region, used in monkeys and apes mainly for gesture, grasp, and touch, as well as facial and tongue movements. Most great apes are preferentially right-handed when they make manual gestures, and Broca’s area is consequently larger on the left side of the brain in chimps, bonobos, and gorillas. This asymmetry of the brain—even more marked in human beings—must therefore have predated the invention of language. Instead of the left brain growing larger to accommodate language, it would seem logical that language may have gone left because that was where the dominant gesturing hand was controlled. This is a nice theory, but it fails to explain the following awkward fact. People who learn sign language as adults do indeed use the left hemisphere; but native speakers of sign language use both hemispheres. Left-hemisphere specialization for language is apparently more pronounced in speech than it is in sign language—the opposite of what the gesture theory predicts.

A third hint in favor of the primacy of sign language comes from the human capacity for expressing language through the hands rather than the voice. To a greater or lesser extent people accompany much of their speech with gestures—even people who are speaking on a telephone, and even people who have been blind from birth. The sign language used by deaf people was once thought to be a mere pantomime of gestures mimicking actions. But in 1960 William Stokoe realized that it was a true language: it uses arbitrary signs and it possesses an internal grammar every bit as sophisticated as that of spoken language, with syntax, inflection, and all the other accoutrements of language. It possesses other features very similar to spoken languages, such as being learned best during a critical period of youth and acquired in exactly the same constructive way as spoken languages. Indeed, just as a spoken pidgin can be turned into a fully grammatical creole only when learned by a generation of children, the same has proved true of sign languages.

A final proof that speech is just one delivery mechanism for the language organ is that deaf people can become manually “aphasic” when they have strokes affecting the same regions of the brain that would cause aphasia in hearing people.

Then there is the fossil record. The first thing that the ancestors of human beings did when they separated from the ancestors of chimps more than 5 million years ago was stand on two feet. Bipedal locomotion, accompanied by a reorganization of the skeleton, occurred more than a million years before there was any sign of brain enlargement. In other words, our ancestors freed their hands to grasp and gesture long before they started to think or speak any differently from any other ape. One advantage of the gesture theory is that it immediately suggests why human beings developed language and other apes did not. Bipedalism freed the hands not just to carry things, but to talk. The front limbs of most primates are too busy propping up the body to get into conversations.

Robin Dunbar suggests that language took over the role that grooming occupies among apes and monkeys—the maintenance and development of social bonds. Indeed, apes probably use their fine manual dexterity at least as much when seeking ticks in each other’s fur as they do when picking fruit. In primates that live in large social groups, grooming becomes extremely time-consuming. Gelada baboons spend up to 20 percent of their waking hours grooming each other. People started to live in such large groups, Dunbar argues, that it became necessary to invent a form of social grooming which could be done to several people at once: language. Dunbar notes that human beings do not use language just to communicate useful information; they use it principally for social gossip: “Why on earth is so much time devoted by so many to the discussion of so little?”

This idea about grooming and gossip can be given an extra twist: if the first protohumans to use language began to gossip with hand gestures, they would have necessarily neglected their grooming duties. You can’t groom and gossip at the same time if you talk with your hands. I am tempted to suggest that gestural language therefore brought with it a crisis of personal hygiene among our ancestors, which was solved only when they stopped being hairy and started wearing disposable clothes instead. But some waspish reviewer would accuse me of telling just-so stories, so I withdraw the idea.

According to the scanty fossil evidence, speech, unlike manual dexterity, appeared late in human evolution. The neck vertebrae of the 1.6-million-year-old Nariokotome skeleton discovered in 1984 in Kenya have space for only a narrow spinal cord like an ape’s, half the width of a modern human spinal cord. Modern people need a broad cord to supply the many nerves to the chest for close control of breathing during speech. Other, still later skeletons of Homo erectus have a high apelike larynx that might be incompatible with elaborate speech. The attributes of speech appear so late that some anthropologists have been tempted to infer that language was a recent invention, appearing as recently as 70,000 years ago. But language is not the same thing as speech: syntax, grammar, recursion, and inflection may be ancient, but they may have been done with hands, not voice. Perhaps the FOXP2 mutation of less than 200,000 years ago represents not the moment that language itself was invented but the moment that language could be expressed through the mouth as well as through the hands.

By contrast, the peculiar features of the human hand and arm appear early in the fossil record. Lucy, the 3.5-million-year-old Ethiopian, already had a long thumb and altered joints at the base of the fingers and in the wrist, enabling her to grasp objects between thumb, index, and middle finger. She also had an altered shoulder allowing overhand throwing, and her erect pelvis allowed a rapid twist of the body axis. All three of these features are necessary for the human skill of grasping, aiming, and throwing a small rock—something that is beyond the capability of a chimpanzee, whose throwing consists of randomly aimed underhand efforts. In humans, throwing is an extraordinary skill, requiring precision timing in the rotation of several joints and the exact moment of release. Planning such a movement requires more than a small committee of neurons in the brain; it needs coordination between different areas. Perhaps, says the neuroscientist William Calvin, it was this “throwing planner” that found itself suited to the task of producing sequences of gestures ordered by a form of early grammar. This would explain why both sides of the Sylvian fissure, connected by a cable called the arcuate fasciculus, are involved.

Whether it was throwing, toolmaking, or gesture itself that first enabled the perisylvian parts of the brain to become accidentally preadapted for symbolic communication, the hand undoubtedly played its part. As the neurologist Frank Wilson complains, we have too long neglected the human hand as a shaper of the human brain. William Stokoe, a pioneer of the study of sign language, suggested that hand gestures came to represent two distinct categories of word: things by their shape, and actions by their motion, thus inventing the distinction between noun and verb that runs so deeply through all languages. To this day, nouns are found in the temporal lobe, verbs in the frontal lobe across the Sylvian fissure. It was their coming together that transformed a protolanguage of symbols and signs into a true grammatical language. And perhaps it was hands, not the voice, that first brought them together. Only later, perhaps to be able to communicate in the dark, did speech invade grammar. Stokoe died in 2000, shortly after completing a book on the hand theory.

You can quibble about the historical details, and I am no die-hard devotee of the hypothesis about hands and language, but for me the beauty of this story lies in the way it brings imitation, hands, and voice into the same picture. All are essential features of the human capacity for culture. To imitate, to manipulate, and to speak are three things that human beings are peculiarly good at. They are not just central to culture: they are culture. Culture has been called the mediation of action through artifacts. If opera is culture, La Traviata is all about the skillful combination of imitation, voice, and dexterity (in the making as well as the playing of musical instruments). What those three brought into being was a system of symbols, so that the mind could represent within itself, and within social discourse and technology, anything from quantum mechanics to the Mona Lisa or an automobile. But perhaps more important, they brought the thoughts of other minds together: they externalized memory. They enabled us to acquire far more from our social surroundings than we could ever hope to learn for ourselves. The words, tools, and ideas that occurred to somebody far away and long ago can be part of the inheritance of each individual person born today.

Whether the hand theory is right or not, the central role of symbolism in the expansion of the human brain is a proposition many can agree on. Culture itself can be “inherited” and can select for genetic change to suit it. In the words of the three scientists most closely associated with this theory of the coevolution of genes and cultures:

A culture-led process, acting over a long period of human evolutionary history, could easily have led to a fundamental reworking of human psychological dispositions.

The linguist and psychologist Terence Deacon argues that at some point early human beings combined their ability to imitate with their ability to empathize and came up with an ability to represent ideas by arbitrary symbols. This enabled them to refer to ideas, people, and events that were not present and so to develop an increasingly complex culture, which in turn put pressure on them to develop larger and larger brains in order to “inherit” items of that culture through social learning. Culture thereby evolves hand in hand with real genetic evolution.

Susan Blackmore has developed Richard Dawkins’s idea of the meme to turn this process on its head. Dawkins describes evolution as competition between “replicators” (usually genes) for “vehicles” (usually bodies). Good replicators must have three properties: fidelity, fecundity, and longevity. If they do, then competition between them, differential survival, and hence natural selection for progressive improvement are not just likely but inevitable. Blackmore argues that many ideas and units of culture are sufficiently enduring, fecund, and high-fidelity to count as replicators, and that they therefore compete to colonize brain space. Words and concepts thus provide the selection pressure to drive the expansion of the brain. The better a brain was at copying ideas, the better it could cause the body to thrive.
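Dawkins's three replicator properties are enough, on their own, to produce selection; a toy simulation makes the point. All the memes and numbers below are invented for illustration, not drawn from Blackmore's work:

```python
import random

# Toy competition between "memes" for limited brain space.
# Each meme has Dawkins's three replicator properties:
#   fecundity - copies attempted per surviving copy per generation
#   fidelity  - chance each attempted copy comes out right
#   longevity - chance an existing copy survives the generation

def generation(counts, traits, rng, capacity=1000):
    pool = []
    for meme, n in counts.items():
        fecundity, fidelity, longevity = traits[meme]
        survivors = sum(rng.random() < longevity for _ in range(n))
        copies = sum(rng.random() < fidelity
                     for _ in range(survivors * fecundity))
        pool += [meme] * (survivors + copies)
    rng.shuffle(pool)        # brain space is finite: keep a random
    pool = pool[:capacity]   # subset once the pool overflows
    return {meme: pool.count(meme) for meme in traits}

rng = random.Random(42)
traits = {                   # (fecundity, fidelity, longevity)
    "catchy": (3, 0.9, 0.9),
    "fuzzy":  (3, 0.5, 0.9),  # spreads as often, copied less faithfully
    "dull":   (1, 0.9, 0.9),  # copied faithfully, but rarely
}
counts = {meme: 50 for meme in traits}
for _ in range(30):
    counts = generation(counts, traits, rng)
print(counts)  # the best replicator crowds the others out
```

No meme "tries" to win; differential survival under finite capacity does all the work, which is exactly the inevitability the paragraph describes.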

Grammatical language is not the direct result of any biological necessity, but of the way the memes changed the environment of genetic selection by increasing their own fidelity, fecundity and longevity.

The anthropologist Lee Cronk gives a nice example of a meme. Nike, the shoe company, made a television advertisement featuring a group of east African tribesmen wearing Nike hiking boots. At the end of the commercial, one of the men turned to the camera and spoke some words. A subtitle translated them as “Just do it,” Nike’s slogan. Nike’s luck was out, because the ad was seen by Lee Cronk, who speaks the Samburu dialect of Masai. What the man actually said was, “I don’t want these. Give me big shoes.” Cronk’s wife, a journalist, wrote the story, and it soon appeared on the front page of USA Today and in Johnny Carson’s monologue on The Tonight Show. Nike sent Cronk a free pair of boots; when Cronk was next in Africa, he gave them to a tribesman.

This was an everyday cross-cultural prank. It lasted a week in 1989 and was soon forgotten. But a few years later, once the Internet had been developed, Cronk’s story found its way to a website. From there it spread, minus the date, as if it were a new story, and Cronk now gets perhaps one inquiry a month about it. The moral of the story is that memes need a medium to replicate in. Human society works quite well; the Internet works even better.

As soon as human beings had symbolic communication, the cumulative ratchet of culture could begin to turn: more culture demanded bigger brains; bigger brains allowed more culture.


Yet nothing happened. Shortly after the time of the Nariokotome boy, 1.6 million years ago, there appeared on Earth a magnificent tool: the Acheulean hand ax. It was undoubtedly invented by members of the boy’s species, the unprecedentedly huge-brained Homo ergaster, and it was a great leap forward from the simple, irregular Oldowan tools that preceded it. Two-faced, symmetrical, shaped like a teardrop, sharpened all around, made of flint or quartz, it is a thing of beauty and mystery. Nobody knows for sure if it was used for throwing, cutting, or scraping. It spread north to Europe with the diaspora of Homo erectus, the Coca-Cola of the Stone Age, and its technological hegemony lasted a million years: it was still in use just half a million years ago. If this was a meme, it was spectacularly faithful, fecund, and enduring. Astonishingly, during that time not one of the hundreds of thousands of people alive from Sussex to South Africa seems to have invented a new version. There is no cultural ratchet, no ferment of innovation, no experiment, no rival product, no Pepsi. There is only a million years of hand ax monopoly. The Acheulean Hand Ax Corporation Inc. must have cleaned up. Big time.

Theories of cultural coevolution do not predict this. They demand an acceleration of change once technology and language come together. The creatures that made these axes had brains big enough and hands versatile enough for the task, and could learn it from each other, yet they did not use their hands or brains to improve the product. Why did they wait more than a million years before suddenly beginning the inexorable, exponential progression of technology from spear-thrower to plow to steam engine to silicon chip?

This is not to denigrate the Acheulean hand ax. Experiments show that it is almost impossible to improve on this ax as a tool for butchering large game, except by inventing steel. It could be perfected only by the careful use of “soft hammers” made of bone. But strangely, its makers seem to have had little pride in their tools, making fresh ones for each kill. In at least one case, at Boxgrove in Sussex, where more than 250 hand axes have been found, it appears that they were laboriously manufactured by at least six right-handed individuals at the site of a dead horse, then discarded nearby almost unused: some of the flakes knocked off in the process of making them showed more wear from butchery than the axes themselves. None of this explains why people capable of making such a thing did not also make spearheads, arrow points, daggers, and needles.

The writer Marek Kohn’s explanation is that hand axes were not really practical tools at all, but the first jewelry: ornaments made by males showing off to females. Kohn argues that they show all the hallmarks of sexual selection; they are far more elaborate and (in particular) symmetrical than function demanded. They were artistry designed to impress the opposite sex, like the decorated bower built by a bowerbird or the elaborate tail grown by a peacock. That, says Kohn, explains the million years of stasis. Men were trying to make the ideal hand ax, not the best one. At least until very recently, in art and craft, Kohn argues, virtuosity, not creativity, has been the epitome of perfection. Women judged a potential mate by his design for a hand ax, not by his inventiveness. The image comes to mind of the maker of the best hand ax at Boxgrove sneaking off after a lunch of horse steaks for an assignation in the bushes with a fertile female, while his friends disconsolately pick up another lump of flint and start practicing for the next occasion.

Some anthropologists go further and argue that big-game hunting itself was sexually selected. For many hunter-gatherers, it was and is a remarkably inefficient way of getting food, yet men devote a lot of effort to it. They seem more interested in showing off by bringing back the occasional giraffe leg with which to entice a woman into sex than they are in filling the larder.

I am a fan of the sexual selection theory, though I suspect it is only part of the story. But it does not solve the problem of the origin of culture; it is just a new version of the coevolution of the brain and culture. If anything, it makes the problem worse. The paleolithic troubadours whose ladies were so impressed by a well-crafted hand ax would surely have been even more impressed by a mammoth ivory needle or a wooden comb—something new. (Darling, I’ve got a surprise for you. Oh, honey, another hand ax: just what I always wanted.) Brains were growing rapidly bigger long before the Acheulean hand ax and they kept on getting bigger during its long monopoly. If that expansion was driven by sexual selection, then why were the hand axes changing so little? The truth is that however you look at it, the mute monotony of the Acheulean hand ax stands in silent reproach to all theories of gene–culture evolution: brains got steadily bigger with no help from changing technology, because technology was static.

From about half a million years ago, technological progress was steady but very, very slow until the Upper Paleolithic revolution, sometimes known as the “great leap forward.” Around 50,000 years ago in Europe, painting, body adornment, trading over long distances, artifacts of clay and bone, and elaborate new stone designs all seem to appear at once. The suddenness is partly illusory, no doubt, because the tools had developed gradually in some corner of Africa before spreading elsewhere by migration or conquest. Indeed, Sally McBrearty and Alison Brooks have argued that the fossil record supports a very gradual, piecemeal revolution in Africa starting almost 300,000 years ago. Blades and pigments were already in use by then. McBrearty and Brooks place the invention of long-distance trade at 130,000 years ago, for instance, on the basis of the discovery at two sites in Tanzania of pieces of obsidian (volcanic glass) used to make spear points. This obsidian came from the Rift Valley in Kenya, more than 200 miles away.

The sudden revolution of 50,000 years ago at the start of the Upper Paleolithic is clearly a Eurocentric myth, caused by the fact that far more archaeologists work in Europe than in Africa. Yet there is still something striking to explain. The fact is that the inhabitants of Europe were culturally static until then, and so, before 300,000 years ago, were the inhabitants of Africa. Their technology showed no progress. After those dates, the technology changed with every passing year. Culture became cumulative in a way that it simply was not before. Culture was changing without waiting for genes to catch up.

I am faced with a stark and rather bizarre conclusion, one that I do not think has ever been properly confronted by theorists of culture and prehistory. The big brains which make people capable of rapid cultural progress—of reading, writing, playing the violin, learning about the siege of Troy, driving a car—came into being long before much culture had accumulated. Progressive, cumulative culture appeared so late in human evolution as to have had little chance to shape the way people think, let alone the size of the brain, which had already reached a maximum with little help from culture. The thinking, imagining, and reasoning brain evolved at its own pace to solve the practical and sexual problems of a social species rather than to cope with the demands of culture transmitted from others.

I am arguing that a lot of what we celebrate about our brain has nothing to do with culture. Our intelligence, imagination, empathy, and foresight came into existence gradually and inexorably, but with no help from culture. They made culture possible, but culture did not make them. We human beings would probably be almost as good at playing, plotting, and planning if we had never spoken a word or fashioned a tool. If, as Nick Humphrey, Robin Dunbar, Andrew Whiten, and others of the “Machiavellian school” have argued, the human brain expanded to cope with social complexity in large groups—with cooperation, betrayal, deceit, and empathy—then it could have done so without inventing language or developing culture.

Yet culture does explain the ecological success of human beings. Without the ability to accumulate and hybridize ideas, people would never have invented farming, cities, medicine, or any of the things that enabled them to take over the world. The coming together of language and technology dramatically altered the fate of the species. Once they came together, cultural take-off was inevitable. We owe our abundance to our collective, not our individual, brilliance.

Inexplicable as the origin of cumulative culture may be, once progress began it fed upon itself. The more technologies people invented, the more food people could catch, the more minds those technologies could support, and the more time people could spare for invention. Progress now became inevitable, a notion that is supported by the fact that cultural take-off happened in parallel in different parts of the world. Writing, cities, pottery, farming, currencies, and many other things came together at the same time independently in Mesopotamia, China, and Mexico. After 4 billion years with no literate cultures, the world suddenly had three within a few thousand years or less. It had more if, as seems likely, Egypt, the Indus Valley, west Africa, and Peru experienced cultural take-off independently. Robert Wright, whose brilliant book Nonzero explores this paradox in depth, concludes that human density played a part in human destiny. Once the continents were populated, albeit sparsely, and people could no longer emigrate to empty territory, density began to rise in the most fertile areas. With rising density came the possibility—no, the inevitability—of increasing division of labor and therefore increasing technical invention. The population becomes an “invisible brain” providing ever greater markets for individual ingenuity. And in those places where the available population suddenly shrank—such as Tasmania, when it was cut off from mainland Australia—cultural and technological progress did go suddenly into reverse.
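The self-feeding character of this progress can be sketched as a toy feedback loop, with every coefficient invented purely for illustration: tools raise the food supply, food supports more minds, and more minds produce more inventions.

```python
# Minimal sketch of the cultural ratchet as a positive feedback loop.
# All coefficients are invented for illustration.

tools, minds = 1.0, 1.0
tool_history = []
for step in range(10):
    food = 1.0 + 0.5 * tools          # better tools, more food
    minds = minds * min(food, 2.0)    # population grows while fed (capped)
    tools = tools + 0.1 * minds       # each mind adds a chance of invention
    tool_history.append(tools)

# Each generation adds more tools than the last: growth accelerates.
gains = [b - a for a, b in zip([1.0] + tool_history, tool_history)]
print([round(t, 1) for t in tool_history])
assert all(later > earlier for earlier, later in zip(gains, gains[1:]))
```

The assertion at the end checks the defining property of a ratchet like this: not merely growth, but growth that speeds up, because each round's output enlarges the next round's input.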

Density itself may not matter so much as what it allows: exchange. The prime cause of that success in the human species, as I argued in my book The Origins of Virtue, was the invention of the habit of exchanging one thing for another, for with it came the division of labor. The economist Haim Ofek thinks it “not unreasonable to view the Upper Paleolithic transition as one of the first in a series of fairly successful human attempts to escape (as populations) from poverty to riches through the institution of trade and the agency of the division of labor.” He argues that what was invented at the start of the revolution was specialization. Until that point, though there may have been sharing of food and tools, there was no allocation of different tasks to different individuals. The archaeologist Ian Tattersall agrees: “The sheer diversity of material production in [early modern human] society was the result of the specialization of individuals in different activities.” Is it possible that once exchange and the division of labor were invented, progress was inevitable? Certainly a virtuous circle is at work in society today, and has been since the dawn of history, whereby specialization increases productivity, which increases prosperity, which allows technological invention, which further increases specialization. As Robert Wright puts it, “Human history involve[s] the playing of ever more numerous, ever larger and ever more elaborate non-zero-sum games.”

So long as human beings lived, like other apes, in separate and competing groups, swapping only adolescent females, there was a limit to how rapidly culture could change, however well equipped human brains were to scheme, to woo, to speak, or to think, and however high the population density was. New ideas had to be invented at home; they could not generally be brought in. Successful inventions might help their owners displace rival tribes and take over the world. But innovation came slowly. With the arrival of trade—exchange of artifacts, food, and information initially between individuals and later between groups—all that changed. Now a good tool, or a good myth, could travel, could meet another tool or myth, and could begin to compete for the right to be replicated by trade: that is, culture could evolve.

Exchange plays the same role in cultural evolution that sex plays in biological evolution. Sex brings together genetic innovations made in different bodies; trade brings together cultural innovations made in different tribes. Just as sex enabled mammals to combine two good inventions—lactation and the placenta—so trade enabled early people to combine draft animals and wheels to better effect. Without exchange, the two would have remained apart. Economists have argued that trade is a recent invention, facilitated by literacy, but all the evidence suggests that it is far more ancient. The Yir Yoront aborigines living on the Cape York peninsula were trading sting-ray barbs from the coast for stone axes from the hills through an elaborate network of trading contacts long before they achieved literacy.


All of this argument supports the conclusion that the progressive evolution of culture since the Upper Paleolithic revolution happened without altering the human mind. Culture seems to be the cart, not the horse—the consequence, not the cause, of some change in the human brain. Boas was right in holding that you can invent any and every culture with the same human brain. The difference between me and one of my African ancestors of 100,000 years ago is not in our brains or genes, which are basically the same, but in the accumulated knowledge made possible by art, literature, and technology. My brain is stuffed with such information, whereas his larger brain was just as stuffed but with much more local and ephemeral knowledge. Culture-acquiring genes do exist; but he had them too.

What was it that changed about 200,000 to 300,000 years ago to enable human beings to achieve cultural lift-off in this way? It must have been a genetic change, in the banal sense that brains are built by genes and something must have changed in the way brains were built. I doubt that it was merely a matter of size: a mutation in the ASPM gene allowing an extra 20 percent of gray matter. More likely it was some change in wiring that suddenly allowed symbolic or abstract thinking. It is tempting to believe that FOXP2, by rewiring the language organ, somehow started the flywheel of exchange. But it seems just too fortunate for science to have stumbled on the key gene so early in its search, so I do not think FOXP2 is the answer. I predict that the changes were in a small number of genes, simply because the lift-off is so sudden, and that before long science may know which ones.

Whatever the changes were, they enabled the human mind to take novelty in its stride much more than before. We are not selected to make minute predictive adjustments to a steering wheel while moving at 70 miles an hour, or to read handwritten symbols on paper, or to imagine negative numbers. Yet we can all do these things with ease. Why? Because some set of genes enables us to adapt. Genes are cogs in the machine, not gods in the sky. Switched on and off throughout life, by external as well as internal events, their job is to absorb information from the environment at least as often as to transmit it from the past. Genes do more than carry information; they respond to experience. It is time to reassess the very meaning of the word “gene.”


If human nature did not change when culture changed—Boas’s central insight, proved by archaeology—then the converse is also true: cultural change does not alter human nature (at least not much). This fact has bedeviled utopians. One of the most persistent ideas in utopias is the abolition of individualism in a community that shares everything. Indeed, it is almost impossible to imagine a cult without the ingredient of communalism. The hope that the experience of a communal culture can change human behavior flowers with special vigor every few centuries. From dreamers like Henri de Saint-Simon and Charles Fourier to practical entrepreneurs like John Humphrey Noyes and Bhagwan Shree Rajneesh, gurus have repeatedly preached the abolition of individual autonomy. The Essenes, Cathars, Lollards, Hussites, Quakers, Shakers, and hippies have tried it, not to mention the many sects too small to have memorable names. And there is one identical result: communalism does not work. Again and again, in accounts of these communities, what brings them down is not the disapproval of the surrounding society—though that is strong enough—but the internal tension caused by individualism.

Usually, this tension first develops over sex. It seems impossible to condition human beings to enjoy free love and abolish their desire to be both selective and possessive about sexual partners. You cannot even weaken this jealousy by rearing a new generation in a sharing culture: the jealous individualism actually gets worse in the children of the commune. Some sects survive by abolishing sex—the Essenes and Shakers were strictly celibate. This, however, leads to extinction. Others go to great lengths to try to reinvent sexual practice. John Noyes’s Oneida community in upstate New York in the nineteenth century practiced what he called “complex marriage” in which old men made love to young women and old women to young men, but ejaculation was forbidden. In his ashram at Poona, the Rajneesh initially seemed to have gotten free love going nicely. “It is no exaggeration to say that we had a feast of f***ing, the likes of which had probably not been seen since the days of Roman bacchanalia,” boasted one participant. But that ashram, and the ranch in Oregon which followed it, were soon torn apart by jealousy and feuds, not least over who got to sleep with whom. The experiment ended, 93 Rolls-Royces later, with attempted murder, mass food poisoning to gerrymander a local election, and immigration fraud.

There are limits to the power of culture to change human behavior.
