Book: The Agile Gene

If we follow a particular recipe, word for word, in a cookery book, what finally emerges from the oven is a cake. We cannot now break the cake into its component crumbs and say: this crumb corresponds to the first word in the recipe; this crumb corresponds to the second word in the recipe.

Richard Dawkins

The job of curator of the mollusc collection at the natural history museum of Geneva is not to be sniffed at. When it was offered to Jean Piaget, he was well qualified, having published nearly 20 papers on snails and their cousins. But he turned it down, and for a good reason: he was still a schoolboy. He went on to do a doctorate on Swiss molluscs before his godfather, alarmed at his obsession with natural history, diverted him from malacology to philosophy first in Zurich and then at the Sorbonne. However, Piaget’s fame rests on his third career, begun at the Rousseau Institute in Geneva in 1925: as a child psychologist. Between 1926 and 1932, still precocious, he published five influential books on the minds of children. It is to Piaget that modern parents owe their obsession with the idea that little Johnny must meet his developmental milestones.

Piaget was not the first person to observe children as if they were animals—Darwin did the same with his own children—but Piaget was probably the first to think of them not as apprentice adults but as a species equipped with a characteristic mind. The “errors” five-year-old children made in answer to questions on intelligence tests revealed to Piaget the peculiar but consistent ways in which their minds worked. In trying to answer the question “How does knowledge grow?” he saw a progressive, cumulative construction of the mind during childhood in response to experience. Each child goes through a series of developmental stages, always in the same order, though not always at the same rate. First comes the sensorimotor stage, when the infant is little more than a bundle of reflexes and reactions; it cannot yet conceive that objects still exist when hidden. Next comes the preoperational stage, a time of egocentric curiosity. Then comes the stage of concrete operations. And last, on the brink of adolescence, comes the dawn of abstract thought and deductive reasoning.

Piaget realized that development is more continuous than this outline implies. But he insisted that just as children will not walk or talk until they are “ready,” so the elements of what the world calls intelligence are not merely absorbed from the outside world; they appear when the developing brain is ready to learn them. Piaget saw cognitive development neither as learning nor as maturation, but as a combination of the two, a sort of active engagement of the developing mind with the world. He thought the mental structures necessary for intellectual development are genetically determined, but the process by which the maturing brain develops requires feedback from experience and social interaction. That feedback takes two forms: assimilation and accommodation. A child assimilates predicted experiences and accommodates to unexpected experiences.

In terms of nature and nurture, Piaget, alone among the men in my photograph, defies categorization as an empiricist or a nativist. Where his contemporaries Konrad Lorenz and B. F. Skinner took up extreme positions, the first as a champion of nature, the second of nurture, Piaget picked a careful path right through the middle. With his emphasis on development through stages, Piaget vaguely prefigured the ideas of formative experiences in youth. He was wrong in many particulars. His hypothesis that a child understands the spatial properties of objects only by handling them has been disproved. Spatial understanding seems to be much closer to innate than that—even very small babies can understand spatial properties of things they have never handled. Nonetheless, Piaget deserves some credit for being the first to take seriously the fourth dimension of human nature—the time dimension.


This concept, rediscovered a little later by zoologists, came to play a central role in one of the most illuminating of the debates over nature and nurture, the debate between Konrad Lorenz and Daniel Lehrman in the 1950s and 1960s. Lehrman was an ebullient and articulate New Yorker with a passion for bird-watching, who made a discovery about the behavior of ring doves that had broad implications for human beings as well. He found that the male dove’s courtship dance triggers a change in the female dove’s hormones. Thus, an external experience can cause, via the nervous system, an internal, biological change in the organism. Lehrman did not know it, but such a response is mediated by the switching on and off of genes.

In 1953, before the climax of his work on doves, Lehrman decided to use his halting German, learned while he was decoding radio intercepts for American intelligence in the Second World War, to translate Lorenz’s work into English—in order to criticize it. His powerful critique was to influence a generation of ethologists. Even Niko Tinbergen would moderate his views after reading Lehrman. The Austrian Lorenz had been championing instinct—the idea that some behavior is innate in the sense that it will emerge even if the animal is insulated from its normal environment from birth. Most animals, said Lorenz, were driven to elaborate and sophisticated behavior patterns, not by their experience but by their genes. In his critique Lehrman charged that Lorenz had omitted all mention of development: of how the behavior came to be. It did not spring fully formed from the gene; the genes built a brain, which absorbed experience before it emitted behavior. In such a system, what is meant by the word “innate”?

Lorenz replied at length, and Lehrman responded again, but the two were largely at cross-purposes. According to Lehrman, the fact that a behavior is the product of natural selection does not mean it is “innate”—meaning produced without experience. Before a dove can develop a preference for mating with its own species, it needs to experience a parent dove; the same is not true in a cowbird, which like a cuckoo never sees its parents and therefore does have a truly “innate” preference for a mate. Lorenz hardly cared how the behavior was produced so long as it was obviously a result of natural selection and was expressed in the adult animal in much the same way given normal experience. For him, innate meant inevitable. Lorenz was always going to be more interested in the why than the how.

Tinbergen resolved the issue to the satisfaction of many when he said that a student of animal behavior should ask four questions about a particular behavior: What are the mechanisms that cause the behavior? How does the behavior come to develop in the individual (Lehrman’s question)? How has the behavior evolved? What is the function or survival value of the behavior (Lorenz’s question)?

The argument was cut short by Lehrman’s death in 1972. Yet in recent decades Lehrman’s developmental argument has become something of a standard for rallying those who think the nativists of behavior genetics and evolutionary psychology have gone too far. The “developmentalist challenge” takes many forms, but its central charge is that many modern biologists talk much too glibly about “genes for” behavior, ignoring the uncertainty, complexity, and circularity of the system through which genes come to influence behavior. According to the philosopher Ken Schaffner, a five-point manifesto of the developmentalist challenge might go something like this: (1) genes deserve parity with other causes; (2) they are not “preformationist”; (3) their meaning depends heavily on context; (4) the effects of genes and environments are seamless and inseparable; and (5) the psyche “emerges” unpredictably from the process of development.

In its strongest form, as presented by the zoologist Mary Jane West-Eberhard, the challenge claims to present a “second evolutionary synthesis” that will overthrow the first—the fusion of Mendel and Darwin that came about in the 1930s—by elevating the mechanisms of development alongside those of genetics. For instance—and this is my example—take a glance at the pattern of blood vessels on the back of your hands. Although the veins get to the same destinations on both hands, they get there by slightly different routes. This is not because there are different genetic programs for the different hands, but because the genetic program is flexible: in some way it delegates local steering to the vessels themselves. Development accommodates to the environment: it is capable of coping with different circumstances and still achieving a result that works. If different developments can result from the same set of genes, then different genes might also be capable of achieving the same outcome. Or to put it in technical terms, development is well “buffered” against minor genetic changes. This might explain two intriguing phenomena. First, wild breeds, such as wolves, are much less sensitive to individual genetic mutations than inbred forms such as pedigreed dogs: they are buffered by their genetic variation. In turn, this might explain the otherwise puzzling fact that there are so many different versions of each gene about in the population (in human beings as well as other wild animals). Many genes come in two slightly different versions, one on each equivalent chromosome, which may help to provide the flexibility to develop a working body in different environments.

The development of behavior need be no less flexible and buffered than the development of anatomy. In its weaker form, the developmentalist challenge is merely a reminder to behavior geneticists not to draw simplistic conclusions, and not to encourage newspaper headline writers to speak of “gay genes” or “happiness genes.” Genes work in huge teams and build the organism and its instincts not directly but through a flexible process of development. Those who actually study genes and behavior—in mice, flies, and worms—say they are well aware of the dangers of oversimplification, and they are sometimes a little irritated by the developmentalists. As much as they emphasize its complications and flexibility, even development is still at root a genetic process. Experiments confirm the complexity, plasticity, and circularity of the system but also reveal that even the environment affects development only by switching genes on and off—genes that allow plasticity and learning. Ralph Greenspan, a pioneer of the study of courtship among fruit flies, put it this way:

Just as the ability to carry out courtship is directed by genes, so too is the ability to learn during the experience. Studies of this phenomenon lend further support to the likelihood that behavior is regulated by a myriad of interacting genes, each of which handles diverse responsibilities in the body.


Once you try to think about the fourth dimension of the organism, several useful parables come to mind, all of them rather graphic. Metaphor, in my view, is the lifeblood (ha!) of good scientific prose, so I shall explore two of these parables at length.

The first is the parable of canalization, coined by the British embryologist Conrad Waddington in 1942. Consider a ball at the top of a hill. As it rolls down, the hill is smooth at first, but after a while gullies begin to appear in the surface; before long the ball is rolling down a narrow channel. On some hills the gullies converge into one channel; on others, they diverge into several channels. The ball is the animal. The hill with the converging gullies represents the development of the most “innate” kind of behavior: this behavior will always turn out roughly the same whatever the organism’s experience. The hill with the diverging gullies represents behavior that is much more “environmentally” determined. Yet both kinds of behavior still require genes, experience, and development to appear at all. So, for instance, grammar is highly canalized; vocabulary is not. The formulaic song of a wren—which I just heard outside my window—is much more canalized than the imitative and inventive song of the thrush I can also hear.

Equating innate behavior with canalized development is a useful, if limited, idea, not least because it cuts across the dichotomy between genes and environment so cleanly: something can be well specified by genes and still thrown into a different channel by the environment. If personality and IQ are highly heritable in most kinds of society, this implies that their development is narrowly canalized—it would take a very different environment to throw the ball so far off track as to end up in a different channel. But this does not mean that the environment is unimportant: the ball still needs a hill to roll down.

For my next sermon, I will expatiate upon a different parable, one that dates from 1976, when it was coined by Pat Bateson, a British ethologist much influenced by Lehrman. This is the parable of the kitchen:

The processes involved in behavioral and psychological development have certain metaphorical similarities to cooking. Both the raw ingredients and the manner in which they are combined are important. Timing also matters. In the cooking analogy, the raw ingredients represent the many genetic and environmental influences, while cooking represents the biological and psychological processes of development.

The kitchen analogy has proved popular with both sides of the argument over nature and nurture. Richard Dawkins used the metaphor of baking a cake in 1981, while emphasizing the role of genes; his archcritic Steven Rose used the same metaphor three years later while arguing that behavior is “not in our genes.” Cooking is not a perfect metaphor—it fails to capture the alchemy of development in which two ingredients lead automatically to the production of a third and so on—but it deserves its popularity, for it expresses the fourth dimension of development very well. As Piaget noticed, the development of a certain human behavior takes a certain time and occurs in a certain order, just as the cooking of a perfect soufflé requires not just the right ingredients but also the right amount of cooking and the right order of events.

Likewise, the metaphor of cooking instantly explains how a few genes can create a complex organism. Douglas Adams, the science fiction writer, sent me an email shortly before his untimely death, criticizing the argument that 30,000 genes were too few to specify human nature. He suggested that the blueprint of a cake, such as an architect would need, would indeed be an immensely complicated document, requiring an exact vector for each raisin, an exact description of the shape and size of each dollop of icing, and so on. If the human genome were a blueprint, then even 30,000 genes would never be sufficient to specify a body, let alone a psyche. The recipe for a cake, on the other hand, is a simple paragraph. If the genome were a recipe—a set of instructions for “cooking” the raw ingredients in certain ways for certain lengths of time—then 30,000 genes would be ample. One can not only imagine such a process in the growing of a limb; one can now actually see the rudiments of how it works, gene by gene, emerging from the scientific literature.

But can you imagine such a thing for behavior? Most people’s minds boggle at the thought of molecules, made by genes, generating an instinct in the mind of a child, so they give up and call the process impenetrable. I have now set myself a considerable challenge: to explain how genes can cause the development of behavior. In this book so far I have had a stab at showing how a pair-bonding instinct is manifest in oxytocin receptor genes, and how personality is affected by BDNF genes. These are useful systems to analyze. But they raise an enormous question: how did the brain get to be built that way in the first place? It is all very well to say that oxytocin receptors expressed in the medial amygdala fire up the dopamine system with sensations of personal addiction toward the loved one. But who built the darned machine this way, and how?

Think of the Genome Organizing Device as a skilful chef, whose job is to bake a soufflé called the brain. How does it go about this task?


Consider, first, the sense of smell. At the perceptual level smell is a genetically determined sense: one gene, one scent. The mouse has 1,036 different olfactory sensors in its nose, each expressing a slightly different olfactory receptor gene. Human beings, in this respect as in many others, are impoverished: they have only 347 intact olfactory receptor genes, plus many rusting hulks of old genes (called pseudogenes). In the mouse, each cell then sends a single nerve fiber (an axon) to a different unit within the olfactory bulb of the brain. Remarkably, the cells expressing one kind of receptor gene all send their axons to just one or two units.

So, for instance, the P2 neurons in the mouse’s nose—several hundred of them—all express the same receptor gene and supply all their electrical output to stimulate just two foci in the brain. There is a steady turnover in the neurons, which live for only 90 days. Their replacements grow into the brain and reach exactly the same spot as their predecessors. A team in Richard Axel’s laboratory at Columbia University hit upon the devastating idea of killing all the P2 cells (by making them, and only them, express diphtheria toxin) and then seeing if their replacements could still find their way with no “colleagues” to hold their hands along the way. They could.

This might explain why smells are so evocative. The olfactory neurons are so faithful to the same brain foci that even though the neurons of childhood are long gone, their adult replacements follow exactly the same course into the brain. When Axel and his colleagues removed the odorant receptor gene from P2 cells, they no longer grew to their target but wandered aimlessly in the brain. When Axel replaced the P2 odorant receptor gene with one from P3, the axon now found its way directly to the P3 target. This proves that the development of a specific sense of smell requires a gene expressed in the nose and a matching gene expressed in the brain; the axons growing between them make the link.

The first insight to explain how this comes about was the work of a rather romantic contemporary of my 12 hairy men. Santiago Ramón y Cajal (1852–1934) was everything that a Spanish hero should be: artistic, flamboyant, restless, and athletic. Cajal convinced the world that the brain is made not of a continuous network of interconnected nerve fibers, but of many separated cells, each touching but not merging with others. He gets slightly more credit for this discovery than he deserves, since it was an insight shared by at least five other scientists, including the Norwegian explorer and statesman Fridtjof Nansen. But Nansen had quite enough to be famous for, so give Cajal his due. However, it was Cajal’s other intuition that interests me here. Cajal suggested that the nervous system is built by nerves growing toward chemicals that attract them. He suspected that nerves are lured to their destinations by gradients of some special substance. In this he was absolutely right.

Like one of Macbeth’s witches, I must now add to my recipe the eye of a frog. Frogs have binocular vision: they can look forward with two eyes, all the better to do range-finding on passing flies. Tadpoles, however, have eyes on the sides of their head. Since the tadpole grows into a frog, the eyes have to move into their new positions halfway through life. Problem: now the two eyes’ fields overlap so that they see the same scene. The frog’s brain must take the inputs from the left half of each eye and send them to the same part of the brain for processing together. Meanwhile the right half of the visual field of each eye must be analyzed in a different place. To do this, the GOD must change the wiring from the eye to the brain. The nerve cells from one half of each eye must cross over to the contralateral side of the brain, and those from the other half must stay on the same side. Amazingly, thanks to the work of Christine Holt and Shin-ichi Nakagawa, it is possible to describe exactly how this is done.

Each cell in the retina of the eye grows an axon toward the “optic tectum” of the brain. At the tip of the axon is an object called a growth cone, which seems to be a sort of locomotive for the axon, capable of pulling the tip of the axon in a straight line, or turning or stopping. It does each of these maneuvers in response to chemicals that attract and repel it. When the growth cones from the tadpole’s eye reach the optic chiasm, a sort of crossroads or points junction, they cross over each other so that the right half of the tadpole’s brain responds to the left eye and vice versa. But once the tadpole starts to become a frog, something changes at the chiasm. Now the nerves from the left half of the right eye and the left half of the left eye must end up in the same place, and the right halves in another place, so that the frog can see in stereo, the better to judge the distance of passing flies. New neurons grow from each retina to the brain, but this time half of them cross over the chiasm while the other half continue into the same side of the brain. Holt and Nakagawa discovered how this change is effected. A gene is switched on within the chiasm: the gene for a protein called ephrin B, which repels the growth cones. It repels only the growth cones coming from one half of each eye because only half the cells are expressing the gene for the ephrin B receptor. The repelled cones continue into the same side of the brain as the eye they came from. The cells from the other half of the eye, not expressing the receptor, ignore the signal from ephrin B and cross to the contralateral side of the brain. The effect is to give the frog binocular vision so that it can range-find flies.
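The routing rule at the chiasm is simple enough to write out as a toy program. The sketch below is my own illustration, not anything from the developmental-biology literature: the function and its arguments are invented, and the whole business is reduced to the one rule of repulsion described above.

```python
# Toy model of axon routing at the optic chiasm, reduced to the single
# rule described in the text. Purely illustrative: the function and its
# arguments are invented for this sketch, not taken from any real model.

def route_axon(eye_side, has_ephrin_b_receptor, chiasm_expresses_ephrin_b):
    """Return the side of the brain ('left' or 'right') the axon reaches."""
    if chiasm_expresses_ephrin_b and has_ephrin_b_receptor:
        return eye_side  # repelled by ephrin B: stays on its own side
    # otherwise the growth cone ignores the signal and crosses over
    return "left" if eye_side == "right" else "right"

# Tadpole: ephrin B is silent at the chiasm, so every axon crosses.
tadpole = [route_axon(side, receptor, chiasm_expresses_ephrin_b=False)
           for side in ("left", "right") for receptor in (True, False)]

# Frog: ephrin B is switched on; only the receptor-less half of each
# retina still crosses, so each half-brain now hears from both eyes.
frog = [route_axon(side, receptor, chiasm_expresses_ephrin_b=True)
        for side in ("left", "right") for receptor in (True, False)]
```

Switching on one gene at the chiasm, and its receptor in half the retina, converts fully crossed wiring into the half-crossed wiring of binocular vision; the rule itself never changes.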

Using just two genes—ephrin B and the ephrin B receptor—expressed in the right pattern in the right places at the right times, the frog has acquired the wiring that gives it binocular vision. Exactly the same genes are expressed in exactly the equivalent places in a fetal mouse, whereas in a fish or a chick the genes remain silent and no binocular vision is achieved—which is just as well, since fish and chickens have eyes on the sides of their heads, not in the front.

Ephrin B is an “axon guide,” one of a surprisingly small number of such proteins. There are four common families of axon guidance proteins: netrins, ephrins, semaphorins, and slits. Netrins generally attract axons, while the others generally repel them. Some other molecules also act as axon guides, but the number is not large. Yet it is beginning to look as if these happy few are almost all that are needed in brain-building, because the same four kinds of axon guides are cropping up wherever scientists look, repelling or attracting growth cones—and in almost all animals, including the lowliest worms. It is a system of mind-boggling simplicity, yet it seems to be capable of producing a human brain with a trillion neurons, each making a thousand connections.

Indulge me in one more case history from the molecular biology of axon guidance before I let you climb back up into psychology for air. In fruit flies, as in frogs, some axons are required to cross the midline of the animal to the other side of the brain. To do so, they need to suppress their sensitivity to “slit,” a repulsive axon guide stationed at the midline. An axon that wishes to cross the midline must suppress its expression of a gene called “robo,” which encodes the receptor for slit. This suppression makes the axon insensitive to slit, allowing it free passage through the midline checkpoint. Once the axon has crossed, robo switches back on, preventing recrossing. The axon may then switch on extra robo genes (called robo2 and robo3), which determine how far from the midline it will go: the more robos it expresses, the more strongly it is repelled by slit, and the farther from the midline it will travel.
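The midline checkpoint, too, can be caricatured in a few lines of code. The gene names below are real, but the simulation is invented for this page and models only the crossing rule, nothing more:

```python
# Toy sketch of the slit/robo checkpoint at the midline. The gene names
# are real; the simulation itself is invented for illustration only.

def growth_cone_may_pass(robo_on, at_midline):
    """slit, stationed at the midline, repels any growth cone whose
    robo receptor is switched on; passage is barred only there."""
    if at_midline and robo_on:
        return False  # repelled by slit
    return True

# Life history of a crossing axon:
assert growth_cone_may_pass(robo_on=False, at_midline=True)      # robo off: free passage
assert not growth_cone_may_pass(robo_on=True, at_midline=True)   # robo back on: no recrossing
assert growth_cone_may_pass(robo_on=True, at_midline=False)      # away from the midline, robo is harmless
```

A single repellent and a single receptor, toggled off and on, suffice to get an axon across a barrier exactly once.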

Although these genes were found in flies, it was no surprise when a mutant zebra fish soon turned up with the exact equivalent of the robo3 gene not working and with problems at the midline nerve crossings. Then came three slits and two robos in mice, again doing exactly the same job, directing traffic at the midline during the formation of the forebrain. In mice, however, the slits may do more: they may actually channel axons toward particular regions of the brain. It appears that slit and robo genes keep switching on and off in different parts of the rodent brain long after birth, guiding axons to their destinations. Since, with respect to such genes, people are just big mice, this looks like a breakthrough in understanding how human mental networks are built.

You may think this is a long way from behavior, and it surely is. My purpose so far is merely to show in outline how genes might set about building a brain according to a very complicated recipe but one that applies a few simple rules—and to show the fourth dimension of genetics, the dimension of time. I do not mean to imply that brain development is now fully understood and scientists are just filling in the details. Far from it. As always in science, the more scientists know, the more they realize they do not know. Until now fog hid the view before us. All that has happened is that it has partly dissolved to reveal glimpses of a giddy abyss of ignorance. I cannot begin to tell you how netrin and ephrin are affected by experience, for example, or how a cuckoo’s brain is equipped by these axon guides with the instinct to sing “cuckoo.” But a start has been made. And I cannot resist pointing out that this beginning has come about through genetic reductionism. To try to understand the construction of the mind without considering the individual genes involved in axon guidance would be like trying to create a forest without planting any trees.


The axon guides, standing at their guideposts directing the passing growth cones according to their receptors, are only part of the story. They explain how nerves get where they want to go but cannot explain how nerves make the right connections when they get there. It is time for another parable. Suppose a woman from London is offered a job trading bonds in New York. She migrates to New York by responding to certain signals at guideposts along the way (the railway station, the terminal, the check-in desk, the gate, the arrivals hall, the taxi stand, the hotel, the subway, and so on) until she reaches the offices of her new employer. Here, suddenly, she switches to a different kind of navigating: she connects with her new boss and her future colleagues, some of whom have also traveled from afar to that office. She finds them not by directional cues but by personal cues—name and job. In much the same way, the GOD, having guided an axon to its destination, must connect it with appropriate other neurons on arrival. The cues are no longer directional signs but badges of identity.

In the late 1980s scientists chanced upon the first example of a gene that tells a migrating axon when it has reached its destination. The story begins in 1856, when a Spanish doctor, Aureliano Maestre de San Juan, carried out a postmortem on a 40-year-old man who had no sense of smell, a small penis, and very small testes. In the man’s brain San Juan could find no olfactory bulbs. A few years later another case turned up in Austria, and doctors began to ask men with minute penises if they had a sense of smell. Excitable sexologists took these cases as evidence that noses and penises had as much in common as met the eye. In 1944, Franz Kallmann, the psychiatrist I mentioned earlier, described the syndrome of small gonads and no sense of smell as a rare genetic disorder, running in families but affecting mainly men. Somewhat unfairly, the syndrome is now named after Kallmann and not the polynomial Spaniard: that’s what you get for having so many names.

The search for the genes involved in Kallmann syndrome zeroed in on the X chromosome (of which men have no spare copy because they inherit it from the mother only) and soon focused on a gene called KAL-1. There are almost certainly two other genes on other chromosomes that can also cause Kallmann syndrome, but they remain unidentified. In recent years, it has become clear how KAL-1 works and what happens when it is broken. The gene is switched on about five weeks after conception not in the nose or the gonads but in the part of the embryonic brain that will become the olfactory bulb. It produces a protein called anosmin, which acts as a cell-adhesion molecule—that is, it causes cells to stick to each other. Anosmin somehow has a dramatic effect on the growth cones of migrating olfactory axons heading for the olfactory bulb. As these growth cones arrive at the brain in the sixth week of life, the presence of anosmin causes them to expand and to “defasciculate,” or derail. Each axon leaves its tracks and stops, connecting with the cells nearby. In people who have no working copy of KAL-1, and no anosmin, the axons never make a connection with the olfactory bulb. Feeling unwanted, they wither away.

Hence the lack of a sense of smell in people who have Kallmann syndrome. But why the small penis? Astonishingly, it appears that the cells necessary for triggering sexual development also begin life in the nose, in an evolutionarily ancient pheromone-sensing structure called the vomeronasal organ. Unlike the olfactory neurons, which merely send axons to the brain, these neurons themselves migrate to the brain. They do so along the fascicules—the rails—already laid down by the olfactory axons. In the absence of anosmin, they never reach their target and never begin their main task: the secretion of a hormone called gonadotropin-releasing hormone. Without this hormone, the pituitary gland never gets its instruction to start releasing luteinizing hormone into the blood; and without luteinizing hormone the gonads never mature, the man has low testosterone levels and therefore low libido, and he remains sexually indifferent to women even after puberty.

At last I have found a way to trace the pathway from a gene to a behavior via the building of a part of the brain. Pat Bateson cites Kallmann syndrome to stress that though genes can indeed influence behavior, the connections are tortuous and indirect. To call KAL-1 “the gene for” sexual dysfunction would be misleading, not least because it creates the dysfunction only when not working. Besides, anosmin probably has several other functions in the body. Its effect on sexual development is indirect. And there are several other genes that can go wrong and cause some or all of the same symptoms, and that are probably working at other points along the extended sequence of causes and effects. Indeed, the majority of inherited cases of Kallmann syndrome are caused by mutations in genes other than KAL-1.

Although there is no one-to-one correspondence between genes and behavior (but rather many-to-many), nevertheless KAL-1 is still, in a cautious and accidental sense, “one of the genes for” part of sexual behavior. Just as Lehrman and Piaget might have argued, it manifests its behavioral effect via the physical development of the nervous system. The gene specifies how development occurs, and that in turn specifies how behavior occurs. The spooky truth is dawning on scientists that they can regard behavior as just an extreme form of development. The nest of a bird is just as much a product of its genes as its wings are. In my garden and all over Britain song thrushes line their nests with mud, blackbirds with grass, robins with hair, and chaffinches with feathers, generation after generation, because nest building is an expression of genes. Richard Dawkins coined the phrase “extended phenotype” for this idea.

I mentioned that anosmin is a cell-adhesion molecule, and this makes it one of the most intriguing items in the GOD’s portfolio of gene products. It is still early days in understanding the role played by cell-adhesion molecules; but it seems increasingly plausible that these molecules are the badges by which neurons identify their colleagues when the brain is being wired. They are the key to how cells find each other in the crowd. I justify this highly speculative assertion on the basis of the following experiment, probably the most ingenious I have yet encountered in the study of genes and brains.

The impresario of the experiment is Larry Zipursky; the subject is a simple fruit fly. Flies have compound eyes—that is, their eyes are divided into 6,400 little hexagonal tubes, each focused on one small part of the scene. Each of these tubes sends precisely eight axons to the brain to report on what it sees—mainly movement. Six of these axons respond best to green light; the seventh responds to ultraviolet light; the eighth responds to blue light. The first six stop at an early layer of the brain; the seventh and eighth penetrate deeper, the seventh going deepest into the brain. Zipursky first showed that, almost certainly, for all eight of these cells to reach their targets the gene for N-cadherin (a cell-adhesion protein) must be switched on in the eight cells and also in their targets. What his team then did, almost incredibly, was to genetically engineer a fly so that a few of the seventh cells express only a mutant version of the N-cadherin gene, and they, and only they, turn fluorescent-green, allowing the experimenter to distinguish between the development of a mutant and normal cell in the same animal. The details of how this is achieved are impressive: they show that science is still a setting for ingenuity and virtuosity. Without N-cadherin, the seventh axon develops normally and reaches its target, but then fails to make a connection, retracts, and seems to become disoriented. Zipursky repeated the experiment with the first six neurons, and they too could not find their destination when the N-cadherin gene was not working. He concludes that N-cadherin (and, after a similar experiment, another gene called LAR, also a cell-adhesion gene) is necessary for an axon to recognize its target in the brain.

Cadherins and their kind are currently among the most glamorous molecules in biology. They owe this reputation to the role they are thought to play in enabling neurons to find each other during the wiring of the brain. They stick out of the surface of neurons like fronds of kelp from the seabed. In the presence of calcium, they stiffen into rods and grab hold of similar cadherins from neighboring cells. Their job seems to be to bind two neurons together. But they will bind to each other only if their tips are compatible, and the Genome Organizing Device seems to go to great lengths to vary the tip of the frond between different cells. This is partly because there are many different cadherin genes, and partly because of an entirely different phenomenon called alternative splicing. Bear with me while I take you on a tour of the workings of genes. A gene is a stretch of DNA letters encoding the recipe for a protein. In most cases, however, the gene is broken up into several short stretches of “sense” interrupted by long stretches of nonsense. The sense bits are called exons and the nonsense bits introns. After the gene has been transcribed into a working copy made of RNA and before it has been translated into protein, the introns are removed in a process called splicing.
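The splicing step just described can be caricatured in a few lines of code. This is a purely illustrative toy model—the exon and intron sequences below are invented, not taken from any real gene:

```python
# Toy model of RNA splicing: a primary transcript is a series of
# exons ("sense") interrupted by introns ("nonsense"); splicing
# removes the introns and joins the exons end to end.
transcript = [
    ("exon", "AUGGCU"),    # sense: kept
    ("intron", "GUAAGU"),  # nonsense: removed
    ("exon", "CCAGAA"),    # sense: kept
    ("intron", "GUGAGU"),  # nonsense: removed
    ("exon", "UGGUAA"),    # sense: kept
]

def splice(segments):
    """Keep only the exons and concatenate them into the mature message."""
    return "".join(seq for kind, seq in segments if kind == "exon")

print(splice(transcript))  # AUGGCUCCAGAAUGGUAA
```

Alternative splicing, described next, amounts to offering several interchangeable versions of an exon and keeping only one of them in each mature message.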

This was discovered in 1977 by Richard Roberts and Philip Sharp and earned them a Nobel Prize. Walter Gilbert then realized that there was more to splicing than merely cutting out the nonsense. In some genes, there are several alternative versions of each exon, lying nose to tail, and only one is chosen; the others are left out. Depending on which one is chosen, slightly different proteins can be produced from the same gene. Only in recent years, however, has the full significance of this discovery become apparent. Alternative splicing is not a rare or occasional event. It seems to occur in approximately half of all human genes; it can even involve the splicing in of exons from other genes; and in some cases it produces not just one or two variants from the same gene but hundreds or even thousands.

In February 2000, Larry Zipursky had asked one of his graduate students, Huidy Shu, to look at a molecule called Dscam, a gene product recently purified in the fly by Jim Clemens and shown by Dietmar Schmucker to be required for guiding fruit-fly neurons to their targets in the brain. In one small region, the fly gene looked disappointingly different from its human equivalent, a gene that probably causes some of the symptoms of Down syndrome by an unknown mechanism (Dscam stands for Down syndrome cell-adhesion molecule). Shu began looking for alternative forms of Dscam that might contain regions of sequence similar to the human gene; and while no such sequence was identified, every one of the 30 or so forms of Dscam that Shu sequenced was—surprisingly—different. Then suddenly, for the first time, the entire fruit-fly genome became available over the Internet from the Celera corporation. That weekend Shu and Clemens used the database to read the Dscam gene. They could not believe their eyes when the result of the search came through. There were not a few alternative exons; there were 95. Of the 24 exons in the gene, four existed in alternative versions: exon 4 came in 12 different versions, exon 6 in 48, exon 9 in 33, and exon 17 in two. This meant that if the gene were to be spliced into every possible combination of exons, it could produce 38,016 different kinds of protein—from one gene!
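The arithmetic behind that figure of 38,016 is straightforward multiplication: each mature message picks one version of each variable exon independently, so the counts multiply. A quick sketch, using the exon counts given above:

```python
# Dscam's variable exons: 12 versions of exon 4, 48 of exon 6,
# 33 of exon 9, and 2 of exon 17 (95 variable exons in all).
variable_exons = {"exon 4": 12, "exon 6": 48, "exon 9": 33, "exon 17": 2}

# If each spliced message independently picks one version of each
# variable exon, the possible combinations multiply together.
combinations = 1
for versions in variable_exons.values():
    combinations *= versions

print(sum(variable_exons.values()))  # 95 alternative exons
print(combinations)                  # 38016 possible proteins
```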

News of the Dscam discovery spread quickly through the community of geneticists. Many genome experts found it rather depressing, for it suddenly made the situation much more complicated. If a single gene could make thousands of proteins, then listing human genes would be only the very beginning of the task of listing the number of proteins they could produce. On the other hand, such complexity made nonsense of the argument that the comparatively few genes in the human genome meant the genome was too simple to explain human nature, and so people must be the product of experience instead. Those who argued this way were suddenly hoist with their own petard. Having argued that a genome of 30,000 genes was too small to determine the details of human nature, they would have to admit that a genome which could produce hundreds of thousands, perhaps even millions, of different proteins had easily enough combinatorial capacity to specify human nature in excruciating detail, without even bothering to use nurture.

It is important not to get carried away. Few other alternatively spliced genes show such potential diversity. At the time of writing none of the several human versions of Dscam has yet proved to be alternatively spliced at all, let alone to such a degree. Nor is it yet known that fruit flies make all 38,016 of the proteins that they could make from Dscam. It remains possible that all 48 versions of exon 6 are functionally interchangeable. But Zipursky already knows that different alternatives of exon 9 are found preferentially in different tissues, and he suspects that the same may be true of the other exons. There is a pervasive feeling among the scientists working on this topic that they are scratching at the door of a chamber of secrets. How genes splice themselves and how RNA behaves in the cell may hold the key to some fundamentally new biological principles.

In any case, Zipursky hopes he may have hit upon a molecular basis for cell recognition: for how neurons find each other in the crowded brain. Dscam is similar in structure to an immunoglobulin, a highly variable protein used in the immune system to identify many different pathogens. Recognizing pathogens might be rather similar to recognizing neurons in the brain. Cadherins and another kind of cell-adhesion molecule used in the brain—protocadherins—also exhibit immunoglobulin-like features. They, too, use alternative splicing, which could give them highly specific identity badges. Moreover, the proteins they produce all stick out of cells, waving their variable tails, and stick to each other by matching those tails. Once stuck together with a similar protein from another cell, the tails form a rigid bridge. This looks increasingly like a system whereby like finds like: cells that express the same exons can bind together and form synaptic connections.

In particular, the protocadherins look highly intriguing. Their genes are arranged, head to tail, in three clusters on human chromosome 5, nearly 60 genes in all. Each gene contains a string of variable exons from which to choose, and each exon is controlled by a separate promoter. Protocadherins may even rearrange their genetic message by alternative splicing not within one gene transcript but between different gene transcripts. This gives the brain potentially not thousands but billions of different protocadherins. Neighboring cells in the brain of very similar types end up expressing slightly different protocadherins. “Protocadherins may therefore provide the adhesive diversity and molecular code for specifying neuronal connections in the brain,” according to two of their champions at Harvard.

More than 40 years ago a neuroscientist, Roger Sperry, set out to topple the prevailing consensus, championed by his own supervisor, that the brain was created by learning and experience from an undifferentiated, almost random network of neurons. On the contrary, he found that a nerve gets its identity early in development and cannot easily be reprogrammed. By severing and regenerating nerves in salamanders, he proved that each neuron finds its way to the same place as its predecessor. By rewiring the brains of rats and frogs, he proved that there was a limit to the plasticity of the animal mind: a rat rewired so that its right foot was now connected with the nerves from its left would continue to move its left foot if the right foot was stimulated. By stressing the determinism in the nervous system, Sperry brought about a nativist revolution in neuroscience that paralleled Chomsky’s in psychology. Sperry even postulated that each neuron would have a chemical affinity for its target and the brain would prove to be built by a large number of variable recognition molecules. In this he was far ahead of his time (his Nobel Prize was for other, lesser work).


The story of development, then, seems at first to lead to a conclusion rather different from that which Piaget and Lehrman expected. Just as the study of twins was expected to reveal a large role for the environment and a small role for genes but found the opposite, so development seems to be a rather well determined process planned and plotted by genes. Am I to conclude that nature wins this particular argument and that the developmentalist’s challenge therefore fails?

No. For one thing, a deterministically constructed machine can still be modified. My computer has exquisitely specified circuitry, but that does not stop it from modifying the activity of its connections in response to a new program. Besides, neural plasticity is back in fashion since Sperry’s day. This is partly because of a rebound, which is typical in the nature-nurture issue: today’s scientists are reacting to what they see as excessive nativism, just as Sperry was reacting to what he saw as excessive empiricism. But there is more to it than that. For many years it was orthodoxy, apparently proved by the neuroscientist Pasko Rakic, that animals grew no new neurons in the cortex of the brain after reaching adulthood. Then Fernando Nottebohm found that canaries make new neurons when they learn new songs. So Rakic said that mammals grow no new neurons, whatever birds do. Then Elizabeth Gould found that rats do. So Rakic retreated to primates. Gould found new neurons in tree shrews. So Rakic said it was higher primates. Gould found them in marmosets. So it was Old-World primates that could not grow them. Gould found them in macaques. Now it is certain that all primates, including human beings, can grow new cortical neurons in response to rich experiences, and lose neurons in response to neglect. There is ample and growing evidence that, for all the determinism in the initial wiring of the brain, experience is essential for refining that wiring. In Kallmann syndrome, the olfactory bulbs wither away for lack of use. The old public accounting principle for how to handle a government grant—“use it or lose it”—seems to apply to the mind as well.

Notice a tendency to accentuate the negative. The best way to prove the importance of experience is to deprive an animal of it. In the visual cortex, an eye blindfolded at birth soon loses its receptive field in the brain to the other eye (more on this in ). However, as I write, Hollis Cline has just produced the first experimental evidence of how experience positively affects the development of the brain. She studies the way a neuron from the eye behaves when it nears its target in the brain. Far from homing in on its goal in a predetermined way, it throws out a whole “arbor” of feelers, many of which are soon retracted. It seems to be looking for connections that “work”—connections between like-minded neurons that fire together. Cline compared neurons in the visual system of a developing tadpole after four hours of light stimulation or four hours of dark and showed that the cell had thrown out far more feelers looking for contacts in the light. “I’ve got a stimulus,” the neuron cries, “I want to share the news.” This may be how experience actually affects the development of the brain, just as Piaget argued. Cline’s colleague Karel Svoboda has actually watched through a window in the skull as synapses between the brain cells of a mouse form and dissolve in response to experience.

The whole point of education is surely to exercise those brain circuits that might be needed in life—rather than to stuff the mind full of facts. Thus exercised, they flourish. Astonishingly, this is something human beings share with microscopic worms. The nematode worm Caenorhabditis elegans is the reductionist’s delight. It has no brain and exactly 302 neurons—wired up according to a rigid program. It seems like one of the least likely candidates for even the simplest form of learning, let alone developmental plasticity and social behavior. Its behavior consists of not much more than wriggling forward and wriggling backward. Yet if such a worm repeatedly finds food at a certain temperature, it registers this fact and thenceforth shows a preference for that temperature; if unrewarded at this temperature, it gradually loses its temperature preference. Such flexible learning is under the influence of a gene called NCS-1.

Not only can nematode worms learn; they can also develop different adult “personalities” according to their social experience during infancy. Cathy Rankin sent some worms to school (i.e., reared them together in a single Petri dish) and kept others at home (i.e., alone in a dish). She then tapped the side of the dish, causing the worms to reverse the direction of their movement. The social worms, which were used to running into each other, were much more sensitive to the tapping than the solitary worms.

Rankin had engineered certain genes inside the worm so that she could study exactly which synapses between which neurons were responsible for the difference between the social and the solitary worms. The differences showed up as weaker glutamate synapses between certain sensory neurons and “interneurons.” Intriguingly, she found that the very same synapses could be altered during learning. After 80 taps, worms of both kinds became habituated to the fact that they lived in a vibrating world and gradually lost their tendency to reverse direction: they had learned. Both learning and schooling exerted their effects at the same synapses, and they did so by altering the expression of the same genes.

To prove that the development of behavior in a humble worm is environmentally plastic in this way rather underlines the developmentalist’s challenge. If an organism with no brain and just 302 neurons can benefit from going to school, then how much greater will be the effect of such contingencies in human upbringing. It is abundantly clear that early social enrichment has long-lasting and irreversible effects on the behavior of mammals. In the 1950s Harry Harlow (of whom more in ) discovered accidentally that a female monkey reared in an empty cage with just a wire model of a mother for company and no peers to play with will grow up to be a neglectful mother herself. She treats her babies as if they were large fleas. She has been somehow imprinted with the impoverished experience of her childhood and passes it on.

Likewise, baby mice separated from their mothers, or handled by human beings, are permanently affected by the experience. Isolated offspring grow up to be anxious, aggressive, and slightly more vulnerable to drug addiction. A mouse that was licked a lot by its mother as a baby tends to lick its own pups a lot, and cross-fostering reveals that this is inherited nongenetically—an adopted mouse will behave more like its nursing mother than like its biological mother. There is little doubt that these effects are mediated through genes in the baby mouse.

A female mouse presented with pups will ignore them at first but will gradually become maternal toward them. The speed with which this response occurs varies greatly between mice, and again a mouse that was licked a lot as a baby will respond more quickly. The work of Michael Meaney suggests that the genes involved are those for oxytocin receptors, which are switched on more easily in the mice that were well licked as babies. Somehow, the early licking alters the sensitivity of these genes to estrogens. Quite how this works is not known, but it may involve the dopamine system of the brain, dopamine being a mimic of estrogen. The plot thickens, because early maternal neglect definitely changes the expression of genes involved in the development of the dopamine system, which apparently accounts for the fact that animals from a deprived background are more easily addicted to certain drugs—drugs reward the mind through the dopamine system.

Darlene Francis in Tom Insel’s lab took two strains of mice and swapped them before and after birth. Mice of the C57 strain, transplanted just after fertilization, were nurtured in the wombs of mice of either their own strain or the BALB strain and then reared either by BALB or C57 mothers. After all this cross-fostering, the mice were tested for their skills at various standard tests which all mice living in laboratories are habitually required to take. One test involves finding a hidden platform on which to stand in a milky swimming pool and then remembering where it is. Another test involves plucking up the courage to explore when dropped in the middle of an open space. A third test involves exploring a cross-shaped maze in which two of the arms are closed and two open. The inbred strains of mice consistently differ in their performance on these tests, implying that genes prescribe their behavior. BALB mice spend less time in the middle of the open field, spend more time in the closed arms of the cross, and recall faster where to find the hidden platform than C57 mice. In the cross-fostering experiment, the C57 mice cross-fostered to C57 mothers either before or after birth behaved just like normal C57 mice. But C57 mice cross-fostered to BALB mothers just after fertilization and then reared by BALB mothers behaved just like BALB mice. Like Meaney’s rats, the BALB mothers lick their pups less than the C57 mothers, and seem thereby to change the pups’ natures. But this effect of maternal behavior depends on growing up in a BALB womb. C57 pups from a C57 womb that are cross-fostered to a BALB mother after birth look just like other C57 mice and not at all like BALB mice. As Insel puts it, Mother Nature meets Mother Nurture.

These are stunning discoveries. They hint at enormous sensitivity in the development of the mammal brain to how its owner is treated in the womb and soon after birth, but they also suggest that these effects are mediated through the animal’s genes. It is a striking example of Lehrman’s point that development matters to the outcome in adulthood. Indeed, it goes further than Lehrman did in revealing how genes are at the mercy of the behavior of other animals in the environment, especially parents. As usual, it supports neither an extreme “nurture argument” (because it is a phenomenon made possible by the actions of genes) nor an extreme “nature argument” (because it shows how plastic the expression of genes can be). It reinforces my message that genes are servants of nurture as much as they are servants of nature. It is a beautiful example of how the GOD includes in the job description of some genes the following admonition: during development you should at all times be ready to absorb information from the environment outside your parent organism and adjust your activity accordingly.


“Hasn’t it ever occurred to you that an Epsilon embryo must have an Epsilon environment as well as Epsilon heredity?” So speaks the Director of Hatcheries and Conditioning in Aldous Huxley’s novel of 1932, Brave New World. He is showing students the Predestination and Decanting Rooms in the hatchery, where artificially inseminated human embryos are reared in different conditions to produce different castes of society: from brilliant alphas to factory-fodder epsilons.

Rarely has a book been more misrepresented than Brave New World. It is today almost automatically assumed to be a satire on extreme hereditarian science: an attack on nature. In fact it is all about nurture. In Huxley’s imagined future, human embryos, having been artificially inseminated and in some cases cloned (“Bokanovskified”), are then developed into members of the various castes by a careful regimen of nutrients, drugs, and rationed oxygen. This is followed, during childhood, by incessant hypnopedia (brainwashing during sleep) and neo-Pavlovian conditioning until each person emerges certain to enjoy the life to which he or she has been assigned. Those who work in the tropics are conditioned to heat; those who fly rocket planes are conditioned to motion.

The highly “pneumatic” heroine Lenina is predestined—by what was done to her in the hatchery and in school, not by her genes—to enjoy flying, dates with the assistant predestinator, casual sex, rounds of obstacle golf, and doses of the happiness drug, Soma. Her admirer, Marx, rebels against such conformity only because alcohol was mistakenly added to his blood-surrogate before birth. He takes Lenina to a Savage Reservation in New Mexico for a holiday; there they meet Linda, a white “Savage,” and her son, John, whom they bring back to London to confront John’s father, who turns out to be the director of hatcheries and conditioning himself. John, autodidactically educated by a volume of Shakespeare, longs to see the civilized world, but becomes rapidly disillusioned with it and retires to a lighthouse in Surrey, where he is tracked down by a filmmaker. Goaded by intrusive spectators, he hangs himself.

Although there are drugs to keep people happy, and hints of heredity, the details of Brave New World, and the features that make it such a horrific place to live, are the environmental influences exercised upon the development of the bodies and brains of the inhabitants. It is a nurture hell, not a nature hell.
