Book: The Agile Gene

Previous: 6. Formative Years
Next: 8. Conundrums of Culture

“All men are similar, in soul as well as body. Each of us has a brain, spleen, heart and lungs of similar construction; and the so-called moral qualities are the same in all of us—the slight variations are of no importance…. Moral diseases are caused by the wrong sort of education, by all the rubbish people’s heads are stuffed with from childhood onwards, in short by the disordered state of society. Reform society and there will be no diseases…. At any rate, in a properly organized society it won’t matter a jot whether a man is stupid or clever, bad or good.”

“Yes, I see. They will have identical spleens.”

“Precisely, madame.”

Bazarov and Madame Odintsov, in Fathers and Sons, by Ivan Turgenev.

In 1893 Alfred Nobel, the Swedish inventor of dynamite, was beginning to feel his age. Over 60 and not in good health, he heard rumors that miraculous rejuvenation might be achieved with transfusions of blood from giraffes. When rich men are in this kind of mood, the astute scientist gets out the begging bowl. Nobel was duly persuaded to pay 10,000 rubles for a grand new physiology building for Russia’s Imperial Institute of Experimental Medicine outside Saint Petersburg. Nobel died anyway in 1896 and the laboratory never bought a giraffe, but it went from strength to strength. With a staff of over 100, and managed like a business, it was a sort of scientific factory. In charge was an ambitious and confident young man named Ivan Petrovich Pavlov.

Pavlov was a disciple of Ivan Mikhailovich Sechenov, who was so obsessed with reflexes that he believed thought was nothing but a reflex with the action missing. He was as dedicated to the cause of nurture as his contemporary Galton was to the cause of nature: he believed that “the real cause of every activity lies outside man” and that “999/1,000 of the contents of the mind depends on education in the broadest sense, and only 1/1,000 depends on individuality.”

Sechenov’s philosophy guided much of the torrent of experimental work that poured from Pavlov’s factory over the next three decades. The victims of these experiments were mostly dogs, or “dog technologies” as they were rather coldly called. At first Pavlov concentrated on the digestive glands of the dog; later he began to move into the brain. In 1903 at a conference in Madrid, he announced the results of his most famous experiment. It had begun, like so much great science, serendipitously. He was trying to study the dog’s salivation reflex in response to food and had diverted one of a dog’s salivary glands into a funnel so he could measure the production of saliva. The dog, however, would start salivating as soon as it heard the food being prepared, or even as soon as it was strapped into the apparatus—anticipating the food.

This “psychic reflex” was not what Pavlov was after, but he suddenly saw its significance and switched his attention to it. The dog was now led to expect food whenever it heard a bell or a metronome, and it soon began to salivate at the sound of the bell alone. Because Pavlov had diverted its salivary gland into a funnel, he could actually count the drops of saliva produced in response to each ring of the bell. Later he proved that a dog with no cerebral cortex could still reflexively salivate when fed, but not when alerted by the bell. The “conditioned reflex” to the bell therefore lay in the cortex itself.

Pavlov seemed to have discovered a mechanism—conditioning, or association—by which the brain could acquire knowledge of the regularities of the world. It was a great discovery, it was right, and of course it was not the whole answer. But as usual, some of Pavlov’s followers went too far. They began to assert that the brain was nothing but a device for learning through conditioning. This tradition flowered in the United States as behaviorism. Its champion was John Broadus Watson, of whom more later.

Modern learning theorists have modified Pavlov’s idea in one crucial way. They argue that the active learning occurs not when the stimulus and reward continue to appear together, but when there is some discrepancy between an expected coincidence and what actually happens. If the mind makes a “prediction error”—expecting a reward after a stimulus and not getting it, or vice versa—then the mind must change its expectation: it must learn. So, for example, if the bell no longer predicts the food, but a flash of light now does predict the food, the dog must learn from the discrepancy between its own expectations and the new reality. Surprise, pleasant or unpleasant, is more informative than predictability.
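This prediction-error rule (in learning theory it is usually credited to Rescorla and Wagner) is simple enough to sketch in a few lines of code. The sketch below is purely illustrative; the function names and numbers are mine, not anything from the experiments described here. The expectation attached to a stimulus is nudged toward the actual outcome by a fraction of the error, so a perfectly predicted reward teaches nothing, while a surprise, pleasant or unpleasant, changes the expectation.

```python
# A toy sketch of prediction-error (Rescorla-Wagner style) learning.
# Names and numbers are illustrative, not taken from the experiments above.

def update(expectation, outcome, learning_rate=0.3):
    """Nudge the expectation toward what actually happened."""
    prediction_error = outcome - expectation  # the "surprise", signed
    return expectation + learning_rate * prediction_error

bell = 0.0  # how strongly the bell predicts food, from 0 to 1

# Acquisition: the bell is reliably followed by food.
for _ in range(10):
    bell = update(bell, outcome=1.0)  # positive surprise shrinks each trial

# Extinction: the bell sounds but the food stops coming.
for _ in range(10):
    bell = update(bell, outcome=0.0)  # negative surprise unlearns the link
```

Run forward, the expectation attached to the bell climbs toward 1 while the bell reliably predicts food, then decays back toward 0 once the food stops arriving: extinction, described arithmetically.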

This new emphasis on prediction errors now takes physical form in the brain as well as psychological form in the mind. In a series of experiments on monkeys, Wolfram Schultz has discovered that dopamine-secreting neurons in a certain part of the brain (the substantia nigra and ventral tegmental area) react to surprise, but not to predicted effects. They fire more when the monkey is unexpectedly rewarded and less when it is unexpectedly deprived of a reward. The dopamine cells themselves, in other words, actually encode the same rule of learning theory that engineers now try to build into robots.

Pavlov, the indefatigable dissector of dogs, would have enjoyed such a reductionist result. But he might have been made uneasy by a philosophical irony this result leads to. He was out to prove that the dog’s brain learned about its situation from the world, that in Sechenov’s words “the real cause … lies outside man.” He stood in a long tradition of empiricism stretching back through Mill and Hume to Locke: human nature was largely the scribbling of experience on the blank sheet of the mind. Yet for the mind to scribble on its sheet, it must have dopamine neurons specially designed to respond to surprise. And how are they so designed? By genes. Today the precise equivalent of the experiment that Pavlov performed is being done, routinely, in many of the top genetics laboratories of the world, because Pavlov’s modern descendants are busy proving the role that genes play in learning. Here lies the proof of this book’s theme: genes are not only involved in nature; they are just as intimately involved in nurture.

The modern Pavlovian experiment is often done with fruit flies, but the principle is identical. A fly is given an electric shock through its feet shortly after a puff of smelly chemical is squirted into its test tube. Pretty soon the fly learns that the smell will be followed by the shock, so it takes to the air before the shock arrives: it has made the (initially surprising) association between the two phenomena. This experiment was first done by Chip Quinn and Seymour Benzer in the 1970s at the California Institute of Technology. It proved, to universal surprise, that flies can learn and remember associations between smells and shocks.

It also proved that they can only do so if they have certain genes. Mutants missing a crucial gene just don’t get the point. There are at least 17 genes that are essential to the laying down of a new memory in the fruit fly. These genes have pejorative names—dunce, amnesiac, cabbage, rutabaga, and so on—which is a bit unfair, since the fly is a dunce only if it lacks the gene, not if it has it. Recognizably the same set of so-called CREB genes is used by all animals including human beings. The genes must be turned on—that is, they must create a protein—during the learning process itself.

This is an astonishing discovery, rarely appreciated for quite how shocking it is. Here is what John B. Watson said about associative learning in 1914:

Most of the psychologists talk quite volubly about the formation of new pathways in the brain, as though there were a group of tiny servants of Vulcan there who run through the nervous system with hammer and chisel digging new trenches and deepening old ones.

Watson was mocking the idea. But the joke is on him. The formation of a mental association takes the form of new and strengthened connections between neurons. The servants of Vulcan that create those connections exist. They are called genes. Genes—those implacable puppet masters of fate that are supposed to make the brain and leave it to get on with the job. But they do not; they also actually do the learning. Right now, somewhere in your head, a gene is switching on, so that a series of proteins can go to work altering the synapses between brain cells so that you will, perhaps, forever associate reading this paragraph with the smell of coffee seeping in from the kitchen…

I cannot emphasize the next sentence strongly enough. These genes are at the mercy of our behavior, not the other way around. The things that make Pavlov’s associations are made of the same stuff as the chromosomes that carry heredity. Memory is “in the genes” in the sense that it uses genes, not in the sense that you inherit memories. Nurture is affected by genes just as much as nature is.

Here follows one example of such a gene. In 2001 Josh Dubnau, working with Tim Tully, did an exquisite experiment on fruit flies. Please wallow in the details of the methods for a few moments, just to appreciate the sophistication of the tools available to modern molecular biology (and then pause to reflect on just how much more sophisticated they will be in a few years’ time). First, he made a temperature-sensitive mutation in a particular fly gene, called shibire, the gene for a motor protein called dynamin. This means that at 30°C the fly is paralyzed, but at 20°C it recovers completely. Next, Dubnau engineered a fly in which this mutant gene is active only in the output from one part of the fly’s brain, called the mushroom body, which is essential for learning to associate smells with shocks. This fly is not paralyzed at 30°C, but it cannot retrieve memories. When such a fly is trained, while hot, to pair a smell with danger, then asked, when cool, to retrieve the memory, it performs well. In the opposite circumstance, when the fly is asked to form the memory while cool and retrieve it while hot, it cannot.

Conclusion: the acquisition of a memory is distinct from its retrieval; different genes are needed in different parts of the brain. The output from the mushroom body is necessary for retrieval but not for acquisition of memory, and the switching on of a gene is necessary for that output. Pavlov may have dreamed that one day somebody would understand the wiring in the brain that explained associative learning, but he surely could not have imagined that somebody would go still deeper and describe the actual molecules, let alone find that the key to the process, minute by minute, lies in Gregor Mendel’s little particles of heredity.

This is a science in its infancy. Those who study the genes involved in learning and memory have struck a rich seam to mine. Tully, for instance, has now set himself the immense task of understanding how these genes of memory alter some of the synapses between their home neuron and its neighbor while leaving other synapses untouched. Each neuron has on average 70 synapses connecting it to other cells. Somehow, in the cell nucleus, the CREB gene on chromosome 1 has the job of switching on a set of other genes, and those other genes must then send their transcripts to just the right synapses where they can be used to change the strength of the connection. Tully has at last found a way to understand how that is done.

Yet CREB is only part of the story. Seth Grant has found evidence that many of the genes necessary for learning and memory are more than simply part of a sequential network; in effect they make up a machine, which he calls a Hebbosome (for reasons that will become clear later). One such Hebbosome consists of at least 75 different proteins—that is, the products of 75 genes—and appears to work as a single complex machine.


I promised to return to John B. Watson. Reared in poverty and isolation in rural South Carolina, Watson was the son of a devout mother and a philandering father who left home when Watson was 13. This background gave him—either through genes or experience—a strong and truculent character. He was a violent adolescent, a faithless husband, and a domineering father, who drove a son to suicide and a granddaughter to drink and eventually became a bitter recluse in retirement. He also caused a revolution in the study of human behavior. Frustrated by the waffling that passed for psychology, in 1913 he outlined a bold manifesto for reform in a lecture entitled “Psychology as the Behaviorist Views It.”

Introspection, he announced, must cease. According to legend, Watson was disgusted to be asked to imagine what went on in the mind of a rat as it ran through a maze. He suffered from physics envy. The science of psychology must be put on an objective foundation. Behavior, not thought, was what counted. “The subject matter of human psychology is the behavior of the human being.” In other words, the psychologist should study what went into the organism and what came out, not the processes in between. The principles that governed learning could be derived from any animal and applied to people.

Watson drew his ideas from three main streams of thought. William James, though himself a nativist, had stressed the role of habit formation in human behavior. Edward Thorndike had gone further, coining his “law of effect,” whereby animals repeated actions that produced pleasant results and did not repeat actions that had unpleasant consequences. The idea also goes under other names: reinforcement learning, trial-and-error learning, instrumental conditioning, and operant conditioning (these psychologists love their jargon). In Thorndike’s experiments, a cat had found the lever to open the door to its cage by trial and error; within a few trials it knew exactly how to open the door. Though Pavlov’s work was not translated until 1927, Watson knew of it from his friend Robert Yerkes and saw immediately that Pavlovian or classical conditioning was a centerpiece of learning. At last, here was a psychologist as rigorous as the physicists: “I saw the enormous contribution Pavlov had made, and how easily the conditioned response could be looked upon as the unit of what we had all been calling HABIT.”

In 1920, Watson and his assistant Rosalie Rayner performed an experiment which convinced him that emotional reactions could be conditioned, and that human beings could be treated as large, hairless rats. It was an immensely influential experiment. A word about Rayner is relevant here. She was the 19-year-old niece of a prominent senator famous for conducting hearings into the sinking of the Titanic. She was beautiful and rich, and she drove around Baltimore in a Stutz Bearcat. Watson fell in love with her and she with him. Watson’s wife found a love letter from Rayner in his coat, but she was advised by a lawyer to see if she could find a letter from him, not to him, before confronting him. So she went around to the Rayners’ house for coffee; once there she feigned a headache and asked to lie down. Upstairs, she quickly locked herself in Rosalie’s bedroom and searched it, finding 14 love letters from her husband. The ensuing scandal cost Watson his academic career. He divorced his wife, married Rayner, and left psychology for an advertising career with J. Walter Thompson, where he devised a successful campaign for Johnson’s baby powder and persuaded the queen of Romania to endorse Pond’s face cream.

The subject of these lovebirds’ experiment in 1920 was a little child called Albert B, who had been reared from birth in a hospital. (It has been claimed that Albert was Watson’s illegitimate child by a nurse, but I can find no proof of this.) When Albert was eleven months of age, Watson and Rayner showed him a series of objects including a white rat. None of the objects frightened Albert; he enjoyed playing with the rat. But when they suddenly banged a hammer on a steel bar, Albert cried, not unreasonably. The two psychologists then began banging the bar whenever Albert touched the rat. Within a few days Albert was likely to start crying as soon as the rat appeared, a conditioned fear response. He was now frightened of a white rabbit, too, and even a sealskin coat, apparently having transferred his fear to any white, furry thing. With characteristic sarcasm, Watson announced the moral of the tale:

The Freudians, twenty years from now, unless their hypotheses change, when they come to analyze Albert’s fear of a sealskin coat—assuming he comes to analysis at that age—will probably tease from him the recital of a dream which upon their analysis will show that Albert at three years of age attempted to play with the pubic hair of the mother and was scolded violently for it.

By the mid-1920s Watson was convinced not that conditioning was a part of how humans learned about the world but that it was the main theme. He joined a growing academic trend toward enthusiasm for nurture over nature and made an extraordinary claim:

Give me a dozen healthy infants, well-formed, and my own specified world to bring them up in and I’ll guarantee to take any one of them at random and train him to become any type of specialist I might select—doctor, lawyer, artist, merchant-chief, and yes, even beggar-man and thief, regardless of his talents, penchants, tendencies, abilities, vocations and race of his ancestors.


Ironically, five years before Watson’s claim a very powerful man had had the same thought: Vladimir Ilyich Lenin. Like Pavlov, Lenin was influenced by the environmentalism of Sechenov, which he learned of through the writings of Nikolai Chernyshevsky. Two years after the Russian revolution, Lenin is said to have paid a secret visit to Pavlov’s physiology factory and asked him if it was possible to engineer human nature. No record of the meeting survives, so Pavlov’s views on the matter are unknown. Perhaps he had more pressing concerns: with the famine induced by the civil war, the institute’s dogs were starving, and the researchers could keep them alive only by sharing their meager rations with them. Pavlov had begun to cultivate his own vegetable patch at the institute, leading by example and driving his students to feats of horticulture as energetically as he had driven them to feats of science. No hint of political encouragement to Lenin from Pavlov comes down to us. Pavlov was an outspoken critic of the revolution, though he mellowed when shown favor by the commissars.

Lenin could undoubtedly see that the success of communism rested on an assumption that human nature could be trained to a new system. “Man can be corrected,” he said. “Man can be made what we want him to be.” Echoed Trotsky: “To produce a new, ‘improved version’ of man—that is the future task of Communism.” Much Marxist debate revolved around the question of how long it would take to produce a “new man.” Such an aim makes no sense unless human nature is almost entirely malleable. In this sense, communism always had a vested interest in nurture rather than nature. But the state was slow to put this idea into practice. In the 1920s, even the Soviet Union was caught up in the global enthusiasm for eugenics. N. A. Semashko outlined an ambitious program of socialist eugenics in 1922, celebrating the appalling idea that eugenics “will place the interests of the whole society, of the collective, first, above the interests of the individual persons.” The “new man” was to be bred. But under Stalin, Soviet eugenics collapsed, as communist leaders realized that not only would this take several generations, but preserving the intelligentsia by selective breeding rather contradicted the general secretary’s increasingly obvious preference for persecuting intellectuals. After the Nazis came to power in Germany, there was another reason to reject eugenics: the study of human heredity was equated with the rival creed of fascism. Russian eugenicists were soon criticized for their hereditarian beliefs—for not “grasping the social levers.”

The person who would grasp the social levers came from an unexpected direction. In the 1920s, with Russia in the grip of famine, the government discovered Ivan Vladimirovich Michurin, an elderly and paranoid crank who bred apples near Kozlov. Michurin made absurd claims—that he could make a pear sweeter in the second generation by watering it with sugar water, or that grafting produced a hybrid stock. He suddenly found himself showered with honors and grants by a government desperate for quick ways of boosting food production. Michurinism was promoted as a new science to replace Mendelism.

The scene was set for a scientific coup. A young man called Trofim Denisovich Lysenko managed to catch the attention of Pravda because he was apparently able to breed a better crop of wheat by Michurinist means. At the time, winter-sown wheat was killed by winter frost except in the far south of the country, while spring-sown wheat sometimes came into ear too late and was killed by drought. Lysenko at first claimed to have bred hardy winter wheat by “training” it. By 1928–1929, seven million hectares of wheat were planted with his technique: it all died. Unfazed, Lysenko switched to spring wheat, claiming that simple soaking—vernalization—would make it quick to ear. Again this merely exacerbated the famine. By 1933 vernalization had been dropped.

But Lysenko, who was better at politics than science, went from strength to strength and was soon touting his ideas as a new form of science that disproved the theory of the gene and demolished the tenets of Darwinism. Mutual aid, not competition, was the key to evolution, he said. Genes were a metaphysical fiction; reductionism was a mistake. “There is in an organism no special substance apart from the ordinary body…. We deny little pieces, corpuscles of heredity.” (After 1961 Russian scientists were allowed to study DNA, but Lysenko, in his confused way, argued that the double helix was a foolish notion: “It deals with the doubling, but not the division of a single thing into its opposites, that is, with repetition, with increase, but not with development.”) Lysenkoism was an organic, “holistic” science and a “hymn to the natural union of men with their living environment.” Its adherents remained disdainful of demands for data to prove its claims, preferring bucolic folk wisdom.

Throughout the 1930s, Lysenko’s followers fought an increasingly bitter battle within Soviet biology for supremacy over the geneticists. Gradually they gained the upper hand, and in 1948 Lysenko at last won full support from the state. Genetics was suppressed; geneticists were arrested, and many died. The death of Stalin in 1953 made no difference, Khrushchev being an old friend and supporter of Lysenko. Yet it was increasingly obvious to Russian scientists—though not to many foreign biologists, who continued to apologize for Lysenko—that the man was a nut. Literally: he claimed to have created a hornbeam tree that bore hazelnuts. (He also claimed to have developed a wheat plant that grew rye seeds, and to have seen cuckoos hatching from warblers’ eggs.)

Lysenko fell with Khrushchev in 1964. Indeed, he was part of the reason Khrushchev fell. Lysenkoism was on the agenda of the meeting of the Central Committee that deposed Khrushchev, and the stagnation of agricultural yields since 1958 was the main charge against the party leader. Lysenko was disgraced, but the criticism was muted for many years. His science vanished without a trace.


This agricultural story may seem to have little to do with human nature. After all, as David Joravsky, a historian of Lysenkoism, has put it, “any resemblance to genuinely scientific thought was purely accidental.” Yet it provides the background against which all Soviet biology operated. The extreme nurturism that began long before the revolution with Sechenov and reached its apogee under Lysenko set the tone for much of the century in Russia. And, consciously or not, it was echoed throughout the West. The insights of Pavlov and Watson into how learning occurred were somehow taken by many as proof that nothing but learning occurred in people. Marxism explicitly endorsed human exceptionalism, arguing that human history had switched from biology to culture at a specific moment. (“Man, thanks to his mind, ceased long ago to be an animal,” said Lysenko.) Marx was also credited with transcending the antinomy between “is” and “ought”—the famous naturalistic fallacy of David Hume and G. E. Moore. By the late 1940s the related notions that human beings were products of nurture and culture, in sharp contrast to animals, and that this was a moral as well as a scientific necessity, were widespread throughout the West as well as the socialist world.

“If genetic determinism is true,” wrote Stephen Jay Gould, “we will learn to live with it as well. But I reiterate my statement that no evidence exists to support it, that the crude versions of past centuries have been conclusively disproved, and that its continued popularity is a function of social prejudice among those who benefit most from the status quo.” This reasoning led to trouble. As biologists from Ernst Mayr to Steven Pinker have argued, it is not just mistaken to base policy and morality on an assumption of malleable human nature—it is dangerous. As soon as biologists began to discover that there was a degree of innate, genetic causation behind behavior, then another argument would have to be invented for morality. Said Pinker:

Once [social scientists] staked themselves to the lazy argument that racism, sexism, war and political inequality were logically unsound or factually incorrect because there is no such thing as human nature (as opposed to morally despicable, regardless of the details of human nature), every discovery about human nature was, by their own reasoning, tantamount to saying that racism, sexism, war and political inequality were not so bad after all.

I shall repeat myself in order to be absolutely clear. There is nothing factually wrong with arguing that human beings are capable of learning, or being conditioned to associate stimuli, or reacting to reward and punishment or any other aspect of learning theory. These are facts and vital bricks in the wall I am building. But it does not follow that therefore human beings have no instincts, any more than it would follow that human beings are incapable of learning if they have instincts. Both can be true. The error is to be an either-or person, to indulge in what the philosopher Mary Midgley calls “nothing buttery.”

The high priest of nothing buttery was Burrhus Frederic Skinner, a follower of Watson, who took behaviorism to new heights of dogmatism. The organism, said Skinner, was a black box that need not be opened: it merely processed signals from the environment into an appropriate response, adding nothing from its innate knowledge. Skinner, even more than Watson, defined psychology by what was not true about human nature: that people did not have instincts. Even when, late in his life, he admitted that human behavior had an innate component, he equated it with destiny—innate features “cannot be manipulated after the individual is conceived”—once again proving my point that the critics of innateness have a much more determinist model of genes in mind than its supporters. The nurturists were more fatalist about genes than the naturists.

I struggle to stay positive when reading Skinner. His experiments on operant conditioning were undoubtedly brilliant; his invention of the Skinner box, in which a pigeon could be rewarded or punished according to an experimental schedule, was a technological marvel; his intellectual honesty was undoubted. Unlike some behaviorists, he did not pretend that environmentalism is not determinism. In my own life I frequently obey his tenets. I behave like a pigeon in a Skinner box when I go fly fishing: it was Skinnerians who discovered that an unpredictable random reward schedule is exceptionally effective in keeping the pigeon pecking at the symbol or the fisherman casting into the current. I behave like a Skinner box itself whenever I try to condition my children’s table manners using reward and punishment.

Yet I cannot admire a man who regularly confined his own daughter Debby to a sort of Skinner box for the first two years of life. The “air crib” was a soundproof box with a window, supplied with filtered, humidified air, from which the little girl emerged only for scheduled playtimes and meals. Skinner also published a book attacking freedom and dignity as outmoded concepts. In 1948, a year before George Orwell’s 1984 appeared, he published a fictional account of utopia that sounds almost as bad as Orwell’s hell. More of that later. My purpose here is to chart the decline and fall of Skinnerism, because it opened a new and fascinating chapter in the history of learning. It all began with a baby monkey in Wisconsin.

Harry Harlow was a jovial midwestern psychologist addicted to puns and rhymes who chafed against the confines of his training in behaviorism. His original name was Harry Israel. He trained at Stanford under the dominating psychologist Lewis Terman (who insisted that Harry change his name to Harlow because it sounded less Jewish and therefore improved his chances of getting a job). He never quite bought the idea that reward and punishment alone determined the mind. When he moved to the University of Wisconsin at Madison in 1930, he was unable to build a rat laboratory, so he began rearing baby monkeys in a homemade laboratory instead. But soon he noticed that his baby monkeys, taken from their parents to be reared in perfect cleanliness and disease-free isolation, were growing up to be fearful, antisocial, patently unhappy adults. They clung to cloths as if to rafts on the sea of life. One day in the late 1950s Harlow was on an airplane from Detroit to Madison when he looked down at the fluffy white clouds over Lake Michigan and was reminded of his baby monkeys clinging to their cloths. An idea for an experiment occurred to him. Why not offer a baby monkey the choice between a cloth model of its mother that did not reward it and a wire model of a mother that did reward it with milk? Which would it choose?

Harlow’s students and colleagues were appalled by the idea. It was too fluffy a hypothesis for the hard science of behavior. Eventually Robert Zimmerman was persuaded to do the experiment by the promise of being able to keep the baby monkeys for some more useful work later. Eight baby monkeys were placed in separate cages supplied with both wire model mothers and cloth model mothers—both were later equipped with lifelike wooden heads, mainly to please human observers. In four of the cages, the cloth mother contained a bottle of milk and a teat to drink from. In the other four, the milk came from the wire mothers. If these four baby monkeys had read Watson or Skinner, they should quickly have learned to associate the wire model with food and come to love wire. Their wire mothers rewarded them generously, whereas their cloth mothers ignored them. But the baby monkeys spent nearly all their time on the cloth mothers; they would leave the security of the cloth only to drink from the wire mothers. In a famous photograph, a baby monkey clings with its rear legs to the cloth mother and leans across to get milk from a wire mother.

Many similar experiments followed—rocking mothers were preferred to still ones, warm mothers to chilled ones—and Harlow announced the results in his presidential address to the American Psychological Association in 1958, entitling his talk provocatively “The Nature of Love.” He had dealt a fatal blow to Skinnerism, which had talked itself into the absurd position that the entire basis of an infant’s love for its mother was that the mother was the source of its nourishment. There was more to love than reward and punishment; there was something innate and self-rewarding about an infant’s preference for a soft, warm mother. “Man cannot live by milk alone,” quipped Harlow. “Love is an emotion that does not need to be bottle- or spoon-fed.”

There was a limit to the power of association, a limit supplied by innate preferences. These results seem almost absurdly obvious now, and to anybody who had read Tinbergen’s work on the triggers of behavior in gulls and sticklebacks they were obvious even then. But psychologists did not follow ethology, and such was the grip of behaviorism on psychology that Harlow’s talk was genuinely surprising to many people. A crack had appeared in the edifice of behaviorism, a crack that would widen steadily.

Laboriously, throughout the 1960s, psychologists rediscovered the commonsense notion that people, and animals, are so equipped that they find some things easier to learn than others. Pigeons are rather good at pecking at symbols in Skinner boxes. Rats are good at running through mazes. By the late 1960s, Martin Seligman had developed the vital concept of “prepared learning.” This was almost the exact opposite of imprinting. In imprinting, a gosling becomes fixated on the first moving thing it encounters—mother goose or professor. The learning is automatic and irreversible, but it can attach to a wide variety of targets. In prepared learning, the animal can learn to fear a snake very easily, for instance, but finds it hard to learn to fear a flower: the learning attaches only to a narrow range of targets, and without those targets it will not happen.

This fact was demonstrated by another group of monkeys at Wisconsin a generation after Harlow. Susan Mineka was a student of Seligman, and after she moved to Wisconsin in 1980, she designed an experiment to test the idea of prepared learning. She keeps the original videos of that experiment in a cardboard box in her office to this day. The clue that she followed up was the fact, known since 1964, that monkeys reared in the laboratory show no fear of snakes, whereas all wild-reared monkeys are scared witless by them. Yet it cannot be that every wild-reared monkey has had a bad Pavlovian experience with a snake, for an encounter with a snake is often lethal; you do not get much chance to learn by conditioning that snakebites are venomous. Mineka hypothesized that monkeys must acquire a fear of snakes vicariously, by observing the reactions of other monkeys to snakes. Lab-reared monkeys, not getting this experience, do not acquire the fear.

She first took six baby monkeys born in captivity to wild-born mothers and exposed them to snakes while they were alone. They were not especially afraid. When given the opportunity to reach over a snake to get some food, the hungry monkeys were quick to do so. Then she showed them snakes while their mothers were present. The mothers’ terrified reaction—climbing to the top of the cage, smacking their lips, flapping their ears, and grimacing—was immediately picked up by the offspring, which thereafter were permanently frightened even of a plastic model of a snake. (From now on, Mineka used toy snakes, which were easier to control.)

Next she showed that this lesson was just as easily learned from a strange monkey as from a parent, and then that it was easily passed on: a monkey could acquire a fear of snakes from a monkey that had acquired its own fear in this way. Next, Mineka wanted to see if it was equally easy to get one monkey to teach a naive monkey to fear something else, such as a flower. The problem was how to get the first monkey to react with fear to a flower. Mineka’s colleague, Chuck Snowdon, suggested that she use a newly invented technology, videotape. If monkeys could watch videotapes and learn from them, then the videos could be doctored to make it appear that the “teaching” monkey was afraid of a flower, when it was in fact reacting to a snake.

It worked. Monkeys had no difficulty watching videotapes of other monkeys, and they reacted to taped monkeys just as they did to real ones. So Mineka prepared tapes in which the bottom half of the screen was spliced in from another scene. This made it appear either that a monkey was calmly reaching over a model of a snake to get at some food, or that a monkey was reacting with terror to a flower. Mineka showed the doctored tapes to naive lab-reared monkeys. In response to the “true” tape (fear in response to a snake, nonchalance in response to a flower), monkeys quickly and robustly drew the conclusion that snakes are frightening. In response to the “false” tapes (fear in response to a flower, nonchalance in response to a snake), monkeys merely drew the conclusion that some monkeys are crazy. They acquired no fear of flowers.

This was, in my view, one of the great moments in experimental psychology, alongside Harlow’s wire mother. It has been repeated in all sorts of different ways, but the same conclusion always emerges clearly: monkeys very easily learn to fear snakes; they do not easily learn to fear most other objects. It shows that there is a degree of instinct in learning, just as imprinting shows that there is a degree of learning in instinct. Mineka’s experiment has been much examined by blank-slate zealots desperate to find flaws in it, but so far it has resisted debunking.

Monkeys are not people, yet it is undoubtedly true that people are often afraid of snakes. Snake-fear is one of the commonest forms of phobia. Tellingly, many people report that they developed their fear through a vicarious experience, such as seeing a parent react with fear to a snake. People are also commonly afraid of spiders, the dark, heights, deep water, small spaces, and thunder. All of these were a threat to Stone Age people, whereas the much greater threats of modern life—cars, skis, guns, electric sockets—simply do not induce such phobias. It defies common sense not to see the handiwork of evolution here: the human brain is prewired to learn fears that were of relevance in the Stone Age. And the only way that evolution can transmit such information from the past to the design of the mind in the present is via the genes. That is what genes are: parts of an information system that collects facts about the world in the past and incorporates them into good design for the future through natural selection.

Of course, I cannot prove the last few sentences. I can produce plenty of evidence that fear conditioning, in human beings as in other mammals, depends heavily on the amygdala, a small structure near the base of the brain. I can even pass on a few hints about which servants of Vulcan are digging the trenches to and from the amygdala and how (it looks like the facilitation of glutamate synapses). I can tell you about twin studies showing that phobias are heritable, which implies genes at work. But I cannot be sure that all this is designed according to a plan laid out in a genetic instruction for wiring the brain that way. I just cannot think of a better explanation. Fear learning looks like a clear-cut module, a blade on the mind’s Swiss army knife. It is nearly automatic, encapsulated, selective, and operated by dedicated neural circuitry.

It still has to be learned. And you can also learn to fear cars, dentists’ drills, or sealskin coats. Clearly Pavlovian conditioning can create a fear of any kind. But it can undoubtedly establish a stronger, quicker, and longer-lasting fear for snakes than for cars, and so can social learning. In one experiment, human subjects were conditioned to fear snakes, spiders, electrical outlets, or geometric shapes. The fear of snakes and spiders lasted much longer than the other fears. In another experiment, the subjects were conditioned (by loud bangs) to fear either snakes or guns. Again, the fear of snakes lasted longer than that of guns—even though snakes do not go bang.

That a fear may be easily learned does not mean it cannot be prevented or reversed. Monkeys that have watched videos of other monkeys nonchalantly ignoring snakes become resistant to learning a fear of snakes even if later exposed to a video of an alarmed monkey. Children with pet snakes can apparently “immunize” their friends against learning a fear of snakes. So this is not, Mineka stresses, a closed instinct. It is still an example of learning. But learning requires not just genes to set the system up for learning but genes to operate it as well.

The most exciting thing about this story is the way it brings together each of the themes I have explored in this book so far. Superficially, a fear of snakes looks exactly like an instinct. It is modular, automatic, and adaptive. It is highly heritable—twin studies show that phobias, like personality, owe nothing to shared family environment but a great deal to shared genes. And yet—Mineka’s experiments show it is entirely learned. Was there ever a clearer case of nature via nurture? Learning is itself an instinct.


Hard-line behaviorists are rare birds these days. Few remain who have not been persuaded by the cognitive revolution, and by experiments like Mineka’s, to believe that the human mind learns what it is good at learning, and that learning requires more than a general-purpose brain; it requires special devices, each content-sensitive and each expert at extracting regularities from the environment. The discoveries of Pavlov, Thorndike, Watson, and Skinner are valuable clues to how these devices go about their work, but they are not the opposite of innate: they depend on innate architecture.

There does remain a group of scientists who still object to injecting too much nativism into learning theory. They are called connectionists. As usual, what they actually say about how the brain works is barely distinguishable from what most nativists claim. But, also as usual, in arguments over nature versus nurture the two sides like to paint each other into a corner, and feelings run high. The only difference I can see between the two is that the connectionists stress the openness of brain circuits to new skills and experiences while nativists stress their specificity. If you will forgive a bit of hack Latin, connectionists see the tabula as half rasa; nativists see it as half scripta.

Connectionism is not really about real brains at all. It is about building computer networks that can learn. It gets its inspiration from two simple ideas: “hebbian correlation” and “error back-propagation.” The first term refers to the Canadian psychologist Donald Hebb, whose throwaway remark in 1949 placed him firmly in the history books:

When an axon of cell A is near enough to excite cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A’s efficiency, as one of the cells firing B, is increased.

What Hebb is saying is that learning consists of strengthening connections that are frequently in use. The servants of Vulcan dig out the channels that are used, making them flow better. Ironically, Hebb was no behaviorist—indeed, he was a fervent enemy of Skinner’s idea that the black box must remain closed. He wanted to know what changed inside the brain, and guessed correctly that it was the strength of the synapse. The phenomenon of memory, at the molecular level, seems to be precisely hebbian.
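Hebb’s rule, as paraphrased above, fits in a few lines of code. The sketch below is only a toy illustration (the function name and numbers are invented for this example, and real synapses are far messier): a connection is strengthened whenever the cells on either side of it fire together.

```python
# Toy illustration of Hebb's rule: "cells that fire together wire together."
# The weight of the connection grows only when both cells are active at once.

def hebbian_update(weight, pre, post, rate=0.1):
    """Return the new synaptic weight after one time step.

    pre and post are the activities (0 or 1) of cells A and B;
    the product pre * post is nonzero only when they fire together.
    """
    return weight + rate * pre * post

w = 0.0
for _ in range(10):                      # cells A and B fire together ten times...
    w = hebbian_update(w, pre=1, post=1)
print(round(w, 2))                       # ...and the channel between them deepens
```

Note that when either cell is silent (`pre` or `post` is 0) the weight is left untouched, which is exactly the selectivity Hebb’s remark describes: only correlated activity digs the channel deeper.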

A few years after Hebb’s insight, Frank Rosenblatt built a computer program called a perceptron, which consisted of two layers of “nodes” or switches, the connections between which could be varied. Its job was to vary the strengths of the connections until its output had the “correct” pattern. The perceptron achieved little; but 30 years later, when a third, “hidden” layer of nodes was added between the input and output layers, connectionist networks began to take on the properties of a primitive learning machine, especially after being taught “error back-propagation.” This means adjusting the strengths of the connections between the units in the hidden layer and the output layer where the output was in error, and then adjusting the strengths of the previous connections—propagating the error correction back up the machine. It is broadly the same point about learning from prediction errors that modern Pavlovians make and that Wolfram Schultz found manifest in the dopamine system.
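The three-layer arrangement described above can be sketched in miniature. The toy Python network below is an illustrative reconstruction, not any historical program (the class and variable names are invented): it has an input layer, a hidden layer, and an output unit, and it learns by computing the error at the output and propagating it back through the hidden layer to adjust both sets of weights.

```python
import math
import random

def sigmoid(z):
    """Squash a node's summed input into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

class TinyNet:
    """A minimal three-layer network: input -> hidden -> output."""

    def __init__(self, n_in=2, n_hidden=2, seed=0):
        rng = random.Random(seed)
        # Connection strengths start out small and random.
        self.w1 = [[rng.uniform(-1, 1) for _ in range(n_in)]
                   for _ in range(n_hidden)]
        self.w2 = [rng.uniform(-1, 1) for _ in range(n_hidden)]

    def forward(self, x):
        self.h = [sigmoid(sum(w * xi for w, xi in zip(row, x)))
                  for row in self.w1]
        self.y = sigmoid(sum(w * hi for w, hi in zip(self.w2, self.h)))
        return self.y

    def backprop(self, x, target, rate=0.5):
        """One step of error back-propagation.

        Blame for the output error is assigned first to the
        hidden-to-output weights, then passed back up the machine
        to the input-to-hidden weights.
        """
        y = self.forward(x)
        delta_out = (y - target) * y * (1 - y)        # error at the output
        deltas_hidden = [delta_out * w * h * (1 - h)  # error passed backward
                         for w, h in zip(self.w2, self.h)]
        for j in range(len(self.w2)):
            self.w2[j] -= rate * delta_out * self.h[j]
        for j, dh in enumerate(deltas_hidden):
            for i in range(len(x)):
                self.w1[j][i] -= rate * dh * x[i]

net = TinyNet()
x, target = [1.0, 0.0], 1.0
before = (net.forward(x) - target) ** 2
for _ in range(200):
    net.backprop(x, target)
after = (net.forward(x) - target) ** 2
print(after < before)   # the prediction error shrinks with training
```

The point of the sketch is the direction of travel: the correction starts at the output, where the error is visible, and works backward through the hidden layer—learning from prediction errors, just as the text describes.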

Connectionist networks, suitably designed, are capable of learning regularities of the world in a manner that looks a bit like the way the brain works. For instance, they can be used to categorize words into noun-verb, animate-inanimate, animal-human and so on. If damaged, or “lesioned,” they seem to make mistakes similar to those made by people who have had strokes. Some connectionists feel that they have taken the first steps toward re-creating the basic workings of the brain.

Connectionists deny that they believe in nothing but association. They do not, like Pavlov, claim that learning is a form of reflex; nor do they claim, like Skinner, that a brain can be conditioned to learn anything with equal ease. Their hidden units play the innate role that Skinner was unwilling to grant the brain. But they do claim that, with a minimum of prespecified content, a general network can learn a wide variety of rules about how the world works.

In that sense they are in the empiricist tradition. They dislike excessive nativism, deplore the emphasis on massive modularity, and are disgusted by cheap talk of genes for behavior. Like David Hume, they believe that the knowledge the mind has derives largely from experience.

“That’s what’s so nice about empiricist cognitive science: you can drop out for a couple of centuries and not miss a thing,” says the philosopher Jerry Fodor. Although Fodor has become a trenchant critic of taking nativism too far, he has no time for the connectionist alternative. It is “simply hopeless,” because it can neither explain what form logical circuits must take nor explain the problem of abductive—“global”—inference.

Steven Pinker’s objection is more specific. He says that the achievements of connectionists are in direct proportion to the extent to which they pre-equip their networks with knowledge. Only by prespecifying the connections can you make a network learn anything useful. Pinker compares connectionists to the man who claimed to be able to make “stone soup”—the more vegetables he added, the better it tasted. In Pinker’s view, the recent successes of connectionism are a backhanded compliment to nativism.

In response, connectionists say they are not denying that genes may set the stage for learning; they are saying only that there may be general rules about how networks of synapses change to manifest this learning, and that similar networks may operate in different parts of the brain. They make much of recent discoveries of neural plasticity. In deaf people, or amputees, disused parts of the brain are reallocated to different functions, implying that these parts are multipurpose. Speech, normally a left hemisphere function, is in the right hemisphere in some people. Violinists have a larger than usual somatosensory cortex for the left hand.

Far be it from me to referee such arguments. I would make only my usual judgment: something can be partly true without being the complete answer. I believe that networks will be discovered in the brain that use their general properties to learn about regularities in the world; that they use principles similar to those of connectionist networks; and that similar networks may turn up in different mental systems, so that learning to recognize a face uses a neuronal architecture similar to that used in learning to fear a snake. Discovering those networks and describing their similarities will be fascinating work. But I also believe that there will be differences between networks that do different jobs, differences that encode preknowledge in the form of evolved design to a greater or lesser extent. Empiricists stress similarity; nativists stress difference.

Modern connectionists, like other empiricists before them—Hebb, Skinner, Watson, Thorndike, and Pavlov, not to mention Mill, Hume, and Locke—have undoubtedly added a brick to the wall. They are wrong only when they try to pull somebody else’s bricks out, or to claim that the wall is held up only by empiricist bricks.


This brings me back to Skinner. You will recall that he wrote a utopia. It describes as ghastly a place as Huxley’s Brave New World or Galton’s Kantsaywhere, and for the same reason: it is unbalanced. A world of pure empiricism untempered by genetics would be as terrible as a world of pure eugenics untempered by environment.

Skinner’s book Walden Two is about a commune that is a suffocating cliché of fascism. Young men and women stroll through the corridors and gardens of the commune smiling and helping each other like people in a Nazi or Soviet propaganda film; coerced conformity is all around. No dystopian cloud mars the sky, and the hero, Frazier, is all the more creepy for the fact that his creator plainly admires him.

The novel is told through the eyes of a professor, Burris. He is taken by two former students to see an old colleague, Frazier, who has founded a community called Walden Two. Burris, accompanied by the students and their girlfriends plus a cynic called Castle, spends a week at Walden Two, admiring Frazier’s apparently happy society based entirely on scientific control of human behavior. Castle leaves, scoffing; Burris follows at first but then returns, drawn back by the magnetism of Frazier’s vision:

Our friend Castle is worried about the conflict between long-range dictatorship and freedom. Doesn’t he know he’s merely raising the old question of predestination and free will? All that happens is contained in an original plan, yet at every stage the individual seems to be making choices and determining the outcome. The same is true of Walden Two. Our members are practically always doing what they want to do—what they “choose” to do—but we see to it that they will want to do precisely the things which are best for themselves and the community. Their behavior is determined, yet they are free.

I’m on Castle’s side. But at least Skinner is honest. He sees human nature as entirely caused by outside influences, in a sort of Newtonian world of linear environmental determinism. If behaviorists were right, then the world would be like that: a person’s nature would simply be the sum of external influences upon him or her. A technology of behavior control would be possible. In a preface added to the second edition in 1976, Skinner shows that he had few second thoughts, though like Lorenz he almost inevitably tries to tie Walden Two to the environmental movement.

According to Skinner, only by dismantling cities and economies, and replacing them with behaviorist communes, can we survive pollution, the exhaustion of resources, and environmental catastrophe: “Something like Walden Two would not be a bad start.” The truly scary thing is that Skinner’s vision attracted followers who actually built a commune and tried to run it along Frazier’s lines. It still exists: it is called Los Horcones, and it is in Mexico.
