Scientists have repeatedly failed to recognize the complexity of animal cognition. Will we make the same mistakes with AI?
Animals think about a lot more than we once gave them credit for. It’s now common to read about chimpanzees that play elaborate social games, scrub jays that hide — even camouflage — food from rivals, or bees that can learn abstract rules. But as recently as the middle of the 20th century, attributing mental states to animals was taboo in science. Behaviorists studied simple and controlled behaviors — press a lever, receive a food pellet — while the naturalists who did observe sophisticated animal behaviors in the wild tended to describe them in terms of innate instincts or adaptations to ecological niches. Neither group sought to explain animal behavior in terms of mental states like beliefs, theories, or intentions.
In the decades since, we have been surprised to uncover complex cognition across the animal kingdom: first in our closest primate relatives, then in more distant creatures like crows and parrots, and most recently in invertebrates like the octopus and the honey bee. The progression from an overly cautious denial of complex mentality — driven by a desire for rigor and a fear of anthropomorphism — to a more sophisticated understanding of animal minds is one of the great stories of 20th century science. And it holds lessons for how humanity can approach the most critical intelligence explosion since the Paleolithic — that of artificial intelligence.
In the late 19th century, psychology relied excessively on the introspective theorizing of scientists. Lacking empirical rigor, the field came close to stagnating in a morass of ill-defined and irresolvable disputes. As Darwin’s theory of evolution gained acceptance, scientists became more interested in studying the continuities between human and animal minds, but this interest led to a methodology characterized by unchecked anthropomorphism.
In his 1882 book Animal Intelligence, George Romanes, an academic friend of Darwin’s, described scorpions that attempted suicide and foxes that sought revenge after failed hunting expeditions.
1
One of the most famous examples of runaway anthropomorphism was a stomping horse. Clever Hans, an Orlov trotter, wowed adoring crowds with his ability to add, subtract, and even tell time, indicating his answers by tapping his hoof. But an investigation showed that, unbeknownst to his owner, who by all accounts believed in the horse’s abilities, Hans only arrived at a correct answer by reading the facial expressions and body language of whoever asked the question.
Something stringent was needed to rein in such credulity. Behaviorism, which held that both human and animal behavior could and should be explained without reference to thoughts or feelings, offered a solution. John Watson’s 1913 article “Psychology as the Behaviorist Views It” called on scientists to stop studying any behavior that could not be outwardly observed and measured — including the mind: “The time seems to have come when psychology must discard all reference to consciousness; when it need no longer delude itself into thinking that it is making mental states the object of observation.”
2
The work of psychologist B.F. Skinner, often called the father of radical behaviorism, is emblematic of how empirical rigor went hand in hand with distorted thinking about animal cognition. Skinner’s 1938 book, The Behavior of Organisms, describes dozens of well-controlled experiments on rats, conducted in precisely constructed operant conditioning chambers called Skinner boxes. That pressing levers for food was the main behavior tested, and rats the primary animal tested, was not, for Skinner, a limitation. “The only differences I expect to see revealed between the behavior of rat and man (aside from enormous differences of complexity) lie in the field of verbal behavior,” he wrote.
At least in the West, even scientists who studied animal behavior in the wild shied away from attributing too much mental sophistication to their subjects. They were wary of getting “too close” to animals — Western naturalists even considered it bad practice to give names to primates being studied.
3
It was against this background that Jane Goodall arrived in a forest in Gombe, Tanzania, and helped launch the first significant re-expansion of animal cognition.
Goodall first observed chimpanzees using sticks to extract termites from their mounds in 1960. Although Darwin and his contemporaries had readily accepted tool use among apes, the idea had fallen out of favor. So central was the belief that tool use was exclusive to humans that Goodall’s mentor, the Kenyan-British anthropologist Louis Leakey, told her that if her finding held, “we should by definition have to accept the chimpanzee as Man.”
4
Goodall was initially met with skepticism. Many criticized what they described as her sentimentality. Today, not only is tool use among chimps widely accepted, but it has also been described in many other hominids, as well as in elephants, dolphins, crabs, and birds.
As primatologist Frans de Waal recounts in his history of animal cognition, Are We Smart Enough to Know How Smart Animals Are?, the 1970s and 1980s saw a boom in both the observation of wild behaviors and the development of more sophisticated laboratory techniques to investigate them. As a result, scientists have identified sophisticated behaviors that the behaviorists would not have predicted — or even been able to observe. More importantly, they have been able to pose and test cognition-based theories to explain those behaviors in terms of what animals think and feel.
The behaviorist prohibition on discussing mental states is now regarded as overly restrictive, if not wrong altogether. De Waal argues, for example, that the complicated “political” jockeying among apes is best explained by their possessing a “theory of mind,” or the ability to model the beliefs and intentions of other agents. Even though their theory of mind may be different from and more “limited” than that of humans, it is now the consensus that primates do share this basic cognitive capacity, and many others, with humans.
5
Anthropomorphism is not always an error, especially with creatures that are in fact closely related to humans.
While Goodall was studying chimpanzees in Tanzania, other scientists were discovering unexpected cognitive capabilities in birds. A report from 1960 documents just one species capable of tool use — the woodpecker finch of the Galapagos Islands. Research in the 1970s and 80s added more species to the list — mostly in the corvid family, a clever group that includes ravens, jackdaws, and crows. Crows were once dubbed “feathered apes” after they were found to use sticks as tools and to engage in sophisticated problem-solving. Most famously, Irene Pepperberg's thirty-year experiment with an African gray parrot named Alex uncovered unimagined cognitive abilities. Not only was Alex capable of identifying colors, shapes, and quantities, but he also demonstrated an understanding of more abstract concepts such as same/different and bigger/smaller.
In recent decades, the circle of cognition has expanded to creatures even further removed from humans. Octopuses were perhaps the animal cognition celebrities of the 2010s, with their sophisticated distributed nervous systems, behaviors suggestive of play, problem-solving abilities, idiosyncratic “personalities,” and their awareness of other agents.
Equally startling is the sophistication of the honeybee. Insects were long thought to be “robotic,” driven purely by instinct. Jean-Henri Fabre, who studied wasps, bees, and many other insects from the 1860s until his death in 1915, commented on their “machine-like obstinacy.” In the mid-1940s, entomologist and eminent ethologist Karl von Frisch discovered that bees communicate through a “waggle dance,” an elaborate choreography capable of describing the direction and distance to flowers, water sources, or new nest sites. In this century, bees have displayed the ability to learn rules that involve abstract, multimodal representations of sameness and difference.
With more research, scientists have found more complex cognition than expected in animals further and further from humans. Why does the circle keep widening?
As a reaction to the field’s early excesses and credulity, behaviorism demanded strictly controlled experiments, limited to single behaviors like lever-pressing and simple stimuli such as flashing lights. The behaviorist’s error was to think that these artificially simple cases could be extended to explain all behaviors in all organisms. Their tools made it difficult to notice more complicated behaviors, and even more difficult to explain them once discovered.
One of the most forceful arguments against the behaviorists came in a review of Skinner’s book Verbal Behavior, which sought to explain language as a behavioral phenomenon like any other — a promise Skinner had made in The Behavior of Organisms. The review, which appeared in 1959 in the journal Language, argued scathingly that Skinner underestimated the depth of human language, which could not be explained simply by extending the methods of stimulus, response, and reward he had used to study rats.
It is now seen as a turning point, a milestone in the “cognitive revolution” in which the sciences of the mind turned away from behaviorism and looked instead to mental representations and operations. It also greatly raised the profile of the young linguist who had written it, Noam Chomsky. Chomsky understood that to accurately understand human and animal behavior, science needed methods that could accommodate behavioral complexity. “It is clear,” he wrote, “that what is necessary in such a case is research, not dogmatic and perfectly arbitrary claims, based on analogies to that small part of the experimental literature in which one happens to be interested.” And once complex cognitive abilities could be admitted as a hypothesis, methods could be developed to study them.
As researchers learned to treat animals with empathy and imagination, they discovered more and more capabilities. Breakthroughs emerged when scientists were able to imagine the world as experienced by each particular animal. Tool use was once thought to be conspicuously lacking among gibbons, small apes native to Southeast Asia. When tools that could be used to get food were placed in front of them on the ground, the gibbons did not grab them. The problem was not with gibbon intelligence, but human imagination. Gibbons live in trees. Their hands are well suited to swinging, but poorly adapted for picking things off the ground. When the tools were instead dangled from a branch, the gibbons had no problems and readily used them. Elephants initially failed the mirror test, a common method for determining self-recognition, because the mirrors used were too small. And in a true lack of empathy, many behaviorists assumed that to motivate their test subjects they had to keep them half starved. It’s now clear that animals that are treated well and feel cared for will, as with humans, be far more likely to act in interesting ways.
Wild observations are also a way of meeting animals where they are (literally) to see what they are capable of. Scientists now spend hundreds of hours in the field simply observing (grad students spend even more). Animals will often behave very differently among their own kind and in their natural habitat than they will in a sterile lab surrounded by lab-coated hairless primates. More wild observation has uncovered more sophisticated behaviors than lab scientists had imagined animals capable of.
And we have also learned that brains can operate in ways very different from our own. Bird intelligence was surprising to ornithologists because birds have no neocortex. Bee intelligence was surprising because they have very little brain at all. (Despite being the first to decode the waggle-dance, Karl von Frisch once said, “The brain of a bee is the size of a grass seed and is not made for thinking.”) In each case, nature has more ways of implementing cognition than we had thought to look for. Birds have alternative brain regions that perform the same function as the cortex. Bees have very densely packed neurons that fit quite a lot of cognition into something the size of a grass seed. Most strangely of all, the octopus has a cluster of neurons in each of its tentacles, resulting in a kind of thinking that is so distributed that it is hard for us to imagine.
The wariness of getting “too close” to animals and of overestimating their cognitive abilities still exists — and for good reason. Selection effects, where researchers are more likely to work with an animal if they antecedently believe that the animal can do interesting things, remain at work. And publishing incentives reward impressive and surprising skills. There’s no market for a glowing profile of the scientist who found a deflationary explanation for an animal behavior. Few people are going to tweet a video of a salmon failing the mirror test.
So a common dichotomy pits animal enthusiasts who over-attribute mentality to animals against stern, hard-nosed buzzkills who maintain their distance and thus their methodological rigor. But doing hard-nosed and rigorous work requires something different — something akin to love: a holistic understanding of the animal, born from long periods of sustained attention. For this sort of work, the best motivator is affection. Indeed, one of de Waal’s lessons is that one cannot study animal intelligence “without an intuitive understanding grounded in love and respect.”
And now, an entirely different form of intelligence has arrived. The study of AI lacks coherent methods. AI capabilities are superhuman in some ways and dangerously limited in others. And no one is yet sure what to make of something so human but alien at the same time. What lessons does the past century of research in animal cognition hold for how to think about today’s AI systems?
In many ways, our understanding of large language models is where the study of animals was in the middle of the 20th century. Like animal cognition, the field of AI is overshadowed by founding traumas — cases in which credulity and anthropomorphism have led researchers to exaggerate and misconstrue the capabilities of AI systems. Researchers are well aware of the ELIZA effect, our tendency to readily read human-like intentionality into even very simple AI systems — so named for an early chatbot built in 1964 that used simple heuristics to imitate a psychoanalyst. They remember past AI winters, when AI progress had been overpromised and underdelivered, and disappointed funders cut jobs. Many are understandably wary of credulity and hype.
And few topics are more hype-prone right now than language models. One way to impose rigor and combat our natural tendency to anthropomorphize is to forbid using psychological language to describe AI systems. As Shevlin and Halina argue in Nature Machine Intelligence, using certain psychological terms like “theory of mind,” “motivation,” and “understanding” can be misleading if they encourage people to make inferences which might hold for human minds, but not for AI systems.
6
If GPT-4 can be said to have beliefs, its beliefs must be in some sense very different from human beliefs. If GPT-4 can be said to have a theory of mind, its theory of mind must have developed in a very different way than ours did. (More speculatively: if a future GPT-6 is conscious, it will have experiences that are quite strange and hard for us to imagine.)
Another way to combat confusion is to emphasize what the models are trained to do and how different that is from humans: large language models have learned to produce text in a very different way than we have. But as with behaviorism, these understandable prohibitions risk leading us to retreat to a narrow explanation of AI behavior that underestimates what models can actually do. Describing language models as “just” predicting the next token doesn’t do justice to the surprising ways they operate.
For example, it’s now clear that language models don’t just model shallow statistical text patterns — they model aspects of the world behind the text. Indeed, it’s possible to identify “facts” that a large language model takes to be true. Researchers found that they could selectively edit a language model to make it “believe” that the Eiffel Tower is located in the city of Rome.
7
The model’s outputs reflect this new “belief” in a way that is both precise (its outputs don’t simply move all of Paris to Rome, only the Eiffel Tower) and also generalized (in a wide range of differently worded questions about Rome or the Eiffel Tower, it will produce outputs consistent with the Eiffel Tower being in Rome, such as recommending it as a tourist destination for visitors to Italy). More recently, another group trained a language model on transcripts of a simple board game, and then probed its activations to find it had learned to represent different states of the board.
8
In other words, the model wasn't just combing its data to identify the next move. It had developed an internal picture of the game board and intuited its rules.
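For readers curious what “probing” means in practice, here is a minimal sketch of the idea, not the researchers’ actual code: a small linear classifier is trained to read the state of a single board square out of the model’s hidden activations. The activations and labels below are random stand-ins, and the dimensions and names are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Illustrative stand-ins: in the real experiments, activations would come from
# a transformer trained on game transcripts, and the labels would be the true
# state of one board square at each point in a game.
torch.manual_seed(0)
hidden_dim, n_examples = 512, 2048
activations = torch.randn(n_examples, hidden_dim)   # hidden states (random here)
square_state = torch.randint(0, 3, (n_examples,))   # e.g., empty / black / white

# A linear probe: a single layer trained to predict the board state directly
# from the activations. If such a probe succeeds on held-out positions, the
# information is present in the model's internal representations.
probe = nn.Linear(hidden_dim, 3)
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):
    optimizer.zero_grad()
    loss = loss_fn(probe(activations), square_state)
    loss.backward()
    optimizer.step()

accuracy = (probe(activations).argmax(dim=-1) == square_state).float().mean()
print(f"probe accuracy: {accuracy.item():.2f}")
```

On real activations, it is high accuracy on held-out game positions, not on the training data, that suggests the model has built an internal picture of the board rather than memorizing surface statistics of move sequences.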
Just as Skinner thought that the differences between rats, apes, and humans were in some sense superficial, regarding all LLMs as just next-token predictors can blind one to the important differences between them. If we say that both GPT-2 and GPT-4 are “stochastic parrots,” then what explains the fact that GPT-4 can write a Shakespearean sonnet about how to use a Python package, pass the bar, or solve difficult logic puzzles — skills far outside of GPT-2’s capabilities? We need to investigate the output of each model and explain why they are different.
As with animal cognition, a desire to impose rigor can limit one’s ability to see how interesting the behavior to be explained is. Some critics are so dismissive of LLMs that they have a blanket policy of refusing to look at any of their outputs. This has the effect of making it impossible to have one’s mind changed about what the models are able to do. If one has decided in advance that an AI system is not that interesting, then one is less likely to look hard for interesting behaviors. Chomsky recently described ChatGPT as “a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question.”
As evidence for this claim, he declared in his May op-ed in the New York Times that because “these programs cannot explain the rules of English syntax, for example, they may well predict, incorrectly, that ‘John is too stubborn to talk to’ means that John is so stubborn that he will not talk to someone or other (rather than that he is too stubborn to be reasoned with).” Readers immediately noticed that ChatGPT can, in fact, infer the correct interpretation. The study of language models is still developing. We know so little about how they work that we would be wise to remember Chomsky’s admonition to Skinner: what is needed is research, not claims based on analogies to that small part of the literature in which one happens to be interested.
Fortunately, large language models have their equivalents of naturalists — enthusiasts, including academics and industry researchers as well as non-professionals, who spend many hours engaging with the models. These enthusiasts have often been at the bleeding edge of discovering what large language models are capable of, their failure modes, and their idiosyncrasies. What LLM enthusiasts have brought to our understanding of AI is a plethora of interesting capacities unlocked by doing what they love — messing around with LLMs for hours.
These investigations revealed one way LLMs are like animals: if you reshape tasks in order to better match the subject’s natural limitations and abilities, you can elicit better performance. One obvious limitation of LLMs is that, while they are experts at continuing text, they don’t have any space to think in while answering a question. Simply adding “Let’s think step by step” to a prompt after you ask them a question can be thought of as giving the LLMs a place to think — their own outputs — and encouraging them to use it. For example, GPT-3 often initially fails at mathematical word problems. However, given the same question followed by “Let’s think step by step,” the model will respond with the steps of reasoning needed to reach the right answer. Versions of this technique, called “Chain of Thought” prompting, have been discovered by ML academics as well as amateurs playing with early versions of GPT.
Chain of Thought has a natural gloss as enabling models to complete a task in a way that is suited to their capabilities, like a gibbon grabbing a dangling tool. Prompting models to explain their reasoning, letting them choose between outputs, or simply providing clearer instructions can also yield impressive results. The things that elicit capabilities may be simple or complex, but in either case, they require engagement with the models to discover.
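To make concrete how little machinery this takes, here is a minimal sketch of zero-shot Chain of Thought prompting. The word problem and the query_model function are placeholders, stand-ins for whatever model or API one actually uses, rather than any particular implementation from the literature.

```python
# Minimal sketch of zero-shot Chain of Thought prompting. `query_model` is a
# hypothetical stand-in for a real model call; here it just echoes the prompt
# so the script runs on its own.
def query_model(prompt: str) -> str:
    return f"[model output for prompt: {prompt!r}]"

question = (
    "A juggler has 16 balls. Half of the balls are golf balls, "
    "and half of the golf balls are blue. How many blue golf balls are there?"
)

# Direct prompting: the model is asked for the answer in one shot.
print(query_model(question))

# Chain of Thought prompting: the appended phrase invites the model to use
# its own output as scratch space before committing to an answer.
print(query_model(question + "\n\nLet's think step by step."))
```

The entire intervention is a single appended sentence; everything interesting happens in how the model uses the extra room it has been given.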
But the same forces that make humans susceptible to the Clever Hans effect are present, if not stronger, in the case of language models. They are optimized to please us, and to interface with us through the most human-like possible medium, language. And they are good at responding to human input and picking up on user intentions. This makes users especially susceptible to confirmation bias. One LLM naturalist I spoke to — Janus, a husband-and-wife duo who write under a single name — warned me about the danger of projection: “If you have a narrative about what the model is, even if you’re not explicitly saying it, everything you say will contain that influence — and this will infect the model.” Users who see language models as simplistic may get simplistic behavior out of them; users who see large language models as conscious may, famously, get responses that make them appear conscious.
Today’s LLMs can seem like a perfect storm for throwing off our instinctive understanding of minds. They are optimized to act like people, to interact with us in language we understand. But they share less evolutionary heritage with us than bees and octopuses — in fact, they share none. This could make one pessimistic that we will either have to banish all talk of inner states — à la behaviorism — or else get hopelessly confused. Animal cognition offers hope that with care we can do better than either of these. To adopt empathy and respect for these models, in order to spend time with them and appreciate their “perspective,” does not mean assuming humanlike cognition or subjectivity. “People really should understand the ways that these models are very different from humans,” Janus said. “And they should think about that as part of why they are fascinating and beautiful.”
The strangeness of LLMs means that they are smart in their own way. They can neither be presumed to be mere next-token predictors, nor to map neatly onto human psychology. As de Waal says of chimpanzees, thinking of large language models only in terms of whether they meet or fail to meet human standards of intelligence does not do them justice. Naive anthropomorphism can give us an inflated view of what they can do. It can also lead us to underestimate them by blinding us to complex and inhuman ways they have of being intelligent.
G. J. Romanes, Animal Intelligence (London: Kegan Paul, Trench & Co., 1882).
J. B. Watson, “Psychology as the behaviorist views it,” Psychological Review 20 (1913): 158-177.
Starting in the late 1940s, Japanese scientists were pioneering methods that are now standard practice: habituating wild monkeys to the presence of humans, identifying individuals, and observing them throughout their lives. But at the time, their work was overlooked or dismissed by their Western counterparts.
Jane Goodall, In the Shadow of Man (1971, repr. New York: Houghton Mifflin, 2000): 37.
To be sure, at times primatologists overplayed their hand in hypothesizing far more continuity between apes and humans — audacious attempts to raise chimpanzees as humans and teach them full-fledged language failed (“Nim Chimpsky” being the most notorious), although apes have learned some terms and sign language.
Henry Shevlin and Marta Halina, “Apply rich psychological terms in AI with care,” Nature Machine Intelligence 1 (2019): 165-167.
Kevin Meng, David Bau, Alex Andonian, and Yonatan Belinkov, “Locating and Editing Factual Knowledge in GPT,” arXiv preprint arXiv:2202.05262 (2022).
Kenneth Li, “Do Large Language Models learn world models or just surface statistics?” The Gradient, 2023.
Robert Long is a Philosophy Fellow at the Center for AI Safety in San Francisco. He holds a PhD in philosophy from NYU, and blogs at experiencemachines.substack.com.