Anthropomorphism: What it is and a bit of how it works

In my last post, I introduced this series on anthropomorphism. In this post, I aim to describe what anthropomorphism is and what we understand so far about how it works.

Anthropomorphism is a process of inductive inference about unobservable states or characteristics in a non-human entity, as opposed to a description of its observable behavior. Mental states, for example (beliefs, desires, goals, attention, perceptions, and emotional states), cannot be perceived or observed directly; rather, observers infer them in others. So, it’s us interpreting what’s going on with some object based on what we’re seeing, and what we’re interpreting is something we’d associate with humans rather than objects, like thoughts and feelings.

Okay, so if you see someone in the corner of a coffee shop with their head down, facial features set in a certain way, and wet tracks on their cheeks, you might say that they’re sad. You can’t see inside their brain (so to speak) and know they’re sad. Technically, rather than stating “They’re sad” as an observed fact, it would be more accurate to describe what you actually can see: that their head is down, that their eyebrows are closer together with a crinkle between them, and so on. But you infer what’s going on with them based on these observables, and you say simply, “They’re sad.”

And honestly, if someone asked you, “What’s going on with that person in the corner?”, it would probably seem pretty weird if you responded with anything like, “Their head is down, their eyebrows are closer together with a crinkle between them, and there are wet tracks on their cheeks.” The expected answer is more like a simple “They’re sad.”

Why is that?

The philosopher Daniel Dennett’s concept of the intentional stance [1] is helpful here. Dennett defined three strategies that humans adopt (automatically) to more efficiently explain and predict our environments. He calls these strategies “stances.”

The physical stance

To explain and predict the behavior of a system using the physical stance, we use information about its current physical state (as in the arrangement of molecules and the details of their movement trajectories) along with what we know about the laws of physics (that is, our knowledge of how these molecules’ movements might interact with the movements of other molecules, and so on). Chemists whose project is explaining and predicting the behavior of molecules in a laboratory setting might find great success in using this stance.

The design stance

To explain and predict the behavior of a system using the design stance, we use our knowledge about the way the system was designed to function. A person checking a clock for the current time might predict that the hands will be pointing at different numbers an hour from now, because the clock is designed to track time as it advances. Importantly, this person does not have to know the precise physical mechanisms behind all the molecules that constitute the clock, nor the physical laws that dictate their movement, to make this prediction: she has sufficient (and far more efficient!) predictive power from adopting the design stance in this instance, rather than the physical stance.

The intentional stance

Finally, to explain and predict the behavior of a system using the intentional stance, we infer beliefs, desires, intentions, and other mental states in the system. To do this, we decide to treat the system as a rational agent, inferring what beliefs, then desires, then goals and intentions the agent, in Dennett’s words, “ought to have, given its place in the world and its purpose” (Dennett, 1981, p. 61). Dennett argues that we gain more predictive power using this stance than either of the other two when we are dealing with sufficiently complex systems.

When Dennett wrote about these three stances, his original proposal was a pretty strong one: he argued that there’s nothing more to being a “true believer” (an entity that has a conscious mind capable of intentionality and understanding) than being interpretable to others via the intentional stance. For the purposes of discussing anthropomorphism, we don’t need to take a position on whether the thing that’s being anthropomorphized actually has these mental states (independently of, or by virtue of, our tendency to anthropomorphize it). We can instead consider the intentional stance as a useful cognition-conserving phenomenon: a strategy that saves mental effort.

More fully put, we might say humans tend to unconsciously (that is, without deciding to do so—automatically) revert to a stance that includes the attribution of intentions, desires, emotions, beliefs, and other mental states to non-human entities, in order to more easily and less effortfully predict and explain those entities’ behavior—especially when certain conditions are present.

This way of understanding anthropomorphism is consistent with Waytz, Gray, Epley, and Wegner’s more recent contention [2] that people tend to use intentional mental states to explain non-human actions, especially when under cognitive load, because these are the states that most efficiently (least effortfully) explain the behavior of apparently independent entities.

For example, when a car behaves in a perfectly predictable way in response to your actions, it seems like a mindless object. In that case, we’d most likely predict its movements based on the design stance: it’s an object that turns left when I turn the steering wheel left, and so on. But think of a time when that same car starts lurching forward, in the moment (from our perspective) nearly inexplicably. We might say things like “This car’s going crazy” or “This car hates me” or “C’mon, Magnolia, you got this, honey.” Otherwise put, we automatically revert to the intentional stance to wrap our heads around the situation and predict what might happen next.

We take that same intentional stance when interpreting the behavior of that person in the corner of the coffee shop. Rather than taking the cognitively and temporally expensive route of considering all the physical occurrences (down at that molecules-hitting-molecules level) that led to the scene in front of us, or reasoning from what we know about how the person was designed to react in a particular set of circumstances, we infer something simple: they’re sad.

Anthropomorphism is what happens when we do that to non-humans: when we infer those humanlike states and characteristics, including the experience of emotions like sadness, in non-human entities.

A still from the Heider & Simmel (1944) study animation, showing two triangles and a circle

A cool illustration of this idea comes from way back in the 1940s. In 1944, Fritz Heider and Marianne Simmel presented 34 adults with a short animated film of a large triangle, a small triangle, and a circle moving around on a screen. Participants were asked to explain what happened in the film.

Instead of describing the scene in purely geometrical terms (e.g., “The larger triangle moves to the left while the smaller triangle moves up and the circle moves in a curve around them”), participants told narratives of animate agents with intentions, emotions, and character attributes. For example: “A man has planned to meet a girl and the girl comes along with another man. The first man tells the second to go; the second tells the first, and he shakes his head. Then the two men have a fight…”; another said, “our hero does not like the interruption…he attacks triangle-one rather vigorously (maybe the big bully said a bad word)…” ([3], pp. 246-247). Without prompting, the adult participants attributed mental states and characteristics normally associated with humans (even wild backstories featuring gender-role ascriptions) to these clearly non-human entities.

But why? It’s not that the intentional description necessarily takes fewer words, and it’s not like we go around describing everything around us in terms of intentional and emotional states. What prompts us to anthropomorphize?

I’ll talk about that in my next post. For now, please feel free to write in the comments or message me with any thoughts, questions, or requests on the content so far!

[1]: Dennett, D. C. (1971). Intentional systems. The Journal of Philosophy, 68(4), 87-106; and Dennett, D. C. (1981). True believers: The intentional strategy and why it works. Herbert Spencer Lectures 1979; reprinted in A. F. Heath (Ed.), Scientific Explanation. Oxford: Oxford University Press, 1981.

[2]: Waytz, A., Gray, K., Epley, N., & Wegner, D. M. (2010). Causes and consequences of mind perception. Trends in Cognitive Sciences, 14, 383-388.

[3]: Heider, F., & Simmel, M. (1944). An experimental study of apparent behavior. The American Journal of Psychology, 57(2), 243-259.