Will we like tech more when it's more like us?

Anthropomorphism: Why understanding it is more important than ever



In 2017 alone, Amazon’s Alexa received over one million marriage proposals. You might imagine that most of these were jokes, you know—Alexa, will you marry me? Ha ha—but you might also agree it’s interesting that we seem far less likely to ask this—even as a joke—of the average dishwasher or calculator.

The public grieved for NASA’s Curiosity rover when it was reported to hum “Happy Birthday” to itself, all alone on Mars. When NASA’s Cassini probe ended its 13-year mission orbiting Saturn, sympathetic messages and reported tears rolled in from scientists and non-scientists alike. Something similar happened when the Mars rover Opportunity (“Oppy”) went dark. We mourned the “deaths” of robots.

In 2019, a man abandoned his search for human female companionship and opted instead for an anatomically correct humanoid robot, complete with the ability to talk, smile, move her head, and receive sexual intimacy. Spending his days with “her,” he reported that he could no longer imagine life without her, that he had fallen in love, and that he considered “Emma” his “robot wife.”

During an overnight shift, a Giant supermarket employee in Pennsylvania added a pair of large googly eyes to a robot programmed to detect spills and debris in the aisles. Executives at the global grocery company loved the addition and made the googly eyes a standard feature on the company’s nearly 500 robots in the United States. The robot was dubbed “Marty,” and one community held a series of anniversary parties to celebrate its robot’s first 12 months of service, to which adults and children brought gifts, including well-wishing cards and a can of WD-40.

And most recently, discussion has flared about Blake Lemoine, the Google engineer who—after chatting with LaMDA, a conversational AI model, about literature, religion, and personhood—claimed that the model is sentient.

 

Anthropomorphism is the attribution of human mental states and characteristics to non-human entities, including AIs, robots, and other machines. Anyone who has spent any real time with me in the past five years has likely heard me talk about this phenomenon and why it matters today, especially as we introduce robots into our society and debate the role we should give them.

It's all the more interesting because anthropomorphism seems to start at a level below conscious thought, present from early childhood. We might see an object that seems to respond to the environment, and we’ll follow its “gaze” when it turns suddenly toward something. We might see geometric shapes moving around a screen in a certain pattern and describe what happened with elaborate personality ascriptions. We might give our Roomba vacuum cleaner a human name, say “excuse me” if we cross its path while it’s trying to clean, introduce it to family and friends, and even clean for it, “so it can get a rest.”

[Photo: an Anki Vector robot that could be said to look “happy,” with only the top half of the eyes on its screen visible. I conducted the experiment reported in my thesis with the help of one of Anki’s Vector robots, and I took this photo in April 2020.]

Our tendency to anthropomorphize seems to have some pretty serious implications for how we relate to the machine underneath, including how much we like it, how we interact with it, how much we trust it, and even whether we regard it as a moral agent or moral patient.

Last year, I completed my undergraduate thesis on the topic, which included the results of a new empirical study. At the encouragement of my academic advisor, Jonathan Phillips (to whom I will forever be grateful), I have since submitted the results for publication and presented them at the Cognitive Science Society’s CogSci 2021 global conference. I loved the process, especially the opportunity to discuss these issues with researchers from around the world.

In my next few posts here, I’d like to share more about what we understand anthropomorphism to be, why it’s worth paying attention to, and why I think it’s more important than ever for us to deepen our understanding of how it works.

 

But first: Who should care about this, and why?

Short answer: Fully acknowledging my bias, given my own interest in and time spent with this topic, I truly think the answer here is “everyone.” More precisely, I’d hazard that all those who participate in our increasingly tech-rich society have a stake in this topic, its implications, and its associated questions.

Longer answer: This topic will have direct implications for those in certain professions and roles, but it will ultimately be relevant to everyone who participates in our increasingly tech-rich society.

In particular:

  • Designers, product managers, and developers

  • Those in branding, marketing, advertising

  • Teachers, educators, and trainers

  • Those involved in regulating and determining the legal status of these technologies

…people in these roles have a distinct, pronounced influence over the forces at play here. I would argue that with their particular ability to shape those forces comes a particular need to understand them.

To effectively set the stage for trust, enjoyment, and use of a product, including the sort of interactions users are likely to have with it, designers, product managers, and developers need to understand the visual cues (like eyes and other suggestions of a humanlike face or body), motion cues (like contingent or interactive motion in context), and interactions (like those based on NLP) that trigger certain perceptions of virtual assistants, personal robots, and other algorithms and machines, even vacuum cleaners. The bottom-up cues that so powerfully prompt us to perceive and treat technology differently are in the hands of those who design and develop these products, and with that responsibility comes a compelling need to understand these forces.

Those in branding, marketing, advertising, education, and professional training, and anyone else involved in influencing how a product is officially described, also have a critical role to play here. As it turns out, the way we talk about these products shapes how people perceive them and expect to interact with them. (More on this later.)

Lawyers, policymakers, and politicians will also play a critical role in determining the conceptual and legal analogies that impact our collective understanding of the place we give these machines in our society. 

Finally—I’d suggest again that everyone has a stake in this. The issue of AIs and robots that seem more like us, more human, isn’t going away. After all—if humans are social, emotional creatures, what better, more convenient, more relatable way for our technology to interact with us than in social and emotional ways? What better way to be more familiar, less intimidating, and easier to use than to seem more like us, to interact with us as another human might?

Indeed, it seems the issue of anthropomorphic AIs, robots, and other machines is only becoming more prevalent in our daily lives.

Take sex robots, for example. A step apart from sex toys (note the shift in metaphor!) in a sex tech industry worth $30B and rising, these dolls look like humans (usually, human women): idealized, fantasized, fetishized. They move like humans; some even talk like humans; some even “struggle” or shy away from sexual advances with a “resistance” setting. Some manufacturers are working toward designs that can also conduct household chores. They’re already on the market. There are “brothels” featuring them in Europe (not yet in the US; in 2018, one was blocked from opening in Houston, Texas). In the US, a Congressional bill to ban the import, distribution, and possession of sex dolls that look like children, the CREEPER (Curbing Realistic Exploitative Electronic Pedophilic Robots) Act, passed the House in 2018 and was introduced in the Senate in 2020 (it has yet to pass). This issue, and the many questions that come with it, like “Do you think it would be cheating if your significant other used or engaged with a sex robot?” (Is it just another sex toy? Or is it more like a potentially emotionally engaging companion?), is already here, developing slowly and surely in the background.

Of course, there’s also Alexa, Siri, and the great multitude of other “AI” virtual or robotic assistants already out on the popular market. They’re already here, too. And there are lots of interesting questions that come with how we anthropomorphically design and describe them. Some are questioning what it might mean, for example, for so many of them to have default female voices and to be widely, colloquially described using she/her pronouns. What might these design and description choices do to our expectations of and interactions with these technologies, whether desirable, undesirable, or seemingly random, like all those marriage proposals Alexa received? And what might they do to our interactions with other humans? Might these choices accustom us to seeing human women as obedient and servile, and to directing, speaking down to, or comfortably insulting human women? (To this last set of questions, the UN said yes in 2019, warning that these personal assistants risk promoting gendered stereotypes.)

As participants in society and as users of technology, we all have reason to want to understand the forces at play here—to better grasp our own behaviors and to become more informed consumers, voters, and educators of the next generation—and of each other, through everyday example and dialogue. 

We all have reason to care about this topic, especially as technology continues to play an increasing role in our everyday lives—and as we meanwhile lay the groundwork for the place we will give these machines in our social, ethical, functional, and legal frameworks.

In a world that’s only seeing more tech enmeshed in our daily lives, from social media to refrigerators to the algorithms that take an ever-increasing role in all of it, a phenomenon that changes how we relate to our technology seems vital to consider. Especially when that phenomenon can have deep consequences, influencing affinity, trust, perceptions of responsibility (even moral responsibility), and the sort of interactions we expect to have with the tech… and especially when that same phenomenon may influence how we relate to each other as humans.

For anyone even a little interested in how we interact with technology, and how we can design for interactions that are good for humans—this one’s for you.

 

In this series, I hope to hit:

  • Why we anthropomorphize—and why it seems to happen so automatically, even without our realizing it

  • What prompts us to anthropomorphize, and what factors make it even more likely

  • What we attribute when we anthropomorphize—and what that tells us about what we think it is to be human

  • Why it matters, or a response to “So what? It’s not like I actually think my car hates me” — and more about Blake Lemoine

  • Some of the interesting, thorny implications of anthropomorphism, focusing in particular on how it may impact how much we trust machines and regard them in our moral frameworks

  • An interlude thinking about how an “anthropomorphic effect” might impact macrotrends in tech adoption through a system dynamics lens

  • The power of language and why metaphors matter

  • Some more color on my favorite corners of this issue, including detail on the empirical study I ran and the results I published: why they’re interesting, what we still have to learn, and why it matters

 

And as always—I welcome any questions, comments, or requests for where to take the discussion!