
During the dial-up years, AOL greeted us with a cheery, “Welcome!” Today, Siri apologizes when she doesn’t understand a query. And this author’s old cell phone never failed to say “hello” and “goodbye” when it turned on and off. Who decided that we need our soulless machines to act like eager-to-please friends? It’s not like we think they’re human.

Except that we kind of do. 

For over a decade, computer scientists, psychologists, and designers have studied the many ways in which people consistently treat their computers like human beings. As early as 2000, researchers confidently concluded that “individuals mindlessly apply social rules and expectations to computers.” Even though we are fully aware that electronics are not human, they act human enough (they are interactive, they sometimes respond with words, and they perform roles often filled by humans, like answering questions and giving directions) that we draw on expectations and scripts from social interactions. Let’s look at a few examples from the work of Professors Clifford Nass and Youngme Moon of Stanford and Harvard, respectively.

One way that we treat computers like humans is by applying human stereotypes to them. In one experiment, a computer with either a male or female voice tutored research participants, who then rated their virtual tutor. Conforming to gender stereotypes and biases, participants rated the male-voiced computer as more knowledgeable about technology and the female-voiced computer as more knowledgeable about love and relationships.

In another experiment, Korean participants were presented with a hypothetical situation and then advised on the best course of action by a computer that had either a Korean voice and a Korean face on screen, or a Caucasian voice and a Caucasian face on screen. Although fully aware that the voice and picture did not represent the person who wrote the computer’s script, the research subjects were more swayed by the Korean voice and rated the advice from the “Korean computer” as more intelligent and trustworthy. The effect was as strong as when the same experiment was run with video chats with actual people of the same ethnicity.

People also bring norms of politeness and reciprocity to human-computer interactions. Just as a mediocre teacher who asks his or her students for feedback will get sugarcoated responses, Nass and Moon found that people were too polite to give honest feedback when they evaluated a middlingly helpful computer on that same computer’s screen. But when they evaluated the computer’s helpfulness on a different computer, people proved as forthcoming as students privately complaining about their terrible teacher. And just as we will generally go to greater lengths to help people who have helped us, Nass and Moon found that participants asked to “help” a computer spent much more time doing so when the computer had provided helpful search results than when it had returned bad ones.

We don’t treat computers exactly like humans. We don’t cry when they die (usually), and we don’t excuse ourselves when we suddenly leave our computers to grab a coffee. But to a surprising extent, we do apply rules and expectations from the social world to our interactions with computers. If you’ve ever wondered why designers at times seem to treat us like simple children who don’t understand that our electronics aren’t human, now you have your answer. It’s because, on some level, we don’t.

This post was written by Alex Mayyasi.