Conversations with Machines

Happy 2020 everybody! 🎉 New Year’s starting with the *big* questions and a great (ok, last year’s) podcast episode: What makes a human a human? What makes a machine a machine?

„Weizenbaum had programmed ELIZA to interact in the style of a psychotherapist and it was pretty convincing. It gave the illusion of empathy even though it was just simple code“

Listen here to „The ELIZA Effect“:

Also, if you haven’t seen it already, I highly recommend watching „Plug and Pray“.
Official trailer:

Image source: Wikipedia

Q – a genderless voice

Cool and interesting project, as it is well known that voice-based digital assistants like Siri convey social roles via their voice.
When we listen to a voice-based assistant, we make implicit assumptions – like the social gender roles we have learned and internalised over a long period of time – even though we know it is a machine. And as technology uses female voices by default for *assistant*-like roles („How can I help?“), it reinforces classical gender stereotypes, such as women being perceived as „warm“, „helpful“ and „cooperative“ rather than „dominant“, „competitive“ and „independent“ – traits which correspond more to male gender stereotypes.


Deus Ex Machina

„Would you like to print out your blessing?“

Bless U-2 offers automatic blessings in different languages (even in Hessisch, a German dialect) to the faithful and yes, it even beams light from its hands!

The robot is an experiment: it was part of an art exhibition and is meant to trigger conversations about the ethics of human–machine interaction.

Even though, for me personally, the design/appearance of the robot falls into the uncanny valley and I am not interested in religion, the project is interesting to me – as I research the social cognitions and attributions that arise when humans interact with machines/computers. How, for example, is such a blessing from a robot perceived by humans? Can a robot give a blessing? And if not, why can a human give one? What’s the difference?

Computers are social actors, again and again

"Our acceptance of seemingly autonomous voice assistants will depend on trust. And trust demands being able to distinguish when we’re talking to a human and when we’re talking to an AI.”

— Anna Dahlström (@annadahlstrom) May 12, 2018

See also:

Google Duplex’ natural speech pattern: It’s a human? No, it’s not!

I think we are now going really deep into the uncanny valley. It is of psychological and also sociological relevance that we know whether we are speaking to another human or to a machine. The point is, it may be perceived as cheating – as playing with trust.

Permanent use of pop-ups and interruptions hurts your organisation’s credibility

Just stumbled again upon this quote:

„Our studies showed that ads that pop up in new browser windows hurt a site’s credibility. This makes sense; people usually go to Web sites with goals in mind, and they expect the site to help them accomplish those goals. Pop-up ads are a distraction and a clear sign the site is not designed to help users as much as possible. Pop-up ads make people feel used, perhaps even betrayed.“

(Fogg, B. J. (2003). Persuasive Technology: Using Computers to Change What We Think and Do)

Even though the quote specifically mentions „ads“, this is also true for newsletter subscription reminders (which may also be „ads“ in people’s perception) and other things that constantly pop up and therefore interrupt people.

Suppose you go into a shop and just want to look around, with no intention of buying anything. What impression does it make if the store owner stands behind you, constantly asking whether you want to come more often, maybe tomorrow – and although you say no, she keeps coming back, asking again and again? This is what the pop-up window does, literally. Will you ever step into this store again? And will you perhaps tell your friends about this experience and warn them not to go there?

And: Do you want to be this store owner?
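The store-owner analogy translates directly into interaction logic. Here is a minimal, hedged sketch of the opposite behaviour: only surface a prompt after the visitor has shown real engagement, and never ask again after a dismissal. The function name and thresholds are my own illustrative assumptions, not from any real framework.

```python
# Hypothetical sketch: gate a newsletter prompt on engagement instead of
# popping it up on arrival. Names and thresholds are illustrative only.

def should_show_prompt(pages_viewed: int,
                       seconds_on_site: int,
                       previously_dismissed: bool) -> bool:
    """Show the prompt only to engaged visitors who never said no."""
    if previously_dismissed:
        # "No" means no - the store owner stops asking.
        return False
    # Wait until the visitor has actually used the site for their own goal.
    return pages_viewed >= 3 and seconds_on_site >= 120

# A first-time visitor who just arrived is left alone:
print(should_show_prompt(1, 10, False))   # False
# An engaged visitor may politely be asked - once:
print(should_show_prompt(4, 300, False))  # True
```

The key design choice is the early return on `previously_dismissed`: respecting a „no“ permanently is exactly what separates the helpful shop assistant from the pushy one.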

Designing for credibility and Stanford Website credibility guidelines

Credibility is a very important factor both in human-to-human interaction and when we interact with computers/technology. Unfortunately, it is often neglected in the latter.

As credibility is attributed by others, it is a subjectively perceived and therefore experienced quality. Nevertheless, it is not completely random: most people of a society agree on what is perceived as credible or not. Key dimensions such as trustworthiness and competence, based on perceived cues, play a role in the evaluation of credibility – depending, of course, on culture and socialization.

Websites/apps also do well when perceived as credible. Credibility can refer to several things: the content, the messages sent, the tone of voice, the behavior of the site/app, the visual design, etc. Users make (mostly unconscious) judgements about the company’s/organization’s credibility based on these factors. If a website is perceived as credible, this can tip the decision to trust your company/organization over another.

As a starting point for designing credible websites, you can use the Stanford Credibility Guidelines (see image above), which are based on solid research.

Friend or foe? Social presence is an important factor to consider in Interaction /UX Design

Ever yelled at your computer? Congrats! You’re in good company applying social responses to inanimate things. We as humans are wired to be social creatures. And even digital products may trigger social responses and you might not even be aware of it.

Often, computing technology conveys some sort of social presence. As a result, people respond to this technology as though it were a social entity, e.g. another human (for reference, see Reeves, B. and Nass, C. (1996). The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places. Stanford University).

Here are a few examples of how social presence is conveyed (among other things):

Fig 01: Keepon, a social robot/beatbot for autistic children

Keepon is a computing technology in the form of a robot. It conveys social presence for us humans through (obvious) physical cues like a face. It has eyes and also some physical skills: it starts dancing/moving and making funny noises when it hears music or when you interact with it.

Fig 02: Siri, a voice based digital assistant

Voice-based digital assistants like Siri also convey social presence – in this case despite their very futuristic and techy appearance – through language cues, but also through social roles like gender, which we implicitly perceive. We make implicit assumptions and infer a personality behind the voice – even though we know it is a machine.

Fig 03: Unhelpful error message found in Moodle a while ago

But of course, technology with no obvious physical cues like faces, and no voice/spoken language at all, can also convey social presence – for example simply by presenting an error message, i.e. by using written language/plain copy to communicate with a human. The example error message above does not provide any helpful hint about what exactly went wrong, and it uses strange language (it says: „Error found. Error can only be removed by a programmer. Course not usable.“). So I will attribute this behavior to the product/company behind it: they are blaming me for an error that only a programmer can remove, so they are unhelpful and they don’t care about me/their customers/users, etc.
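To make this concrete, here is a small, hedged sketch of how such a failure could be worded instead: say what happened, take the blame off the user, and offer a next step. The function name and wording are my own illustration, not Moodle’s actual code.

```python
# Illustrative sketch (not Moodle's real code): turning an internal failure
# into an error message that does not blame the user.

def user_facing_error(course_name: str, reference_id: str) -> str:
    """Admit the fault, reassure the user, and offer a concrete next step."""
    return (
        f"Sorry - something went wrong on our side while loading "
        f"'{course_name}'. Your data is safe and nothing you did caused "
        f"this. Please try again in a few minutes, or contact support "
        f"and mention reference {reference_id}."
    )

print(user_facing_error("Statistics 101", "ERR-4711"))
```

Even this tiny change shifts the attributed „personality“ of the product from blaming and dismissive to accountable and helpful.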

So, in all these cases, theories from social psychology research and lessons learned from human-to-human interaction can serve as a valuable source of information and guidance when making design decisions for interactive products or any computing technology. To better understand this, I recommend trying out how the communication would be perceived if it were a human-to-human interaction instead of a human–computer interaction. This works quite well as, e.g., a role-playing game within your team: one person is the user while the other plays the computer or application, so you can pay attention to how the communication feels. Another way is simply to write down the interaction as a System/User dialogue and behavior, considering the system as a character.
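The „write it down as a dialogue“ exercise can be sketched very simply, e.g. as a list of (speaker, utterance) pairs that the team reads aloud. The structure and the example lines below are just one possible illustration of the technique, not a prescribed format.

```python
# Sketch: an interaction written down as a System/User dialogue,
# treating the system as a character with a voice of its own.

dialogue = [
    ("User",   "I'd like to reset my password."),
    ("System", "Sure - I just sent a reset link to your e-mail address."),
    ("User",   "I didn't get anything."),
    ("System", "Sometimes it takes a minute. Want me to send it again?"),
]

# Read the script aloud in the team and ask: would a helpful human say this?
for speaker, line in dialogue:
    print(f"{speaker}: {line}")
```

Playing the „System“ lines as a person quickly exposes wording that would feel rude, evasive or robotic in a face-to-face conversation.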

With that in mind, I’ll leave you with Paul Watzlawick’s “One cannot not communicate”.

The social (and ethical) implications of your digital assistant

“People talk to Siri about all kinds of things, including when they’re having a stressful day or have something serious on their mind. They turn to Siri in emergencies or when they want guidance on living a healthier life,” states the April ad for a “Siri Software Engineer, Health and Wellness” in Santa Clara, California.

Read the article here:

Gender stereotypes applied to machines

Amongst other things when it comes to gender stereotyping, studies show that dominant behavior by males tends to be positively received by western society since dominant men tend to be perceived as “independent”,“assertive“ and „successful“, whereas dominant women tend to be perceived as “pushy” or “bossy”.

Nass et al. did a series of studies in the late 90s to determine whether computers trigger the same scripts and cognitive schemas associated with gender stereotyping. In one experiment, their hypothesis that people mindlessly apply gender stereotypes to computers was supported.

During this experiment, participants used computers for three separate sessions:
a) tutoring (via voice output), b) testing (screen-based), and c) evaluation (via voice output).

„The results supported the hypothesis that individuals would mindlessly gender-stereotype computers. Both male and female participants found the female-voiced evaluator computer to be significantly less friendly than the male-voiced evaluator, even though the content of their comments was identical. In addition, the generally positive praise from a male-voiced computer was more compelling than the same comments from a female-voiced computer: Participants thought the tutor computer was significantly more competent (and friendlier) when it was praised by a male-voiced computer, compared to when it was praised by a female-voiced computer. And finally, the female-voiced tutor computer was rated as significantly more informative about love and relationships compared to the male-voiced tutor, while the male-voiced tutor was rated as significantly more informative about computers.“

(Nass et al., 1997).

The cute, the bad and the evil?

Is your cute-looking, nice-sounding assistant (or: robot/chatbot) really as cute as it seems to be?

This article on FastCoDesign gives some food for thought.
Subtle social cues associated with cuteness, friendliness and so forth – originally designed with good intentions of giving you a „better experience“ – may fool you because of a lack of transparency, and can thus lead to queasy feelings based on cognitive dissonance.

[…]“These new affordances don’t show users what they can do with a technology, they describe what a technology won’t do to users: They won’t hurt us, they won’t spy on us, they won’t reveal our secrets. They are literally user-friendly. Yet could there be hidden costs for users when AI acts like a friend? […] We need design that doesn’t just show people how to use technology, but shows how their technology is using them.“

Photo: My sweet, little, friendly old beatbot „Keepon“, which for sure does not spy on me (it was developed for children with autism). Isn’t it cute? :)

Poncho: weather with a personality

Poncho is a weather service with a personality that gives you the weather every day in a fun way. I first tried the chatbot on Facebook Messenger. I was really surprised how natural our talk seemed. He definitely passes the Turing test as long as you keep talking to him within his comfort zone (weather & apps). :D

Who Do We Become When We Talk to Machines?

„Can a broken robot break a child?“

The awesome Sherry Turkle on conversation in the digital age, and how replacing face-to-face communication with smartphones (or even growing up in the company of digital assistants, smart toys and bots, and connecting emotionally with them) may diminish people’s capacity for empathy.

On Personality in Computer Interfaces and Human Computer Interaction

„The individual’s interaction with media – like computers, television and other media – is fundamentally social and fundamentally natural“

Additionally, I’d like to add another quote from Reeves and Nass’s book „The Media Equation“, which I find an important one:

„Personality can creep in everywhere – the language in error messages, user prompts, methods for navigating options and even choices of type font and layout. Even if the design domain is a twelve-character LCD panel, personality is relevant.“

Reeves, B. and Nass, C. (1996) The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places. P. 97

The stereotype content model: A social psychology theory as a framework for brand perception and user experience work



Social psychology theories like the stereotype content model can have a huge impact on your user experience work, as more and more interactions between companies and customers take place through a „digital window“ called a user interface. The perception, and therefore the behaviour, of digital interfaces has become more and more important to brand perception. This can serve as a basis for improving your customer loyalty, as a growing body of research suggests that people have emotional relationships with brands that resemble relations between people.
