Missing: Female UX role models

Are their any “famous” women in UX? The level of Don Norman, Luke W, Alan Cooper, and Jared Spool? Where you say their names to other UXers and they know who you’re talking about?

— Katie Swindler (@KatieSwindlerUX) August 19, 2021

… yep, this is a very good question, and it points to part of the problem: missing role models for young female interaction designers (surprise: the same problem appears in science), as the perception of a male-dominated field influences their self-concept.

Q – a genderless voice

A cool and interesting project, since it is well established that voice-based digital assistants like Siri convey social roles via their voice.
When we listen to a voice-based assistant, we make implicit assumptions, drawing on the social gender roles we have learned and internalised over a long period of time – even though we know it is a machine. And since technology uses female voices by default for assistant-like roles („How can I help?“), it reinforces classical gender stereotypes, such as women being perceived as „warm“, „helpful“ and „cooperative“ rather than „dominant“, „competitive“ and „independent“ – attributes that correspond more to male gender stereotypes.

+++ https://www.genderlessvoice.com

Deus Ex Machina

„Would you like to print out your blessing?“

BlessU-2 offers automatic blessings in different languages (even in Hessisch, a German dialect) to the faithful, and yes, it even beams light from its hands!

The robot is an experiment: it was part of an art exhibition and is meant to trigger conversations about the ethics of human-machine interaction.

Even though, for me personally, the robot’s appearance falls into the uncanny valley, and I am not interested in religion, this is interesting to me – because I research the social cognitions and attributions that arise when humans interact with machines and computers. How, for example, is such a blessing from a robot perceived? Can a robot give a blessing? And if not, why can a human? What’s the difference?

Org culture and UX: Competition vs. common vision


Image: Cooperation, exemplary image. ;)

The famous Robbers Cave field experiment conducted by Muzafer Sherif (1954, 1958, 1961) investigated how and why group conflicts occur.

Sherif argued that conflicts between groups (intergroup conflicts) occur when two groups compete for limited resources (such as recognition).

The research group arranged an artificial competitive environment (which does not necessarily reflect real-life conditions) in which friction, conflict, and frustration between the groups were likely to occur. It didn’t take long for the researchers’ predictions to come true: the two groups became strong rivals and behaved with hostility towards each other. The conflicts only subsided when the researchers began to create situations in which the opposing groups had to solve problems together, thus creating a common goal and a shared vision for achieving it.

Sherif’s studies can teach us a little about organizational conflicts – e.g. silo thinking and political, ego-driven processes – which we are often confronted with in our work as experience designers and which have a direct impact on product development and, at the end of the day, on the users’ experience. The experiments also emphasize how important it is to have common goals and therefore a common vision.

The study was and is ethically questionable, and it was also biased. Nevertheless, I think it can teach us how intergroup conflicts occur and what to do about them – also in organizational structures and teams.

Reference:
Sherif, M. (1954). Experimental Study of Positive and Negative Intergroup Attitudes Between Experimentally Produced Groups: Robbers Cave Study.

UX/Design principle: Give instead of taking

If you want people to provide specific information to your organization, think of reasons in terms of how it benefits those people, not how it benefits you as a business – because we are more interested in people who are interested in us than in selfish people. For example, ask for an email address „so we can send you your receipt“, not just because marketing wants another contact.
If there is no benefit for the people, then perhaps you should not ask for that information.

Computers are social actors, again and again

"Our acceptance of seemingly autonomous voice assistants will depend on trust. And trust demands being able to distinguish when we’re talking to a human and when we’re talking to an AI.”https://t.co/xFEqIY0qE4

— Anna Dahlström (@annadahlstrom) May 12, 2018

See also: https://www.guerillagirl.de/2018/05/09/google-duplex-natural-speech-pattern-its-a-human-no-its-not/

Constant use of pop-ups and interruptions hurts your org’s credibility

Just stumbled upon this quote again:

„Our studies showed that ads that pop up in new browser windows hurt a site’s credibility. This makes sense; people usually go to Web sites with goals in mind, and they expect the site to help them accomplish those goals. Pop-up ads are a distraction and a clear sign the site is not designed to help users as much as possible. Pop-up ads make people feel used, perhaps even betrayed.“

(Brian J. Fogg, 2003. Persuasive Technology: Using Computers to Change What We Think and Do)

Even though the quote specifically mentions „ads“, this is also true for newsletter subscription reminders (which may be „ads“ in people’s perception too) and everything else that constantly pops up and thereby interrupts people.

Suppose you go into a shop just wanting to look around, with no intention of buying anything. What impression does it make if the store owner stands behind you, constantly asking whether you want to come more often, maybe tomorrow – and although you say no, she keeps coming back, asking again and again? This is, quite literally, what the pop-up window does. Will you ever step into this store again? And will you perhaps tell your friends about this experience, how crazy it was, and warn them not to go there?

And: Do you want to be this store owner?
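To make the lesson concrete, here is a minimal sketch of a less pushy pattern: ask once, and remember a „no“. It assumes a browser context; the storage key and the showNewsletterPrompt callback are hypothetical placeholders, not a real library API.

```typescript
// Hypothetical sketch: ask once, respect the answer.
// showNewsletterPrompt is a placeholder for whatever UI
// component actually renders the prompt.
const PROMPT_DISMISSED_KEY = "newsletter-prompt-dismissed";

function maybeShowNewsletterPrompt(
  showNewsletterPrompt: (onDismiss: () => void) => void
): void {
  // The visitor already said no – never interrupt them again.
  if (localStorage.getItem(PROMPT_DISMISSED_KEY) !== null) return;

  showNewsletterPrompt(() => {
    // Remember the dismissal so the next visit starts in peace.
    localStorage.setItem(PROMPT_DISMISSED_KEY, "1");
  });
}
```

The design choice matters more than the code: the state lives with the visitor, so the prompt behaves like a polite shop owner who accepts the first no.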

Designing for credibility and Stanford Website credibility guidelines

Credibility is a very important factor both in human-to-human interaction and when we interact with computers and technology. Unfortunately, it is often neglected in the latter.

Since credibility is attributed to others, it is a subjectively perceived and therefore experienced quality. Nevertheless, it is not completely arbitrary – most people in a society agree on what is perceived as credible and what is not. Key dimensions such as trustworthiness and competence, inferred from perceived cues, play a role in the evaluation of credibility – depending, of course, on culture and socialization.

Websites and apps also benefit from being perceived as credible. Credibility can refer to several things: the content, the messages sent, the tone of voice, the behavior of the site or app, the visual design, and so on. Users make (mostly unconscious) judgements about the company’s or organization’s credibility based on these factors. If a website is perceived as credible, this can tip the decision to trust your organization over another.

As a starting point for designing credible websites, you can use the Stanford Web Credibility Guidelines (see image above), which are based on solid research.

Friend or foe? Social presence is an important factor to consider in interaction/UX design

Ever yelled at your computer? Congrats! You’re in good company applying social responses to inanimate things. We humans are wired to be social creatures, and even digital products may trigger social responses – you might not even be aware of it.

Often, computing technology conveys some sort of social presence. As a result, people often respond to this technology as though it were a social entity, e.g. another human (for reference, see e.g. Reeves, B. and Nass, C., 1996. The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places. Stanford University).

Here are a few examples of how social presence can be conveyed (among other things):

Fig 01: Keepon, a social robot/beatbot for autistic children

Keepon is computing technology in the form of a robot. It conveys social presence to us humans through (obvious) physical cues like a face. It has eyes and also some physical skills: it starts dancing, moving, and making funny noises when it listens to music or when you interact with it.

Fig 02: Siri, a voice-based digital assistant

Voice-based digital assistants like Siri also convey social presence – in this case despite their very futuristic, techy appearance – through language cues, but also through social roles like gender, which we implicitly perceive. We assume a personality behind the voice and make implicit assumptions about it – even though we know it is a machine.

Fig 03: Unhelpful error message found in Moodle a while ago

But of course, technology with no obvious physical cues like faces, and no voice or spoken language at all, can also convey social presence – for example, simply by presenting an error message, i.e. by using written language/plain copy to communicate with a human. The example error message above gives no helpful hint about what exactly happened or went wrong, and it uses strange language (it says: „Error found. Error only could be removed by a programmer. Course not usable“). So I attribute this behavior to the product and the company behind it: they blame me for an error only a programmer can remove, they are unhelpful, and they don’t care about me, their customer/user.
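As a hedged counter-sketch (the wording and the function are mine, not Moodle’s actual copy or API), the same error could name what happened, make clear it is not the user’s fault, and say what to do next:

```typescript
// Hypothetical sketch of more helpful error copy – not Moodle's API.
// Compare with the original: "Error found. Error only could be
// removed by a programmer. Course not usable."
function buildCourseErrorMessage(courseName: string): string {
  return (
    `Sorry – we can't open "${courseName}" right now. ` +
    "Something went wrong on our side and our team has been notified. " +
    "Please try again later, and contact support if the problem persists."
  );
}
```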

So, in all these cases, theories from social psychology research and lessons learned from human-to-human interaction can serve as a valuable source of information and guidance when making design decisions for interactive products or any computing technology. For a better understanding, I recommend trying out how the communication would be perceived if it were a human-to-human interaction instead of a human-computer interaction. This works quite well as a role-playing game within your team: one person is the user while the other plays the computer or application, so you can pay attention to how the communication feels. Another way is simply to write down the interaction as a System/User dialogue, treating the system as a character – as in the sketch below.
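Such a written-down dialogue might look like this (the user’s lines are hypothetical; the system’s line is the real Moodle message from above):

User: opens the course page
System: „Error found. Error only could be removed by a programmer. Course not usable.“
User: „What did I do wrong? And what am I supposed to do now?“
System: (silence)

Read aloud like this, the system comes across as a character who blames you and then walks away – exactly the impression the copy should avoid.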

With that in mind, I’ll leave you with Paul Watzlawick’s “One cannot not communicate”.

The stereotype threat phenomenon or: why it’s important to create environments for women in the tech industry

Image: Ladies that UX Berlin birthday

Yesterday our Ladies that UX Berlin community celebrated its first anniversary (and it was awesome!). Time to reflect on a question that comes up often: why do we „separate“ women from men?

Here is one explanation based on social psychology, namely: stereotypes.


On cognitive dissonance and rewards

„We try to reduce the dissonance between how we think we should act and how we actually act by changing one or the other“

This is interesting because it is contrary to incentive/economic theories, which claim that the higher the reward, the more likely people are to change their minds. The dissonance effect only kicks in if there is a mismatch between my internal attitudes, values, and/or core beliefs and how I actually act.

The video shows excerpts from a classic social psychology experiment conducted by Leon Festinger and James M. Carlsmith in 1959, called „Cognitive Consequences of Forced Compliance“. Forced compliance is closely related to the theory of cognitive dissonance, which describes the mental discomfort (psychological stress) experienced by a person who simultaneously holds two or more contradictory beliefs, ideas, or values. In the study, participants who were paid $1 to tell the next participant that a boring task was fun came to rate the task as more enjoyable than those paid $20 – the small reward offered no external justification, so they reduced the dissonance by changing their attitude. This, in turn, is related to one of the main principles of Gestalt theory: the principle of good form.

The social (and ethical) implications of your digital assistant

“People talk to Siri about all kinds of things, including when they’re having a stressful day or have something serious on their mind. They turn to Siri in emergencies or when they want guidance on living a healthier life,” states the April ad for a “Siri Software Engineer, Health and Wellness” in Santa Clara, California.

Read the article here: https://qz.com/1078857/apples-siri-job-posting-seeks-engineers-with-psychology-skills-to-improve-its-counseling-abilities/

Gender stereotypes applied to machines

Among other things, studies on gender stereotyping show that dominant behavior by males tends to be positively received in Western society, since dominant men tend to be perceived as „independent“, „assertive“ and „successful“, whereas dominant women tend to be perceived as „pushy“ or „bossy“.

Nass et al. conducted a series of studies in the late 90s to determine whether computers trigger the same scripts and cognitive schemas associated with gender stereotyping. In their experiment, the hypothesis that people mindlessly apply gender stereotypes to computers was supported.

During this experiment, participants used computers for three separate sessions:
a) tutoring (via voice output), b) testing (screen-based), and c) evaluation (via voice output).

„The results supported the hypothesis that individuals would mindlessly gender-stereotype computers. Both male and female participants found the female-voiced evaluator computer to be significantly less friendly than the male-voiced evaluator, even though the content of their comments was identical. In addition, the generally positive praise from a male-voiced computer was more compelling than the same comments from a female-voiced computer: Participants thought the tutor computer was significantly more competent (and friendlier) when it was praised by a male-voiced computer, compared to when it was praised by a female-voiced computer. And finally, the female-voiced tutor computer was rated as significantly more informative about love and relationships compared to the male-voiced tutor, while the male-voiced tutor was rated as significantly more informative about computers.“

(Nass et al., 1997).

Interaction & the social function of intelligence

„Like chess, a social interaction is typically a transaction between social partners. One animal may, for instance, wish by his own behaviour to change the behaviour of another; but since the second animal is himself reactive and intelligent the interaction soon becomes a two-way argument where each ‘player’ must be ready to change his tactics — and maybe his goals – as the game proceeds. Thus, over and above the cognitive skills which are required merely to perceive the current state of play (and they may be considerable), the social gamesman, like the chess player, must be capable of a special sort of forward planning. Given that each move in the game may call forth several alternative responses from the other player this forward planning will take the form of a decision tree, having its root in the current situation and growing branches corresponding to the moves considered in looking ahead at different possibilities. It asks for a level of intelligence which is, I submit, unparalleled in any other sphere of living. There may be, of course, strong and weak players – yet, as master or novice, we and most other members of complex primate societies have been in this game since we were babies.“

Nicholas K. Humphrey. The social function of intellect. In P. P. G. Bateson and R. A. Hinde, editors, Growing Points in Ethology, pages 303–317. Cambridge University Press, 1976.

The cute, the bad and the evil?


Is your cute-looking, nice-sounding assistant (or robot/chatbot) really as cute as it seems?

This article on FastCoDesign gives some food for thought.
Subtle social cues associated with cuteness, friendliness, and so forth – originally designed with good intentions to give you a „better experience“ – may fool you through a lack of transparency and can thus lead to queasy feelings rooted in cognitive dissonance.

[…] „These new affordances don’t show users what they can do with a technology, they describe what a technology won’t do to users: They won’t hurt us, they won’t spy on us, they won’t reveal our secrets. They are literally user-friendly. Yet could there be hidden costs for users when AI acts like a friend? […] We need design that doesn’t just show people how to use technology, but shows how their technology is using them.“

Photo: My sweet, little, friendly old beatbot „Keepon“, which for sure does not spy on me (it was developed for kids on the autism spectrum). Isn’t it cute? :)
