“Design Principles help teams with decision making. A few simple principles or constructive questions will guide you towards making appropriate decisions”
Source: Dot Zero magazine, Issue 1
Imagine you ask your digital assistant (or a conversational interface of your choice) where you can get a “Berliner” right now. What will happen? How could a conversational interface distinguish between semantically ambiguous terms – terms with multiple meanings – and how could it deal with context, in the example below: regional language differences and people’s natural speech habits?
For example, how could a conversational interface (like a voice interface) know what a person who lives in Berlin means when she uses the term “Pfannkuchen”, which is in turn called “Berliner” in other regions of Germany? Other examples are “Tempo” (the brand vs. the actual product, a paper handkerchief) or “Hamburger” (a person who lives in Hamburg vs. the food).
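As a toy illustration of the problem: disambiguation from a region hint could be sketched like this (the lexicon, region names and senses below are made up for this post, not any real assistant’s API):

```python
# Toy sketch of region-aware word-sense disambiguation.
# All entries are illustrative examples from the post, not a real lexicon.
SENSES = {
    "pfannkuchen": {
        "berlin": "jam-filled doughnut (called 'Berliner' elsewhere)",
        "default": "pancake",
    },
    "tempo": {
        "default": "paper handkerchief (a brand name used generically)",
    },
}

def resolve(term, region=None):
    """Pick the most plausible sense of `term`, given an optional region hint."""
    senses = SENSES.get(term.lower())
    if senses is None:
        return term  # unknown term: pass it through unchanged
    if region and region.lower() in senses:
        return senses[region.lower()]
    return senses["default"]

resolve("Pfannkuchen", region="Berlin")  # the Berlin-specific sense
resolve("Pfannkuchen")                   # the default sense: "pancake"
```

Of course, a real system would need far more context than a single region flag – which is exactly the point of the post.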
People have a lot of cognitive work to do if they just want to get something done quickly. They have to adapt to the language of the machine instead of simply talking the way they are used to. So as usual, the machines dictate how we should talk to them. I often hear the terms “human-like” or “natural” interaction, but to be honest – sometimes it feels more and more like going back to a command line interface. :)
I am delighted to announce our next (and this year’s last) Ladies that UX Berlin meetup. This time we will cover an often overlooked but very important topic in the field of User Experience – namely that UX is, quite literally, a team sport.
Ladies that UX Berlin, Tuesday, December 12, 2017; 6:30-9:30PM.
This month’s event will be hosted by Zalando. The location is: The Hub / Zalando HQ, Tamara-Danz-Straße 1, 10243 Berlin. You can easily get there via U Schlesisches Tor or S+U Warschauer Straße.
Hertje Brodersen, a freelance experience strategist, will talk about teamwork anti-patterns and why, if we want to optimize our services, we shouldn’t skip optimizing our teams first. Ayse Naz Pelen, a digital product designer at Zalando, will discuss whether and how design can help shape team culture.
HOW TO SIGN UP
Please RSVP here: https://www.meetup.com/de-DE/LTUX-Berlin/events/245557699/
Important note: RSVP registration on meetup.com will close 26 hours before the event (on Dec 11, 4PM). To get in, you need to have RSVP’d via meetup.com; at the event itself, please bring your ID – you will then receive a wristband to get in! Thanks!
Please let us know if you RSVP’d but can no longer make it. Our events usually fill up quickly and we keep a waiting list. Thank you!
COME JOIN OUR COMMUNITY
Are you passionate about a particular UX / design topic? Why not give a short talk on it at one of next year’s events? It’s a great place to get started and train your speaking skills in a very welcoming environment! If so, or if you would be interested in hosting one of our upcoming events, please make sure to reach out.
I am fed up with the way we are all being frogmarched towards driverless cars. What I really want is carless cities.
— Owen Barder (@owenbarder) November 19, 2017
Machine learning is all around us – on our phones, social networks, speech recognition systems and more – but what is it exactly, how does it work and why is it useful? And what else will it be doing for us in the future?
Wonderful video by OxfordSparks.
Yesterday our Ladies that UX Berlin community celebrated its first anniversary (and it was awesome!). Time to reflect on a question that comes up often: why do we “separate” women from men?
Here is one explanation based on social psychology, namely: stereotypes.
A bad error message is like trying to order dinner from a waiter who says, “Food cannot be ordered,” then stares silently at you.
— Kim Goodwin (@kimgoodwin) November 14, 2017
“[…] After all, fitting in as a machine-readable cog into the database of ideas gets you a faster start. But it’s also the best way to be ignored, because you’ve chosen to be one of the many […]”
“We are permanently scrolling and checking whether somebody has updated their status. The problem: the design of many apps deliberately fuels this craving for information. What counts is time spent in the app. Do we need higher ethical standards in the design of apps & co.?”
I experienced something similar myself a few years ago, and it genuinely startled me: somewhere on an advertising banner, a red dot appeared out of nowhere, similar to the one that shows up when you get a notification on Facebook. “Oh, a message – quick, let’s see what’s new,” my brain absurdly reported (there was no message!). It is, by the way, well documented that dopamine (a neurotransmitter, in everyday language often called the “happiness hormone”) is released in response to notifications. That means we – or rather the reward centre in our brain – react like Pavlov’s dog to the bell: not literally drooling, but close to it, responding physically to a red dot as a triggering stimulus that we have been continuously conditioned to for about ten years.
What follows is a piece well worth listening to about how designing for business can make users addicted. The challenge we as “user experience designers” are confronted with again and again: who are we actually designing for here? The “business” or the users? To answer that, one has to realise that business goals often diverge widely from user goals. And this, by the way, is not only an ethical problem – it is also a problem of trust, which in turn can have an enormous (namely negative) impact on the very business goals one is so keen to reach. But that is a topic of its own, which you can begin to read about here.
The core question that remains: how do we reconcile the required business goals with user goals in a way that is ethically defensible, so that we stay with the human-centred design that we as (user) experience designers supposedly champion?
Edit: Other topics entirely are how “user generated content” fuels the social comparison processes we need to form our SELF – that is, our social identity – and how, as a consequence, depressive moods up to full-blown depression can increase (“man, my life sucks compared to persons x, y and z”); the influence of “social capital” on one’s own life, above all for children and adolescents, who often do not yet have a stable personality (keywords: bullying and exclusion), but also for older people who have no access to the benefits of social networks on the web (the tech barrier), yet might need them; as well as the effects on career aspirations and fatal mix-ups (“Mum, I want to become an influenza”).
We introduced Ladies that UX Berlin to our attendees in November 2016, and most of them are regulars at our meetups <3
The Ladies that UX Berlin birthday edition meetup will take place on Tuesday, November 21, hosted by ResearchGate. The location is Invalidenstr. 115, 10115 Berlin. Doors open at 6:30, and this time there will be two talks from members of our orga team: Živilė Markevičiūtė, a UI/UX designer at TIGNUM, will talk about empathy in design. The other talk, by Anna-Lena König, project/product manager at Evenly, covers 7 aspects that improve the UX of your app.
A big thank you to our lovely community and to our sponsors – especially to ResearchGate for hosting and sponsoring our birthday event. It would not be possible without your support and help.
Make sure to RSVP here if you are close by! See you there!
Contact me if you have a topic you’re passionate about and would like to present at an upcoming event, or if you’d be interested in collaborating on or hosting one of our upcoming events.
Photo credits of the Nov 2016 meetup: evenly.io
“We see the forest before the trees”
– Navon, D. (1977)
How do we perceive objects? Classic theories of object recognition (so-called feature theories) often claim that we first process specific features/details of something, followed by more general processing. There is strong evidence that it is the other way round: our visual system is built so that general (or global) processing typically comes first and is quicker than detailed (or local) processing. For example, words are generally recognized before their individual letters. However, this effect does not always occur: it can be manipulated by instructing people to focus more on either global or local items – or simply by placing the smaller (local) items of an object, such as the letters of a word, further apart or making them bigger.
This coarse-to-fine account of visual perception is also supported by neuroscience. Several studies (Musel et al., 2012; Flevaris et al., 2014; Livingstone, 2000) found that visual processing develops over time – even if it seems instantaneous to us. In the following video, neurobiologist Dr. Margaret Livingstone demonstrates how a focus on so-called spatial frequencies (the claim that the visual cortex operates on a code of spatial frequency, not on a code of straight edges and lines) could help explain why the smile of the Mona Lisa is so elusive.
A screenshot from the video shows the stages in which we perceive objects according to the theory: very low spatial frequencies (left) for coarse or global processing, low spatial frequencies (centre), and high spatial frequencies (right) for detailed, local processing.
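The spatial-frequency idea is easy to play with in code: a simple low-pass filter (here a plain box blur – the function below is my own sketch, not taken from any of the studies) removes high spatial frequencies and leaves only the coarse, global structure of an image.

```python
# Illustrative sketch: removing high spatial frequencies from a tiny
# "image" with a box blur. Only the coarse (global) structure survives,
# loosely mirroring the low-spatial-frequency channel described above.
import numpy as np

def box_blur(image, radius=1):
    """Mean-filter each pixel over its (2*radius+1)^2 neighbourhood."""
    padded = np.pad(image, radius, mode="edge")
    out = np.zeros_like(image, dtype=float)
    size = 2 * radius + 1
    for dy in range(size):
        for dx in range(size):
            out += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    return out / size ** 2

# A sharp edge (a high-spatial-frequency feature) ...
img = np.zeros((5, 5))
img[:, 3:] = 1.0
blurred = box_blur(img, radius=1)
# ... becomes a gradual ramp: the fine detail (the hard edge) is gone,
# but the global light/dark layout of the image remains.
```

Run the blur repeatedly and you approximate the leftmost, “very low frequency” panel of the screenshot: only the rough arrangement of light and dark survives.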
Implications for design:
So why could these findings be interesting for (interface) design? They can help us focus on simplicity, e.g. in icon design: people often have a much harder time recognizing an icon that is designed with too many details, which may be perceived as distracting or even somewhat unpleasant – simply because it takes us longer to process the details (= more cognitive work to do).
It may also matter for visual hierarchy: the arrangement of all the elements in a design, which makes sure one can find the way to the information needed and which separates important from not-so-important information.
Navon, D. (1977). Forest before trees: the precedence of global features in visual perception. Cogn. Psychol. 9, 353–383.
Hegdé, J. (2008). Time course of visual perception: Coarse-to-fine processing and beyond. Progress in Neurobiology, 84, 405–439.
Flevaris, A.V., Martinez, A. & Hillyard, S.A. (2014). Attending to global versus local stimulus features modulates neural processing of low versus high spatial frequencies: An analysis with event-related brain potentials. Frontiers in Psychology, 5 (Article 277).
Image source: Forest Wide by Joe Hart; CC BY 2.0
Although we know that people normally don’t change their size that quickly, this knowledge – which we normally use to estimate size and distance – is simply overridden by the cues suggesting the walls meet at right angles, creating the visual illusion of a room with parallel walls.
Google AutoDraw is pretty cool!
“AutoDraw is a new kind of drawing tool that pairs the magic of machine learning with drawings from talented artists to help everyone create anything visual, fast.”
Top-down and bottom-up processing are the two well-known approaches in cognitive psychology to how we interpret information – and as humans, we do both.
Please read the sentence below:
Now read again.
Did you skip over the extra “the”, or did you read the line for what it is?
Let’s be honest: almost all of us skipped over the extra “the”, and the reason is that we use top-down processing, one of the two ways we interpret information about the world.
Top-down or conceptually-driven processing simply means that processing is heavily influenced by the individual’s expectations, which are based on previous knowledge, rather than by the stimulus itself. The latter is called bottom-up or data-driven processing, which means interpreting the stimulus solely for what it is.
There are many examples showing that we use top-down processing in combination with bottom-up processing – for example, read the following sentence:
“It dseno’t mtaetr in waht oerdr the ltteres in a wrod are, the olny iproamtnt tihng is taht the frsit and lsat ltteer be in the rghit pclae”.
Fun fact, which also illustrates that we use top-down processing: if your English is weak (or you’re not a native speaker), you might have a much harder time reading this sentence than someone who speaks English fluently, who could read it without hesitation. If you are German, you might just as fluently read a similar example in German:
“Gmäeß eneir Sutide eneir elgnihcesn Uvinisterät ist es nchit witihcg, in wlecehr Rneflogheie die Bstachuebn in eneim Wrot snid, das ezniige, was wcthiig ist, ist, dass der estre und der leztte Bstabchue an der ritihcegn Pstoiion snid”
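For the curious: such scrambled sentences are easy to generate. Here is a minimal sketch (my own, not from the examples above) that shuffles only the interior letters of each word, keeping the first and last letter in place:

```python
# Minimal sketch: shuffle only the interior letters of each word,
# keeping the first and last letter where they are (punctuation inside
# a word is shuffled along with the letters).
import random

def scramble_word(word, rng):
    if len(word) <= 3:
        return word  # nothing to shuffle
    interior = list(word[1:-1])
    rng.shuffle(interior)
    return word[0] + "".join(interior) + word[-1]

def scramble_sentence(sentence, seed=1):
    rng = random.Random(seed)  # seeded, so the output is reproducible
    return " ".join(scramble_word(w, rng) for w in sentence.split())

scramble_sentence("the order of the letters hardly matters")
```

Because the first and last letters (and word length) are preserved, top-down processing usually has enough cues to reconstruct each word.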
So we see that how we process information depends on previous knowledge and expectations, and that this also guides where our attention goes (there are several other influences, such as priming and other biases, but we’ll leave those aside for now). For example, there is the famous young woman / old lady illusion, one of the best examples of perceptual expectancy:
Whether we see the old woman or the young lady is due to interindividual differences in our top-down processing.
We can literally say that we perceive what we expect and know —if there is no prior knowledge of something, the tendency to overlook details is rather high because we have no (strong) association with something meaningful to us.
How might this affect Interface Design?
As we have seen, perception and information processing are not objective – which means that just because certain elements are placed somewhere does not mean people will see and use them.
The way we look for information is not only feature-driven (or data-driven, which equals bottom-up processing) but also context- or expectation-driven, which equals top-down processing.
And to make sure both approaches are served and people can find the things they are looking for, it is important to build up basic knowledge – through user research – about the people who will use the product, site or app, and about their goals and expectations.
Usability tests and system evaluations are task-based, whereas user research is rather explorative: while researching, we explore the goals, needs and also fears of the people who will use something, and the knowledge we gain about them informs the basis of any tool.
“Cognitive models can serve as a substitute for (quantitative) user tests. User models built with ACT-R can simulate the interaction with a certain task. Cognitive modeling has two advantages over real user tests; first of all no human participants are needed when good and evaluated models exist and second, important information about underlying cognitive processes can be discovered. Implications from these findings can then be used in designing further applications.”
Russwinkel, N., & Prezenski, S. (2014). ACT-R meets usability. Or why cognitive modeling is a useful tool to evaluate the usability of smartphone applications. Paper presented at Cognitive 2014: The Sixth International Conference on Advanced Cognitive Technologies and Application, Venice (pp. 62-65).
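ACT-R models are far richer than this, but the underlying idea – predicting interaction cost from a model instead of from live participants – can be sketched with the much simpler Keystroke-Level Model (KLM) of Card, Moran & Newell; the operator durations below are the commonly cited textbook averages, and the task encoding is my own toy example.

```python
# Toy model-based evaluation in the spirit of KLM (Card, Moran & Newell),
# not ACT-R: each elementary operator gets an average duration, and a
# task's predicted completion time is the sum over its operator sequence.
KLM_SECONDS = {
    "K": 0.28,  # press a key (average typist)
    "P": 1.10,  # point at a target with a mouse
    "H": 0.40,  # move hand between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def predicted_time(operators):
    """Predict task time (seconds) from a string of KLM operators."""
    return sum(KLM_SECONDS[op] for op in operators)

# e.g. think, move hand to mouse, point at a field, then type 5 characters:
t = predicted_time("MHP" + "K" * 5)
```

Such a model can compare two interface variants (fewer clicks vs. fewer hand switches, say) before anyone tests them – which is exactly the kind of substitution the quoted paper argues for, with a vastly more sophisticated cognitive architecture.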
Huh. So many questions. I guess a computational model generally de-emphasizes or even neglects human emotional and affective factors, as well as states such as stress and tiredness and their implications for motivation. (I think I read somewhere that ACT-R has a built-in motivational component?)
I don’t think computational models could substitute for humans when it comes to usability evaluation, even when the evaluation is based only on efficiency data, but it is definitely a very interesting approach that caught my attention – and now I’m curious :)