Alan Cooper at interaction18

Worth watching.

“We need to stand up, and stand together. Not in opposition but as a light shining in a dark room. Because if we don’t, we stand to lose everything. We need to harness our technology for good and prevent it from devouring us. I want you to understand the risks and know the inflection points. I want you to use your agency to sustain a dialogue with your colleagues. To work collectively and relentlessly.”

Alan Cooper – The Oppenheimer Moment from Interaction Design Association on Vimeo.

Human-Computer Interfaces – Xerox Star GUI

The Xerox Star was an early commercial computer introduced in 1981. It was the first computer to use the office metaphor to make operation easier for people: icons representing familiar office objects such as documents and folders, plus a two-button mouse. Many of these elements later became standard in personal desktop computers, starting with the Apple Lisa in 1983. Even earlier, in 1973, the Xerox Alto had introduced the first graphical user interface.

The evolution of the document icon shape in the Xerox Star.
Image: Wikipedia

UX copy: Using plain language is good for your business

Nearly everything comes down to the intentions a person perceives you to have towards them; communication, interaction, and also brand perception can be broken down to this basic point.
When writing copy for your website, don’t use jargon or internal terminology that no one except insiders will understand. Good, clear, easy-to-read copy is good for your audience and therefore good for your business, simply because simple, plain language means your site, app, or product is talking “human”.

It increases a) readability and b), which is really important, credibility: using plain language is perceived as being transparent towards others. Hiding behind hard-to-understand terminology, on the other hand, may be read as a reason to be more skeptical towards you, to the point of suspecting that you have bad intentions towards your users. (“What are they trying to hide from me? Don’t they want me to understand?”)
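As a minimal sketch of the idea (the error codes and messages are hypothetical, not from any real product), internal terminology can be kept for logs while people only ever see plain language:

```typescript
// Internal jargon that only insiders understand.
const internalMessages: Record<string, string> = {
  AUTH_TOKEN_EXPIRED: "Session token TTL exceeded, re-auth required",
  QUOTA_EXCEEDED: "Tenant storage quota hard limit reached",
};

// The same situations, written in plain language for real people.
const plainMessages: Record<string, string> = {
  AUTH_TOKEN_EXPIRED: "You've been signed out for security. Please sign in again.",
  QUOTA_EXCEEDED: "Your storage is full. Delete some files or upgrade your plan.",
};

// Always show the plain version to the user; keep the jargon for logs only.
function userMessage(code: string): string {
  return plainMessages[code] ?? "Something went wrong. Please try again.";
}

console.log(userMessage("AUTH_TOKEN_EXPIRED"));
```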

Watch a good video about this topic here: https://www.youtube.com/watch?v=bAvW1A7UiYM

About Intelligence, meaning, the mind and machines

The video above contains a very short introduction to the Chinese room argument, introduced by the philosopher John Searle in 1980 in his essay “Minds, Brains, and Programs”. It also fits my last post about Google Duplex’s natural speech pattern.

The Chinese room argument is a thought experiment with which Searle (who, by the way, is deep into the philosophy of mind) wanted to show that it is not enough for a computer to pass the Turing test in order to be considered intelligent. The Turing test was developed by Alan Turing in the 1950s as a definition of intelligence: if one cannot distinguish the answers of a computer from the answers of a person, the computer can be regarded as “intelligent”. Searle imagines a person in a room who follows written rules to answer questions in Chinese without understanding a word of Chinese; since the room produces convincing answers by pure symbol manipulation, passing the Turing test is not a sufficient criterion for so-called “strong artificial intelligence”. In addition, the argument questions computational theories of the mind and whether machines are able to think.
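A toy sketch may help: like the person in Searle’s room, the program below produces answers purely by looking up symbols in a rule book (the tiny rule table is of course hypothetical), so convincing output tells us nothing about understanding:

```typescript
// The "rule book": pure symbol manipulation, like the person in
// Searle's room matching Chinese characters against instructions.
const ruleBook: Record<string, string> = {
  "你好吗？": "我很好，谢谢。",  // "How are you?" -> "I'm fine, thanks."
  "你会思考吗？": "当然会。",    // "Can you think?" -> "Of course."
};

// The "room": it takes symbols in and returns symbols. Whether it
// appears intelligent from outside says nothing about understanding.
function chineseRoom(input: string): string {
  return ruleBook[input] ?? "请再说一遍。"; // "Please say that again."
}

console.log(chineseRoom("你会思考吗？")); // Convincing output, zero understanding.
```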

This is a huge and interesting topic, and it is deeply related to intentionality in human beings and to the mind-body problem in philosophy, which is still “unsolved”. I think it is the most fascinating topic I have been introduced to since I started studying psychology.

Minds, Brains and Programs by John Searle: http://cogprints.org/7150/1/10.1.1.83.5248.pdf
About the Chinese room argument: https://www.iep.utm.edu/chineser/

Google Duplex’s natural speech pattern: It’s a human? No, it’s not!

I think we are now going really deep into the uncanny valley. It is of psychological and also sociological relevance that we know whether we are speaking to another human or to a machine. The point is: it may be perceived as cheating, as playing with trust.

UX Priorities? Check your Analytics Data!

A good explanation of why checking your analytics data is important before you decide on UX priorities: you can check whether previously observed behavior occurs frequently across many visitors, i.e. whether observations from a usability evaluation are truly indicative of your entire audience. It also helps you avoid opinion-based design decisions about what to do next, and instead base your UX priorities on data.
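As a minimal sketch of this kind of check (the session shape and event names are hypothetical, not from any particular analytics tool), you could compute how many of all sessions actually show the behavior you observed in a usability test:

```typescript
// A session from a hypothetical analytics export.
interface Session {
  id: string;
  events: string[]; // e.g. "search_used", "filter_opened", "checkout_abandoned"
}

// Share of sessions in which an observed behavior actually occurs.
// If five usability-test participants abandoned checkout but only 2%
// of all sessions do, it may not be your top UX priority.
function behaviorRate(sessions: Session[], event: string): number {
  const hits = sessions.filter((s) => s.events.includes(event)).length;
  return sessions.length === 0 ? 0 : hits / sessions.length;
}

const sessions: Session[] = [
  { id: "a", events: ["search_used", "checkout_abandoned"] },
  { id: "b", events: ["filter_opened"] },
  { id: "c", events: ["search_used"] },
];

console.log(behaviorRate(sessions, "checkout_abandoned")); // ≈ 0.33
```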

Good “UX” = bringing delight

Well, maybe I’m a little bit late to this 2014 party, but hell yes, look at this! Imagine you are in the dressing room of a clothing store, something does not fit, and you want to try a different size: leaving half-naked or desperately shouting from behind the curtain is no longer a thing with this mirror! You can even order drinks while celebrating your private little shopping experience! Awesome!
This company has clearly understood what it means to bring delight to your customers and, in doing so, to build and grow customer loyalty and set yourself apart from your competitors.

Also, this tweet:

“The #HCI #research community still focuses on #usability rather than adding #UX measures of delight, satisfaction, engagement.” Prof. Marc Hassenzahl, University of Siegen, Germany, at #IsraHCI 2018. pic.twitter.com/cYIai8q2mo

— Shuli Gilutz, Ph.D. (@ShuliGilutz) January 4, 2018

John McCarthy on AI

Stumbled upon this goodie while researching cognitive architectures for a course.

“(…) a machine isn’t the sum of its parts, if somebody took a car apart and gave you a heap of the parts that wouldn’t be a car – they have to be connected in a specified way and interacting in a specified way, and so, if you want to say that the mind is a structure composed of parts interacting in a specialized way I would agree with that, but it isn’t just a heap of them”

On cognitive dissonance and rewards

“We try to reduce the dissonance between how we think we should act and how we actually act by changing one or the other”

This is interesting because it runs contrary to incentive/economic theories, which claim that the higher the reward, the more likely people are to change their minds. Dissonance arises only if there is a mismatch between my internal attitudes, values, or core beliefs and how I actually act.

The video shows excerpts from a classic experiment in social psychology conducted by Leon Festinger and James M. Carlsmith in 1959, called “Cognitive Consequences of Forced Compliance”. In it, participants who were paid only $1 to tell the next subject that a boring task had been fun later rated the task as more enjoyable than those who were paid $20: the small reward offered too little external justification, so they reduced the dissonance by changing their attitude. Forced compliance is very closely related to the theory of cognitive dissonance, which states that a person who simultaneously holds two or more contradictory beliefs, ideas, or values experiences mental discomfort (psychological stress). This, in turn, is related to one of the main principles in Gestalt theory: the principle of good form.

What’s your long-term product vision?

In 1987 Apple had already mapped out a vision of where they wanted to go. Look at that: it includes today’s digital assistants (like Siri, but way smarter), screen-sharing concepts, and a rough version of something like the iPad. It’s important to map out where you want to go, because it helps enormously to focus and head that way.

What’s your long-term product vision? What would an article five years from now say about your product?

Are we in control of our decisions?

“We wake up in the morning and we open the closet; we feel that we decide what to wear. We open the refrigerator and we feel that we decide what to eat. What this is actually saying is that many of these decisions are not residing within us. They are residing in the person who is designing that form. When you walk into the DMV, the person who designed the form will have a huge influence on what you’ll end up doing.”

Psychologist Dan Ariely talks about the irrationality in our decision-making when we are not sure about something, and about why we are not the super-rational cost/benefit animal (a.k.a. Homo oeconomicus) who decides by rationally weighing up costs and benefits.

Below are two screenshots taken from the talk above. The first one shows three choices. In an experiment, the option nobody wanted was removed, and we can clearly see how decision-making changed based on this little detail. This, and the topic of the whole talk, links very well to the concept of loss aversion, which comes from the Prospect Theory of the famous psychologists Kahneman and Tversky.
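For the loss aversion part, a small sketch of the prospect theory value function may help; the parameter values are the median estimates from Tversky and Kahneman’s 1992 paper, not something from the talk itself:

```typescript
// Prospect theory value function (Tversky & Kahneman, 1992):
// v(x) = x^alpha for gains, -lambda * (-x)^beta for losses.
// alpha = beta = 0.88 and lambda = 2.25 are their median estimates;
// lambda > 1 is what makes losses loom larger than gains.
function prospectValue(
  x: number,
  alpha = 0.88,
  beta = 0.88,
  lambda = 2.25
): number {
  return x >= 0 ? Math.pow(x, alpha) : -lambda * Math.pow(-x, beta);
}

console.log(prospectValue(100));  // ≈ 57.5   (subjective value of gaining 100)
console.log(prospectValue(-100)); // ≈ -129.5 (losing 100 hurts much more)
```

With these parameters, a loss weighs roughly twice as heavily as a gain of the same size, which is exactly the asymmetry the talk builds on.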

Don’t design for silos: Why scenarios and storyboards make sense in UX

Or: the problem with so-called “agile” user stories and their correspondingly narrowed-down solution views. Great talk on scenarios by Kim Goodwin.

“Real people don’t just go ‘log in’. They are trying to accomplish something; they have some other goal in mind. They are going somewhere after they log in, and they have been somewhere before they log in”

Webstock '17: Kim Goodwin – Scenarios and storyboards: Getting to structure and flow from Webstock on Vimeo.

Presenting Low-Fidelity Prototypes to Stakeholders

A lot of designers feel pressure to present full-blown design comps or deliverables while they are still working through problems of content, information architecture, or unsolved interaction design issues. Page Laubheimer shares some tips on how to work together with stakeholders on low-fidelity, messy designs.

Who Do We Become When We Talk to Machines?

“Can a broken robot break a child?”

An awesome talk by Sherry Turkle about conversation in the digital age, and about how replacing face-to-face communication with smartphones (or even growing up in the company of digital assistants, smart toys, and bots and connecting emotionally with them) may diminish people’s capacity for empathy.

How to spot “Carol Beer UX”

Carol Beer, the super-friendly receptionist from the British series “Little Britain” who is famous for her “Computer says no” attitude, is the perfect example of someone you don’t want representing your company.
So why does so much software behave like her?

this is what #carolbeerux looks like (note: this is how many software behaves towards people) pic.twitter.com/8OU5ddtky7

— steffi (@guerillagirl_) May 13, 2017

How to spot Carol Beer UX:

(please note: this list may be incomplete; it is just meant to give an impression)

1) It is not polite
Think of the thing you interact with as a person. Is it likable? Does it communicate in a clear way?
Impolite software, for example, communicates in machine language, perhaps peppered with some nice Boolean expressions only hardcore developers will understand. Impolite software also often behaves rudely by not giving any feedback on what is going on; a small sketch of the difference follows after this list. (All of the following points are clear expressions of impolite software in different manifestations.)

2) It is selfish
It does not care about you and your needs, and it is not cooperative in supporting you in fulfilling your goals: it puts its own needs, or business needs, over people’s goals and needs. Attention-begging and useless notifications, modal windows, and other focus stealers that constantly interrupt your work are good examples of this behavior.

3) It has bad intentions towards you
This is the intensified form of No. 2. It is also known as dark patterns: not giving you the possibility to leave or cancel something, or forcing you to subscribe to something (e.g. by providing an irritating or misleading interaction which “traps” you into something you didn’t want).

And last but not least: It coughs at you when you want to leave.
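To make point 1 more concrete, here is a minimal sketch contrasting the two styles; the function names, error codes, and copy are hypothetical, not taken from any real product:

```typescript
// A Carol Beer response: raw machine language, no feedback, no way forward.
function carolBeerError(): string {
  return "ERR_UPLOAD_FAILED: boolean flag 'isValid' === false (code 0x2F)";
}

// A polite response: plain language that says what happened, why,
// and what the person can do next.
function politeError(fileName: string, maxMb: number): string {
  return (
    `We couldn't upload "${fileName}" because it is larger than ${maxMb} MB. ` +
    `Try compressing the file or splitting it into smaller parts.`
  );
}

console.log(carolBeerError());              // Computer says no.
console.log(politeError("report.pdf", 25)); // Helpful, human, cooperative.
```

The difference is not technical sophistication but perceived intention: the second message gives feedback, explains, and offers a way forward.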

Some links:
Computers as social actors
Grice’s politeness maxims

Seen something behaving Carol Beer-esque? Tag it with #carolbeerux
