Values of Silicon Valley

"People across the world still believe that the Californian Ideology expresses the only way forward to the future (…) Its eclectic & contradictory blend of conservative economics & hippie radicalism reflects the history of the West Coast – and not the inevitable future of the rest of the world."

R. Barbrook, The Californian Ideology (1996)

Missing: Female UX role models

Are there any “famous” women in UX? The level of Don Norman, Luke W, Alan Cooper, and Jared Spool? Where you say their names to other UXers and they know who you’re talking about?

— Katie Swindler (@KatieSwindlerUX) August 19, 2021

… yep, this is a very good question and part of the problem, namely missing role models for young female interaction designers (surprise: the same problem appears in science), since the perception of a male-dominated field will influence their self-concept.

“Mind” instead of “MINT” (the German acronym for STEM)

“[…] Georgia Nugent writes that it is a terrible irony that at the very moment the world is becoming ever more complex, we encourage young people to be enormously specialized. While technology is becoming an ever easier-to-use toolbox, the real advantage would lie in combining it with the humanities and social sciences.”

A fantastic piece:

Alan Cooper at interaction18

Worth watching.

“We need to stand up, and stand together. Not in opposition but as a light shining in a dark room. Because if we don’t, we stand to lose everything. We need to harness our technology for good and prevent it from devouring us. I want you to understand the risks and know the inflection points. I want you to use your agency to sustain a dialogue with your colleagues. To work collectively and relentlessly.”

Alan Cooper – The Oppenheimer Moment from Interaction Design Association on Vimeo.


“[…] the trouble begins with the word user – there are only two industries that call their customers users: illegal drug dealers and software houses”

– Edward Tufte

This quote is, unfortunately, very relevant when we look at the business models of certain Internet businesses.
Unfortunately, I couldn’t find the direct source where he said it – does anyone know? It might have been a talk he gave.

Carmen Hermosillo (humdog) on the commodification of the self

“i have seen many people spill their guts on-line, and i did so myself until, at last, i began to see that i had commodified myself. commodification means that you turn something into a product which has a money-value. in the nineteenth century, commodities were made in factories, which karl marx called ‘the means of production.’ capitalists were people who owned the means of production, and the commodities were made by workers who were mostly exploited. i created my interior thoughts as a means of production for the corporation that owned the board i was posting to, and that commodity was being sold to other commodity/consumer entities as entertainment. that means that i sold my soul like a tennis shoe and i derived no profit from the sale of my soul. people who post frequently on boards appear to know that they are factory equipment and tennis shoes, and sometimes trade sends and email about how their contributions are not appreciated by management.”

From: pandora’s vox: on community in cyberspace (1994)

Follow a leader – on “nudging”

Tonight I’m going to an event where nudging will be discussed, and I’m very curious. I hope my *confirmation bias* doesn’t get in my way too much, because my position on “gentle nudges” – or “libertarian paternalism,” as nudging is also called – is quite entrenched and can be expressed rather well with a quote from our old pal Kant:

“It is so comfortable to be immature. If I have a book that has understanding for me, a pastor who has a conscience for me, a doctor who judges my diet for me, and so on, then I need not exert myself. I have no need to think, as long as I can pay; others will take over the tiresome business for me.”

“Sapere aude!” – have the courage to know.

From: Immanuel Kant, What Is Enlightenment?

Organizational culture and UX: competition vs. a common vision


Image: Cooperation, exemplary image. ;)

The famous robbers cave field experiment conducted by Muzafer Sherif (1954, 1958, 1961) investigated how and why group conflicts occur.

Sherif argued that conflict between groups (intergroup conflict) occurs when two groups are in competition for limited resources (such as recognition).

The research group arranged an artificial competitive environment (which does not necessarily reflect real-life conditions) in which friction, conflict, and frustration between the groups were likely to occur. It didn’t take long before the researchers’ predictions came true: the two groups had become strong rivals and behaved with hostility toward each other. The conflicts only subsided once the researchers began to create situations in which the opposing groups had to solve problems together, thus establishing common goals and a shared vision for achieving them.

Sherif’s studies can teach us a little about organizational conflicts such as silo thinking and political, ego-driven processes – conflicts we are often confronted with in our work as experience designers, and which have a direct impact on product development and, at the end of the day, on the users’ experience. The experiments also underline how important it is to have common goals and, with them, a common vision.

The study was and is ethically questionable, and it was also biased. Nevertheless, I think it can teach us how intergroup conflicts arise and what can be done about them – in organizational structures and teams as well.

Sherif, M. (1954). Experimental study of positive and negative intergroup attitudes between experimentally produced groups: robbers cave study.

Positive framing – or: what would Kahneman say?

The positive framing used to explain why face recognition on Facebook makes sense and should be kept turned on (its default setting is “on”) is a dark pattern deluxe.

FB only mentions wonderful, non-selfish reasons why you should keep that feature turned on, such as “help people with visual impairments by telling them who’s in a photo or video”, “help protect you from strangers using a photo of you as their profile picture”, and “let you know when you might appear in photos or videos, but haven’t been tagged”.

They do not mention, in any way, that it could and will be used to identify you or to show you targeted advertising based on emotional states – which only shows that the organization follows a “hidden agenda” (it’s dishonest), and that leads to mistrust.

GDPR is good for user experience.

“In short, GDPR will make privacy a mandatory design principle – and, in doing so, may redefine the profession.”

A great article on why designers should care about the GDPR: it reminds us that we are designing products for people, not for data/numbers – and that businesses should have the ultimate goal of treating their users/customers well and with respect.

Read here:

Computers are social actors, again and again

“Our acceptance of seemingly autonomous voice assistants will depend on trust. And trust demands being able to distinguish when we’re talking to a human and when we’re talking to an AI.”

— Anna Dahlström (@annadahlstrom) May 12, 2018

See also:

About Intelligence, meaning, the mind and machines

The video above contains a very short introduction to the Chinese room argument, introduced by the philosopher John Searle in 1980 in his essay “Minds, Brains and Programs”, and it fits with my last post about Google’s Duplex natural speech patterns.

The Chinese room argument is a thought experiment with which Searle (who is, by the way, deep into the philosophy of mind) wanted to show that it is not enough for a computer to pass the Turing test in order to be considered intelligent. The Turing test was developed by Alan Turing in the 1950s as a definition of intelligence: Turing claimed that if one cannot distinguish the answers of a computer from the answers of a person, that computer could be regarded as “intelligent”. Searle’s argument holds that a system can produce such indistinguishable answers by manipulating symbols without understanding them – so passing the Turing test is not a sufficient criterion for so-called “strong artificial intelligence”. In addition, the argument challenges computational theories of mind and the question of whether machines are able to think.

This is a huge and interesting topic, deeply related to intentionality in human beings and to the mind-body problem in philosophy, which is still “unsolved”. I think it is the most fascinating topic I have been introduced to since I started studying psychology.

Minds, Brains and Programs by John Searle:
About the Chinese room argument:

Who do you want to talk to: People or machines?

“[…] After all, fitting in as a machine-readable cog into the database of ideas gets you a faster start. But it’s also the best way to be ignored, because you’ve chosen to be one of the many […]”

He is so right.

Just another 2 minutes on Instagram… or: Pavlov’s dog.


“We are constantly scrolling and checking whether someone has updated their status. The problem: the design of many apps deliberately fuels this craving for information. All that matters is time-on-site. Do we need higher ethical standards in the design of apps & co.?”

I experienced something similar myself a few years ago, and it actually frightened me quite a bit: somewhere on an advertising banner, a red dot appeared out of nowhere, similar to the one that appears when you receive a notification on Facebook. “Oh, a message, quick, let’s see what’s new,” my brain absurdly signaled (there was no message!). It is, by the way, well documented that dopamine (a neurotransmitter, colloquially known as the “happiness hormone”) is released upon notifications. That means we – or rather, the reward system in our brain – react like Pavlov’s dog to the bell: not literally drooling, but close – responding physically to a red dot as a trigger stimulus that we have been continuously conditioned to over roughly the last ten years.

What follows is a piece well worth listening to about how designing for business can make users addicted. The challenge we as “user experience designers” are confronted with again and again: who are we actually designing for? The “business” or the users? To answer that, one has to realize that business goals often diverge widely from user goals. And this, by the way, is not only an ethical problem – it is also a trust problem, which in turn can have enormous (namely negative) effects on the very business goals one is so keen to achieve. But that is a whole topic of its own, which you can read a bit about here.

The core question that remains: how do we bring the required business goals and user goals together in a way that is ethically justifiable, so that we stay true to the human-centred design we as (user) experience designers supposedly advocate so strongly?

Edit: Entirely different topics, by the way, are how “user generated content” fuels the social comparison processes we need in order to find our SELF – that is, our social identity – and how, as a consequence, depressive moods up to full-blown depression can increase (“man, my life sucks compared to persons x, y, and z”); the influence of “social capital” on one’s own life – especially for children and adolescents, who often do not yet have a stable personality (keywords: bullying and exclusion), but also for older people who lack access to the benefits of social networks on the web (the tech barrier) yet might need them; as well as the effects on career aspirations and fatal mix-ups (“Mom, I want to become an influenza”).
