Is your cute-looking, nice-sounding assistant (or robot/chatbot) really as cute as it seems to be?
This article on FastCoDesign gives some food for thought.
Subtle social cues associated with cuteness, friendliness and so forth – originally designed with good intentions to give you a "better experience" – may fool you because they lack transparency, which can lead to queasy feelings rooted in cognitive dissonance.
[…] "These new affordances don't show users what they can do with a technology, they describe what a technology won't do to users: They won't hurt us, they won't spy on us, they won't reveal our secrets. They are literally user-friendly. Yet could there be hidden costs for users when AI acts like a friend? […] We need design that doesn't just show people how to use technology, but shows how their technology is using them."
Photo: My sweet, little, friendly old BeatBot "Keepon", which for sure does not spy on me (it was developed for kids with autism spectrum disorders). Isn't it cute? :)