The Uncanny Valley of Personalisation

There was a lot of stuff I took out of my book. Not because it was crap. Well, it might be. You can be the judge of that. No, I took it out purely because it didn’t flow with my childish topics of knights and dragons.

This is one of the topics I took out. The uncanny valley of personalisation. Creepiness. When is personalisation too much personalisation?

IRL, it’s a surprisingly bloody popular topic, and I neglected it like an abandoned child at a Disney theme park (that’s never happened to me before, by the way; I’m not harbouring contempt). Amongst all the questions I get asked at events, I would say it comes a close second behind “how will AI impact personalisation?”

I rewrote the below and made it more succinct, as it was sitting at about 5,000 words. Believe it or not, this is the short version. Let’s dive on in.


What is the uncanny valley?

Most of us have heard of the uncanny valley. We usually associate it with faces, cartoons or visual effects: that unsettling feeling in response to something that’s not quite right about a humanoid robot or a life-like computer-generated character.

It’s a phenomenon first described in 1970 by the Japanese professor of robotics, Masahiro Mori. He called it “bukimi no tani genshō”, which was subsequently translated into English as the ‘uncanny valley’. I prefer “Bukimi-no-tani” personally.

Picture an axis of familiarity against human likeness: as human likeness increases, so too should familiarity. We know Mickey Mouse is definitely not human because the human likeness isn’t there (high familiarity, low human likeness). We also know that Walt Disney himself was a real human, so both the human likeness and the familiarity are high. However, when something lies in between (a zombie, a prosthetic hand, or the CGI reconstruction of Princess Leia in Rogue One) it has high human likeness but low familiarity, and that’s when it falls into the uncanny valley.


There’s something that’s not quite right about them. That unsettling feeling.

In 1988, Pixar created a short film called “Tin Toy” (I only know Disney references, sorry). The baby, Billy, terrorised the toys, and the audience reaction was overwhelmingly negative. Not because the baby was mean, but because his features were uncannily realistic yet unfamiliar. That reaction is what led Pixar down a path of creating characters that were toys, cars or fish, and of giving any human characters (The Incredibles, Soul, Inside Out) cartoonish features, such as big eyes and exaggerated face shapes, so the character was clearly not meant to be mistaken for human.


How does this relate to personalisation?

It’s a stretch. I just like talking about Disney characters. But when personalisation becomes advanced enough, familiarity can drop significantly.

Ryan Nguyen speaks about this from when he was searching for an electric car in Vietnam. Toyota (check) targeted him with a Prius (check) in Vietnamese (check) with familiar local imagery and language (check). The believability that this ad was made in Vietnam, his home country, was there. However, he noticed that the narrator’s accent seemed a little off. Almost American. That 1% destroyed the believability, and therefore the trust, of everything that preceded it, because it fell into his uncanny valley: high likeness, low familiarity.

Why did Ryan feel so uneasy?

  1. His feelings
  2. What data is collected
  3. How his data was communicated


His feelings

Creepiness doesn’t really sound like a word. I looked up when it was added to the dictionary but only found the origins of the word “creepy”, a term originally coined by Charles Dickens in David Copperfield.

It’s a feeling. A negative emotional response related to uneasiness. And because it’s a feeling, it’s both ambiguous and hard to define. It drags subjectivity along behind it: what might be creepy to one person may not be to another.

Not only that, but one’s perception of creepiness can evolve over time. For example, we re-evaluate our privacy expectations as we become more educated on the matter [Adam Thierer, 2013]. Think of Mark Zuckerberg being dragged in front of lawmakers to explain how data is used and shared while the whole world watches, at the same time as new privacy laws are being created and enforced. That changes our environment and therefore our perception.

Within the realms of personalisation, that feeling of creepiness is often the moment we realise the extent to which we’re being tracked. Which then makes us wonder how our data has been collected. And if we believe it’s a lot of data, or that it goes “too far”, we get that subjective feeling of uneasiness.

We’ve all had that feeling of talking about, say, dogs with our significant other in a private conversation, only to be served remarketing ads for dog food or dog toys the next day. Was Alexa listening?

Perhaps. Amazon have recently been ordered to pay $25 million for keeping kids’ Alexa voice recordings forever and violating children’s privacy laws (Source). But according to their FAQ page the answer is no. “So, is Alexa listening? No, but Alexa is always ready to help” (Source). It’s an obscure sentence that feels as though it’s been dreamt up by the marketing team.


What data is collected

Part of the reason we have subjective interpretations of “what is creepy” is that we also have individual feelings about “how much” and “what” data is too far.

Sometimes it’s no data whatsoever. BCG and Google’s joint research on consumer privacy and preferences found that 45% of consumers are uncomfortable sharing their data to create personalised ads, period.


Sometimes it’s only certain data we’re willing to exchange. In the same report, low-level demographic information like gender or location was amongst the data people were most willing to share. More private data, like income and social media activity, ranked least, at 11% each. What’s interesting in this data is the inherent bias in what we perceive as risky. Email address ranks surprisingly high, at 28% willingness to share, despite being a form of personally identifiable information and something often used in scams. Are we as a society so used to being asked for it that our agreeableness has increased over time?


The same applies in reverse: recorded conversations are listed as an option in the BCG analysis and, probably to all our surprise, 5% of people accept sharing that data. How? Why? Perhaps it’s a general acceptance by society that Alexa really is listening, as above. Or, indeed, our expectation that such brands know us so well that we accept and willingly give up such information.


How his data was communicated

Studies as far back as 1993 by Johnson and Johnson pointed out that explanations play a crucial role in the interaction between users and complex systems.

If we look at the most popular form of personalisation - recommendations - I can’t think of anything more complex. Despite this, they are black-boxed, shrouded in mystery. Why?

It’s been suggested that this opacity is a result of protecting intellectual property from the competition. I don’t buy that. Other, more cynical, views suggest the omission is intentional purely because recommendations don’t work that well (Doc Searls, 2012). I could buy that. Largely, though, it’s a belief that recommendations make users’ lives easier - their interaction is effortless and friction-free - and that explaining the practice would overcomplicate the matter.

Making a complex concept more accessible is not a bad thing. That’s not the problem. The problem is the imbalance: being simple enough while never communicating the inner workings to users. In one of my favourite books, “The Black Box Society”, Frank Pasquale’s message isn’t so much that the algorithms themselves are bad; it’s their secretive nature that is, or might be, bad.

It raises the question: should we be more transparent with users on how personalisation works?

Research suggests yes; although, like any marketing practice, only to a point. (I tried really hard not to use the phrase “it depends”.)

Telling users exactly what data we’re collecting and how that translates to what they’re currently seeing is absolutely a good thing. Nielsen Norman Group, famed for user experience courses and research, recommend clearly stating the source of data, suggesting that “it adds credibility” and “helps users gauge the type of content included.” Amazon Video, for example, has a section “based on titles you have watched and more”. What does “and more” mean? That feels shoehorned in there by marketing copywriters. When the number one rule of copy is to be succinct and delete words, “and more” adds ambiguity.

Academic studies by Sinha and Swearingen (2002), Herlocker et al (2000), Pu and Chen (2007) and Wang and Benbasat (2007) found that “recommender systems incorporating explanations produced more user acceptance of, confidence in and trust in recommendations than those without explanations” (Eslami et al, 2018).

One paper, for example, found that users like and feel more confident about recommendations that are perceived as transparent (Sinha and Swearingen, 2002). In their words, “a good algorithm that generates accurate recommendations is not enough to constitute a useful system. The system needs to convey to the user its inner logic and why a particular recommendation is suitable for them”. The word perceived is deliberate here: Sinha and Swearingen are emphasising that how the system communicates is as important, if not more so, than what it contains.
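
To make that tangible, here is a minimal sketch of what “conveying the inner logic” could look like in practice. It’s entirely hypothetical: the titles, the field names and the crude genre-overlap scoring are mine, purely for illustration, and not something Sinha and Swearingen (or Amazon) prescribe. The point is simply that the explanation travels with the recommendation, so the interface can state the data source instead of black-boxing it.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    item: str
    score: float
    source_signal: str   # the data the suggestion was derived from
    explanation: str     # the human-readable "why" shown to the user

def recommend_similar(watched_title, catalogue):
    """Toy recommender: suggest titles sharing a genre with something the user
    watched, and attach a plain-language explanation naming the data source."""
    watched_genres = set(catalogue.get(watched_title, []))
    recs = []
    for title, genres in catalogue.items():
        if title == watched_title:
            continue
        overlap = watched_genres & set(genres)
        if overlap:
            recs.append(Recommendation(
                item=title,
                score=len(overlap) / len(watched_genres),
                source_signal=f"viewing history: {watched_title}",
                explanation=f"Because you watched {watched_title}",
            ))
    return sorted(recs, key=lambda r: r.score, reverse=True)

# Hypothetical catalogue, mapping titles to genres.
catalogue = {
    "Tin Toy": ["animation", "short"],
    "Soul": ["animation", "drama"],
    "Inside Out": ["animation", "family"],
}

for rec in recommend_similar("Tin Toy", catalogue):
    print(f"{rec.item}: {rec.explanation} ({rec.source_signal})")
```

Notice that the explanation is generated from the same signal that produced the recommendation, so there’s no vague “and more” left over for the copywriters to shoehorn in.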

We can’t predict how our users might feel. That subjective uneasiness, creepiness, is a spectrum: some might feel as though the platform has read their emails, while others may just sense that something isn’t right. The disparity in responses along this spectrum highlights the need for balanced transparency. Too vague and we create uneasiness and inherent mistrust. Too transparent and we create a sense of creepiness as people realise the extent of the tracking; almost like addressing a concern that wasn’t yet a concern.


Conclusion

My conclusions are drawn from the fact that the uncanny valley stems from a lack of familiarity. An unfamiliarity with where the data came from (collection) and how it was used (communication).

As the subject of data privacy moves further to the forefront of society, usually thanks to the naughty and illegal activities of the FAANGs of the world, these feelings are only going to heighten. The intelligence of AI and ChatGPT gives the perception that algorithms are all-powerful and all-knowing. On occasion, they have been described as part of the “eye of providence” folk theory (Eslami et al, 2016). You’ve seen it before: the eye set within the triangle. Is it the Freemasons? Or the Illuminati? Or is it just a symbol on the back of the one-dollar bill? These theories are often related to conspiracy and, as such, to mistrust.

At a time when mistrust is at its highest yet, simultaneously, transparency is at its lowest, the only thing I can recommend is putting a bit more care into our experiences. We need to give some control back to the customer. Because of this, I always recommend as much transparency as possible. From experience, telling people why a recommendation is the way it is, or how their data was used, can only increase familiarity and trust. It’s when you try to be too clever that this falls down.

TL;DR: just don’t collect stupid shit that will scare people…



Sources

  • Rashmi Sinha and Kirsten Swearingen. 2002. The Role of Transparency in Recommender Systems. In CHI ’02 Extended Abstracts on Human Factors in Computing Systems. ACM, 830-831.
  • Pearl Pu and Li Chen. 2007. Trust-inspiring explanation interfaces for recommender systems. Knowledge-Based Systems 20, 6 (2007), 542-556.
  • Jonathan L. Herlocker, Joseph A. Konstan, and John Riedl. 2000. Explaining Collaborative Filtering Recommendations. In Proceedings of the 2000 ACM Conference on Computer Supported Cooperative Work. ACM, 241-250.
  • Motahhare Eslami, Karrie Karahalios, Christian Sandvig, Kristen Vaccaro, Aimee Rickman, Kevin Hamilton, and Alex Kirlik. 2016. First I “like” it, then I hide it: Folk Theories of Social Feeds. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. ACM, 2371-2382.
  • Weiquan Wang and Izak Benbasat. 2007. Recommendation Agents for Electronic Commerce: Effects of Explanation Facilities on Trusting Beliefs. Journal of Management Information Systems 23, 4 (2007), 217-246.
