It probably happens to you every business day.
In your inbox – or hopefully sorted out of your inbox – will be an email from someone you’ve never heard of. Maybe you’ve heard of the company they work for, but you’ve never met them.
But somehow, they write as if they’re a long-lost friend. They know where you went to school, what interests you have, what your favorite football team is, or that the weather in your neck of the woods has been rainy this spring.
AI has written that email to create familiarity where no shared history exists.
The problem isn’t the personal details. A lot of them are out there in your LinkedIn bio or on public social media posts.
The problem is that the AI-generated, faux personal email speaks from a level of familiarity it hasn’t earned.
You immediately sense that something is a little off.
The Uncanny Valley
Japanese robotics professor Masahiro Mori introduced the concept in 1970. He observed that as robots become more and more humanlike, they eventually reach a point at which their almost-perfect resemblance turns our affinity into revulsion.
He illustrated this with a graph: as something becomes more humanlike, our affinity rises – until it suddenly drops when it feels almost, but not quite, human.
Mori’s term for this in Japanese was bukimi no tani, which was translated as “uncanny valley” a few years later.

The Example That Made the Term Famous
The 2004 animated movie The Polar Express became the defining example of this phenomenon. Its motion-capture CGI was technically advanced for the era. The characters were nearly human – but not quite.
Eyes lacked depth. Emotional cues felt missing. Movements seemed subtly robotic.
Realism without life feels unsettling. The closer something signals “human,” the more sensitive we become to what’s missing.
This was the uncanny valley on the big screen in theaters nationwide.
My Encounter with the Uncanny Valley via Email
I described this in detail in this website’s anchor essay. (If you haven’t read it, it’s the place to start on this site.)
I received an email from a sales development representative at a financial tech company mentioning my alma mater and asking if I participated in an annual event there.
I wrote back telling him the event started after my time there, and didn’t think much of it. I figured he had looked at my LinkedIn and decided to ask.
But then I got a second email with the exact same wording as the first. Then a third, and ultimately a fourth.
By the second email, I knew it wasn’t human.
How the Uncanny Valley Appears in Communication
AI allows us to create uncomfortable communication at scale. We can use names, schools, roles, trivia, and other personal information to fake familiarity.
People are recognizing the patterns.
It’s plausible if someone is genuinely referring to a shared experience. “I also went there,” or “my spouse did.”
However, a bare reference to something without a reason to include it is performative.
The issue isn’t the information. It’s the familiarity.
Familiarity is being performed instead of earned.
Accuracy Isn’t the Problem
The details about you and your work, school, or interests may well be factually correct. The failure is that there is no meaning to that information.
It’s like having a private detective follow you around and write down things about you. The investigator is observing you without your consent and seemingly without purpose. It feels invasive.
Knowing personal facts does not create trust.
A shared purpose creates trust.
Trust, Asymmetry, and Boundaries
Unearned familiarity is a reliable way to erode someone’s trust.
Trust is asymmetric: earned slowly, lost instantly.
Using personal data in an appropriate way requires careful judgment.
Let’s use a sports example. If I ask someone I’ve recently met who their favorite baseball team is, that’s building a relationship. If I declare to that new acquaintance that I know they are a Mets fan and ask if they’ve ever been to Citi Field, then it becomes intrusive.
This is what the message is really saying:
I can know things about you without caring how that feels to you.
The availability of data does not grant permission to use it. That’s letting efficiency replace care and judgment, and people feel it immediately.
Repetition Reveals the Machine
If someone were to receive a single email with some personal information or trivia, it might seem odd, but also not unreasonable. That was my thought in the example above.
But we’re beyond the point at which people receive that single email. Personalization is now built into email and CRM platforms.
The repetition of emails with irrelevant personal information has exposed the pattern. The sheer volume has lifted the mask.
People receive dozens of “personalized” messages using the same popular tools, the same templates, and following the same structure.
The personalized marketing message has become its own genre. People react with, “Oh, it’s another one of these.”
And the irony is that personalization only works when it’s rare and human.
Using automation to “personalize” has made it common and synthetic.
Once Detected, Trust Collapses Quickly
Trust drops sharply once artificiality is detected.
At scale, that loss may not be recoverable.
These gains in efficiency are dwarfed by the trust they destroy. The low response rates may be self-inflicted by marketing and sales communication that feels fake.
It’s not that people are harder to reach. People are actively protecting themselves.
The Quiet Reversal
Less personalization can feel more human.
Familiarity should follow contact, not precede it.
Referencing shared experience is appropriate. Pulling personal details out of thin air is intrusive.
Automation imitates care. It does not create it.