We’re Afraid It Looks Too Familiar
Everyone keeps saying they’re worried about artificial intelligence becoming too smart.
That’s generous.
What people actually seem afraid of is this:
AI thinking is starting to resemble the way a lot of humans already think, and the resemblance is unflattering.
Because once you strip away the sci-fi language, most AI systems don’t “think.”
They pattern-match.
They remix.
They predict the most likely next move from statistical patterns in what came before.
And here’s the uncomfortable part: so do we.

A large chunk of human decision-making is habit dressed up as reasoning. We repeat what worked before. We copy what got approval. We follow templates we didn't design and call the result judgment. Then we act shocked when a machine does the same thing faster and without the emotional backstory.
AI didn’t expose a future problem. It exposed a present one.
When people complain that AI is “soulless,” what they usually mean is that it doesn’t perform the little rituals we use to convince ourselves we’re being thoughtful. It doesn’t hesitate theatrically. It doesn’t agonize publicly. It doesn’t narrate its growth arc. It just produces an answer.
And that’s unsettling, because many human answers are produced the same way, just with better PR.
The panic around AI creativity is especially revealing. We insist creativity must be mysterious, ineffable, untouchable. Then we watch people churn out formulaic work, trend-chasing ideas, and recycled opinions while calling it authentic expression.
When AI does that, suddenly it’s a crisis.
What’s really being defended here isn’t creativity. It’s the belief that repetition sounds better when a human does it.
The same thing happens with ethics. People demand that AI be transparent, accountable, aligned. Fair enough. But those standards are rarely applied to humans with the same enthusiasm. We forgive bias when it comes with confidence. We excuse shortcuts when they come with charisma. We call it intuition.
When a machine mirrors that behavior, we don’t like what we see.
So we invent a scarier story: runaway intelligence, loss of control, machines replacing us.
But replacement was never the threat.
Reflection was.

AI forces a question most people have spent years avoiding:
How much of what I call “thinking” is actually just pattern compliance?
That’s not a question technology created. It’s a question technology made harder to dodge.
You can ban tools. You can slow adoption. You can write a thousand think pieces about the soul. None of that changes the underlying discomfort.
Because the problem isn’t that machines think too much.
It’s that humans have been getting away with thinking too little, and now there’s a comparison.
And comparisons are unforgiving.