You Don’t Need Better Prompts. You Need Better Opinions.

An opinion isn’t something you decorate a thought with. It’s the line you’re willing to stand on.

There is a particular tone that dominates AI-generated writing right now.
You know it. You’ve read it. You may have written it.

Careful. Balanced. Mildly thoughtful. Entirely unthreatening.

It sounds like someone standing just outside the room where the real conversation is happening, politely summarizing what they overheard through the door.

And everyone keeps blaming the tools.

The models are too safe now.
The outputs are watered down.
AI has lost its edge.

No.

What you’re hearing is not the failure of a system.
It’s the sound of a missing position.

Most people don’t bring opinions into their prompts. They bring uncertainty and call it openness. They bring hesitation and call it nuance. They bring fear of being wrong and call it curiosity.

Then they’re shocked when the output feels hollow.

AI didn’t flatten your voice.
You handed it something flat and asked it to perform CPR.

Here’s the uncomfortable truth no one wants to touch:
Good AI output doesn’t come from clever phrasing. It comes from having something at stake.

And most people are trying very hard not to.

We are living in the age of preemptive self-defense writing.
Everything is hedged. Everything is softened. Everything is wrapped in disclaimers before it’s allowed to exist.

“I’m just exploring.”
“I’m not saying this is right.”
“These are just some thoughts.”
“I could be wrong, but…”

Of course you could be wrong. That’s not a revelation. That’s the price of having a position.

But somewhere along the way, we decided that sounding careful was the same thing as thinking deeply. That neutrality was intelligence. That refusing to commit was a mark of sophistication.

You can’t think your way into clarity without eventually choosing a direction.

It isn’t.

It’s avoidance with good manners.

And AI exposes that avoidance faster than any human editor ever could.

When you don’t bring an opinion into the prompt, the model fills the space with the statistical middle. Not because it wants to, but because that’s the only thing available.

The middle is safe.
The middle offends no one.
The middle sounds reasonable.

The middle is where ideas go to die.

People love to say they want AI to “challenge them.”
But what they actually want is to be challenged without being disagreed with.

That’s not how this works.

If you don’t give the system something to push against, it can’t push. If you don’t draw a line, it can’t sharpen the edge. If you don’t believe anything strongly enough to risk sounding wrong, the output will always feel like a well-written shrug.

This is where the prompt discourse goes off the rails.

Everyone wants techniques. Frameworks. Magic incantations. The perfect structure that unlocks brilliance without requiring conviction.

But no amount of structure can replace a missing belief.

You can specify tone, length, audience, format, even rhythm. You can demand clarity, depth, originality, humanity. And if there’s no underlying stance, the result will still feel like an essay that politely exits before it says anything memorable.

Because voice is not style.

Voice is position.

And position requires risk.

Here’s the part that makes people uncomfortable:
Bias is not the enemy of good thinking. Unexamined bias is.

Having an opinion does not mean you think you’re right forever. It means you’re willing to stand somewhere long enough to look around.

“I’m just asking questions” has become the most overused shield in modern discourse. It sounds humble. It sounds safe. It sounds intellectual.

It is often none of those things.

Sometimes it’s just a way to avoid being accountable for what your questions imply.

AI doesn’t know what to do with that kind of vagueness. It can generate questions all day. It can reflect possibilities endlessly. But without a gravity point, everything floats.

This is why people complain that AI writing lacks soul.

Soul isn’t something you sprinkle on at the end. It’s not a tone preset. It’s not “sound more human.”

Soul comes from orientation. From choosing what you care about enough to argue for, against, or through.

When you bring that into the prompt, something interesting happens.

The output tightens.
The language sharpens.
The rhythm finds a spine.

Not because the model suddenly got smarter, but because you finally gave it something to align with.

Think about the difference between these two instructions:

“Write a thoughtful piece about creativity and AI.”

Versus:

“Argue that AI has made creativity lazier, not because of the tools themselves, but because people stopped making decisions before using them.”

One of these invites a summary.
The other invites tension.

Tension is where voice lives.

People are terrified of that tension because it means someone might disagree. It means the work can be challenged. It means you can be quoted back to yourself later.

So instead, they hide behind neutrality and call it wisdom.

AI doesn’t reward that. It reflects it.

And the reflection is painfully clear.

This is also why experienced writers tend to get better results from AI without trying very hard. They already have opinions. They already know what they’re circling. They don’t need the system to invent a position. They need it to help articulate one.

Everyone else is hoping the machine will do the hard part for them.

It won’t.

Not because it’s incapable, but because that part isn’t mechanical.

Belief isn’t a parameter.
Conviction isn’t a toggle.
Stakes can’t be autogenerated.

You have to bring those with you.

And no, this doesn’t mean you need to become louder, harsher, or more extreme. That’s a misunderstanding people love to hide behind.

Having an opinion does not require shouting. It requires orientation.

It means being able to say:
This matters more than that.
This frustrates me.
This excites me.
This worries me.
This feels wrong.
This feels unfinished.
This is what I’m trying to figure out, not everything at once.

That’s enough.

When you give AI that kind of input, it stops sounding like a committee and starts sounding like a collaborator.

Not a genius. Not an oracle. A tool with direction.

So if your outputs keep feeling flat, stop asking what’s wrong with the model.

Ask yourself what you’re refusing to say.

What opinion are you softening into a question?
What belief are you hiding behind balance?
What stance feels risky enough that you’d rather let the AI speak in averages?

Because the problem isn’t that AI can’t sound human.

The problem is that many humans are afraid to sound like themselves.

And no prompt in the world can fix that for you.

— Sven

The moment you take a position, you stop blending in. That’s the point.