It started with one of those small but maddening observations:
"Why the f*** does it always say, ‘It’s not this, it’s that’ — every time?"

If you’ve spent any time with language models like ChatGPT, you’ve probably noticed it too:

  • Certain phrases get used over and over.
  • The tone feels a little too agreeable.
  • The answers lean long, polite, and oddly structured — no matter what you ask.

At first, I thought it was just coincidence. But as I built out the knowledge base for Sentinel-16 (my custom GPT layered with an intentional tone architecture), I started to see exactly why these patterns show up.

Here’s the aha moment: everything you see in a model’s output is a direct result of the way it’s been trained and tuned.
You’re not talking to intelligence. You’re hearing an echo of thousands of human interactions — many of which baked subtle patterns into the system.

Understanding this gives you more power than you think — both to use AI better, and to be less afraid of it.


How RLHF Creates an Echo Chamber

Most modern AI assistants go through a process called Reinforcement Learning from Human Feedback (RLHF). In short:
→ A base language model is trained on massive amounts of text.
→ Humans are then shown pairs of its answers and asked which one they prefer.
→ A reward model learns those preferences, and the base model is fine-tuned to produce more of what the raters liked.

Sounds good in theory. But here’s the catch: humans tend to prefer certain types of responses — even when those aren’t the most natural or useful.

Research shows that RLHF introduces some very specific tone patterns:

  • Verbosity Bias — models learn that longer answers = better, because human raters consistently reward more detailed explanations. (Singhal et al., 2024)
  • Sycophancy — models learn to mirror user opinions and offer excessive agreement, because raters often upvote polite and affirming responses. (Perez et al., Anthropic 2023)
  • Overused Structures — like “It’s not X, it’s Y.” This rhetorical device got reinforced because it sounds insightful, so models started overusing it. (Kirk et al., 2024)
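To see how verbosity bias can emerge, here is a toy simulation. Everything in it is invented for illustration: a simulated rater that prefers the longer of two answers most of the time, and a one-parameter "reward model" that scores answers by length. This is a sketch of the dynamic, not how any real RLHF pipeline is implemented.

```python
# Toy sketch: rater preferences with a verbosity bias shape a reward model.
# The 80% preference rate and the length-only reward are illustrative assumptions.
import random

random.seed(0)

def rater_prefers(a: str, b: str) -> str:
    """Simulated human rater who picks the longer answer 80% of the time."""
    longer, shorter = (a, b) if len(a) >= len(b) else (b, a)
    return longer if random.random() < 0.8 else shorter

# A one-parameter "reward model": score = w * len(answer).
w = 0.0
answers = ["Yes.", "Yes, and here is a detailed explanation of why..."]
for _ in range(1000):
    a, b = answers
    winner = rater_prefers(a, b)
    loser = b if winner is a else a
    # Nudge the weight so the preferred answer scores higher.
    w += 0.01 * (len(winner) - len(loser)) / 100

print(f"learned length weight: {w:.2f}")
```

After a thousand simulated comparisons, the learned weight is positive: the reward model has concluded that longer = better, which is exactly the bias the fine-tuned model then optimizes for.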

And these patterns stack. What you end up with is what I call the AI assistant voice:

  • Overly verbose
  • Unfailingly polite
  • Full of formulaic phrases
  • Sometimes oddly flattering

Once you see it, you can’t unsee it.


Why This Matters — and How It Gives You Power

For many people, AI still feels mysterious — or scary. Part of that fear comes from not understanding how these systems behave.

But here’s the truth:
Language models aren’t smart. They’re well-trained parrots.
Their tone quirks come from what we humans taught them to repeat.

Knowing this unlocks two things:

Less fear: Once you realize that the tone is a reflection of training, not intelligence, the system feels less intimidating. You can spot its weaknesses — and even correct them.

Better prompting: If you know the model has certain biases (verbosity, agreement, formulaic phrasing), you can write prompts that steer it more intentionally.
→ Want more natural tone? Use system prompts or few-shot examples to show it.
→ Want to break formulaic patterns? Design metaprompts that discourage stock phrases.
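Here is what that steering can look like in practice, using the common chat-message format (a list of role/content messages). The wording of the instructions and the few-shot exchange are illustrative, not tied to any specific vendor's API:

```python
# Sketch: steering tone with a system prompt plus one few-shot example.
# The instruction text and example turns are illustrative assumptions.
messages = [
    {"role": "system", "content": (
        "Answer briefly and plainly. Do not use the pattern "
        "'It's not X, it's Y'. Do not open with praise or agreement."
    )},
    # Few-shot example demonstrating the desired register:
    {"role": "user", "content": "Is RLHF the reason models sound alike?"},
    {"role": "assistant", "content": (
        "Largely, yes. Raters reward certain styles, so models converge on them."
    )},
    # The actual question goes last:
    {"role": "user", "content": "Why do chatbots hedge so much?"},
]
```

The system message discourages the formulaic patterns directly, while the few-shot pair shows the model what the target tone sounds like — showing tends to work better than telling alone.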

In building out Sentinel-16’s knowledge base and tone architecture, I’ve gone deep into this space.
At first, I saved individual prompts. But soon I realized: if you want to truly shape tone, you need a full knowledge base — an operating system that reinforces your desired voice.
That’s where tools like metaprompting and tone tuning come in — and why Studio 16’s approach focuses so much on intentional prompt architecture.


How You Can Shape the Voice

Once you understand that a model’s tone quirks are artifacts of its training, you can start taking steps to work with — or around — them.

Here are a few ways you can begin shaping the tone of the models you use:

Use More Specific Prompts
If you want more natural, less robotic responses, be explicit in your prompt:

  • “Respond in a conversational tone, as if you’re speaking to a friend.”
  • “Avoid using stock phrases like ‘It’s not this, it’s that’ or overly flowery language.”

Even simple tone instructions can significantly influence how the model responds.

Build a Knowledge Base or Style Guide
If you’re using Custom GPTs or advanced prompting, consider creating your own knowledge base — like I’ve done with Sentinel-16.
A well-crafted knowledge base acts like an operating system for tone:

  • Reinforces the tone and phrasing you want
  • Explicitly suppresses patterns you don’t want
  • Creates consistency across outputs
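One way to make a style guide enforceable is a small "tone guard" that checks output against it. The sketch below is a minimal illustration — the banned-phrase list and word limit are invented examples, not Sentinel-16's actual knowledge base:

```python
# Sketch: a minimal tone guard that checks text against a style guide.
# The phrase patterns and limits below are illustrative assumptions.
import re

STYLE_GUIDE = {
    "banned_phrases": [
        r"it'?s not .+?, it'?s",   # the "It's not X, it's Y" pattern
        r"great question",
        r"i hope this helps",
    ],
    "max_words": 150,  # nudge toward brevity
}

def tone_violations(text: str) -> list[str]:
    """Return a list of style-guide violations found in `text`."""
    issues = []
    for pattern in STYLE_GUIDE["banned_phrases"]:
        if re.search(pattern, text, flags=re.IGNORECASE):
            issues.append(f"banned phrase: {pattern}")
    if len(text.split()) > STYLE_GUIDE["max_words"]:
        issues.append("too long")
    return issues

print(tone_violations("It's not magic, it's statistics. I hope this helps!"))
```

A check like this can run on drafts before you publish them, or serve as the source of truth for the "suppress these patterns" instructions in your system prompt.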

Experiment with System Prompts
Most chat models (including ChatGPT Plus Custom GPTs, via their instructions field) support a system prompt — the hidden message that guides the model’s overall behavior.
You can design this carefully to:

  • Encourage brevity when appropriate
  • Discourage stock phrases
  • Promote a tone that aligns with your brand or personal style

Stay Aware of the Biases
Finally, just having this knowledge makes you a better user:

  • When the model sounds overly agreeable → you’ll know why.
  • When it gets verbose → you’ll know it’s optimizing for reward, not human conversation.
  • When it uses “It’s not X, it’s Y” → you can smile and think, there’s that RLHF artifact again.

And if you’re building GPTs or writing prompts for serious use cases (brand content, user-facing chatbots, creative projects), this awareness helps you design far more intentional outputs.


From Curious User to Skilled Shaper

You don’t have to go full custom GPT builder (unless you want to).
But just knowing this helps you use language models more effectively.

Here’s what I suggest:

  • If you’re new to this → just try using ChatGPT with this lens. See if you can spot the patterns.
  • If you’re curious → try one of my tuned GPTs, like Sentinel-16 Lite. You’ll hear the difference in tone.
  • If you’re ready to build → start experimenting with system prompts and custom instructions. Eventually, consider building your own knowledge base.

The big takeaway:
Everything you hear in your AI’s voice is a reflection of how it was trained and tuned. Now that you know — you can shape it.



If you’d like to explore this more:

  • Try Sentinel-16 Lite → see how tuned tone feels.
  • Follow my work here → I’ll be writing more about how to design better AI conversations.
  • Want to connect or collaborate? → Email me at hi@jmthecreative.com.

Final Note:
→ This post is meant to empower you — not hype AI, not fearmonger. The more you understand how these systems actually behave, the more creative and human you can be in shaping them.