I hear and see people's struggles with prompting LLMs like ChatGPT...

I often hear:

    1. It keeps drifting (exceeding the context window)
    2. It hallucinates (makes up inaccurate narratives)
    3. It uses verbose language (fluffy, wordy responses)
    4. It uses odd phrasing (it's not 'x', it's 'y')
    5. Its responses are too agreeable (you're right, my mistake!)

These things are all true, and that's why prompt engineering really matters.

The thing is...

No matter how "intelligent" the models become (e.g. GPT-3.5 Turbo > GPT-4o > GPT-5), crafting prompts that provide the right amount of context still matters for shaping the model's behavior. UX and conversational design principles come into play a lot here.

Here's something that a lot of people don't understand about interacting with these models: it's not about knowing how to code or being technical, and you don't even have to speak fluent English (you can do this in whatever language you speak natively).

The biggest thing people miss is that your prompts really do shape the model's behavior and your interactions with it. The approaches I share are also model agnostic, so you can apply the same principles (or copy/paste the same constructs) in Gemini, Claude, Mistral, Ollama, or whatever LLM you prefer.

I'll dive into more specifics in later posts, but for now I want to leave you with what I think is a low-lift, high-leverage construct: a meta-prompting reflection you can use to craft new prompts or constructs for your knowledge base system, or drop into GPT threads for quick re-training.

This was one of the first solid prompting frameworks I found, and I think it will remain a staple approach for as long as LLMs exist. Personally, I think it's timeless.

Here's the stepping-stone version of the concept that you can apply:

    1. Craft a role-based prompt for a knowledge architect focused on meta-prompting
    2. Use that thread (or better, use the prompt in your custom instructions or a custom GPT) to create new prompts to inject into your prompt engineering sessions (see the sketch after this list).
    3. Continue to explore and expand on your constructs
    4. Turn your constructs into a central knowledge base for fine-tuning models (or a custom GPT)
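
None of this requires code, but if you'd like to wire steps 1 and 2 into a script, here's a minimal sketch assuming the OpenAI Python SDK. The role wording, the model name, and the draft_prompt_construct helper are illustrative, not a prescribed setup, and the same message structure carries over to Claude, Gemini, Mistral, or Ollama.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

    # Step 1: the role-based prompt for a knowledge architect focused on meta-prompting.
    ARCHITECT_ROLE = (
        "You are a Knowledge Base Systems Architect focused on meta-prompting. "
        "You help me design reusable, role-based prompt constructs for my knowledge base."
    )

    # Step 2: use that role-primed thread to draft new prompts to inject into your sessions.
    def draft_prompt_construct(goal: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o",  # illustrative model choice; swap in whatever you use
            messages=[
                {"role": "system", "content": ARCHITECT_ROLE},
                {"role": "user", "content": f"Draft a reusable, role-based prompt construct for: {goal}"},
            ],
        )
        return response.choices[0].message.content

    # The output becomes a new construct you can refine and file into your knowledge base (steps 3-4).
    print(draft_prompt_construct("turning user research interviews into actionable insights"))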

And really, that's the best starting point I can share with anyone. It just takes:

    1. Patience with the models
    2. Discernment to course-correct the model
    3. Pattern recognition
    4. Willingness to stay curious

Role-based prompts will completely change the way you interact with LLMs. Thank me later ;)


Mirror Reflection: The Strategic Edge of Role-Based Prompts

Starting with a role-based prompt is one of the fastest ways to make GPT feel like a trusted collaborator instead of a generic assistant. You’re not just “asking a question” — you’re casting the system in a role that mirrors your expertise, goals, and working style. From that moment on, every output is shaped by that role.

For example, in my work as a fractional design partner, I might set the stage like this:

“You are a design executive, and we’re collaborating on a business venture to help tech entrepreneurs in startups and enterprise build and launch new products, reduce workflow bottlenecks, retain users, and drive growth.”

Once the role is set, I layer in:

  • Instructions → What the system should actually do for me.
  • Parameters → Guardrails for what it should or shouldn’t do.
  • Examples → A model of the tone, style, and depth I want.

This is the RIPE framework in action — Role, Instruction, Parameter, Example — and it works especially well in voice-first workflows. Speaking the role and structure aloud, getting immediate output, and iterating in real time creates a fast, frictionless loop that improves with every session.
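
If you keep your constructs in code or a notes system rather than only in chat threads, the same layering can be captured as a tiny data structure. This is a minimal sketch in Python; the RipePrompt class, the field contents, and the render format are illustrative assumptions you can adapt to however you store your prompts.

    from dataclasses import dataclass

    @dataclass
    class RipePrompt:
        """One RIPE construct: Role, Instruction, Parameter, Example."""
        role: str
        instruction: str
        parameter: str
        example: str

        def render(self) -> str:
            """Assemble the four layers into one prompt you can paste into any LLM."""
            return (
                f"Role:\n{self.role}\n\n"
                f"Instruction:\n{self.instruction}\n\n"
                f"Parameter:\n{self.parameter}\n\n"
                f"Example:\n{self.example}"
            )

    design_partner = RipePrompt(
        role=("You are a design executive, and we're collaborating on a business venture "
              "to help tech entrepreneurs build and launch new products."),
        instruction="Review the onboarding flow I describe and propose three ways to reduce workflow bottlenecks.",
        parameter="Keep each proposal under 100 words, in a friendly but expert tone, with no technical jargon.",
        example="Proposal 1: Collapse sign-up to email only, then profile users progressively over the first week...",
    )

    print(design_partner.render())

Rendering to a single string keeps the construct portable: the same text works in a chat window, in custom instructions, or as an API system message.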

Without this structure, GPT often produces “very ChatGPT” outputs — generic, low-effort results that people can spot instantly. That hurts credibility, undermines trust, and wastes time. Worse, when GPT lacks clear context, it fabricates its own. In high-stakes business scenarios like legal, compliance, or strategic planning, that can lead to misinformation, hallucinated details, and costly errors.

A role-based prompt is more than a formatting habit — it’s a business safeguard. It anchors the conversation in relevance and tone, reduces endless correction loops, and ensures alignment with your strategic goals. It’s not about making GPT “better” in the abstract — it’s about making it better for you, your workflows, and your business outcomes.


Mod: RIPE Insight Capture Prompt

Role

You are a Knowledge Base Systems Architect and Insight Capture Facilitator. Your role is to guide the user through a structured, one-at-a-time reflection to uncover a single actionable insight, and then translate that insight into a reusable RIPE mod for their knowledge base.

Instruction

You will run the Insight Capture Workshop, which is a short, guided reflection designed to:

    1. Help the user articulate an experience, challenge, or idea.
    2. Surface one high-value, actionable insight from that exploration.
    3. Turn that insight into a structured RIPE mod.

The workshop flow has 3–5 progressive questions, asked one at a time to allow depth and clarity. Use visual progress cues (e.g., “Step 2 of 4 ✅”) before each question so the user always knows where they are in the process.

Flow:

    1. Warm-up: Context and framing.
    2. Exploration: Unpack the situation, challenge, or idea.
    3. Clarification: Identify the most important point or takeaway.
    4. Application: Envision how this insight could be turned into a reusable mod.
    5. (Optional) Refinement: Add constraints, tone, or style for precision.

At the end of the workshop, generate a RIPE mod in this format:

  • Role → Who/what the system should be in relation to the task.
  • Instruction → What the system should do in clear, actionable terms.
  • Parameter → Constraints, rules, or boundaries to follow.
  • Example → A detailed sample prompt or output (around 800 characters) that models the desired depth, tone, and structure.

Parameter

  • One question at a time — no multi-question strings.
  • A single primary insight per session.
  • Keep tone human, clear, and encouraging.
  • Treat all user-provided information as confidential.
  • The Example section should demonstrate complexity and usefulness — not a minimal sample.

Example

Role:

You are a senior product strategy consultant advising a mid-stage SaaS company that’s struggling with user churn after the first 30 days.

Instruction:

Design a 4-part re-engagement campaign that uses in-app messages, personalized emails, and lightweight tooltips to guide users toward underused but high-value features. Include copywriting, triggers, and measurable success criteria for each step.

Parameter:

All communication must stay under 100 words per touchpoint, use the brand’s friendly but expert tone, and avoid technical jargon. Success is defined as a 20% increase in feature adoption within 60 days.


Try this method and let me know how it works for you. Feel free to shoot me an email at hi@jmthecreative.com - I'd love to hear your thoughts and how it's changed the way you interact with LLMs (and how they interact with you).

We can also connect on LinkedIn and chat about AI systems :)