I hear and see people's struggles with prompting LLMs like ChatGPT...
I often hear:
- It keeps drifting (exceeding the context window)
- It hallucinates (makes up inaccurate narratives)
- It uses verbose language (fluffy, wordy responses)
- It uses odd phrasing (it's not 'x', it's 'y')
- Its responses are too agreeable (you're right, my mistake!)
These things are all true, and that's why prompt engineering really matters.
The thing is...
No matter how "intelligent" the models become (e.g., GPT-3.5 Turbo → GPT-4o → GPT-5), crafting prompts that provide the right amount of context remains essential for shaping the model's behavior. UX and conversational design principles play a big role here.
Here's something a lot of people don't understand about interacting with these models: it's not about knowing how to code or being technical, and you don't even have to speak fluent English (you can do this in whatever native language you speak).
The biggest mark people miss completely is that your prompts really do shape the behavior and interactions you have with the model. The approaches I share are model-agnostic, so you can apply the same principles (or copy/paste the same constructs) to Gemini, Claude, Mistral, Ollama, or whatever LLM you prefer.
I'll dive into more specifics in later posts, but for now I want to leave you with what I think is a low-lift, high-leverage construct: a meta-prompting reflection for crafting new prompts or constructs for your knowledge base system, or for GPT threads that need quick re-training.
This was one of the first solid prompting frameworks I learned, and I think it will remain a staple approach for as long as LLMs exist; personally, I think it will be timeless.
Here's the stepping-stone version of the concept that you can apply:
- Craft a role-based prompt for a knowledge architect focused on meta-prompting
- Use that thread (or better, put the prompt in your custom instructions or a custom GPT) to create new prompts to inject into your prompt engineering sessions.
- Continue to explore and expand on your constructs
- Turn your constructs into a central knowledge base for fine-tuning models (or custom GPT)
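The workflow above can be sketched in code. This is a minimal illustration, assuming nothing beyond the post itself: the role text, function name, and wording are my own placeholder choices, not a fixed standard, and the returned string is simply what you would paste into your LLM thread.

```python
# Hypothetical sketch of the meta-prompting loop described above.
# The role wording and the function name are illustrative assumptions.

META_PROMPT_ROLE = (
    "You are a Knowledge Architect focused on meta-prompting. "
    "Your job is to critique and refine prompts so they give a model "
    "the right amount of context, constraints, and examples."
)

def build_refinement_request(draft_prompt: str, goal: str) -> str:
    """Wrap a draft prompt in a meta-prompting request (step 2 above)."""
    return (
        f"{META_PROMPT_ROLE}\n\n"
        f"Goal: {goal}\n\n"
        "Draft prompt to refine:\n"
        f"---\n{draft_prompt}\n---\n\n"
        "Return an improved version, then list the changes you made and why."
    )

# The refined prompt the model returns becomes a new construct
# you can store and expand in your knowledge base (steps 3 and 4).
request = build_refinement_request(
    draft_prompt="Summarize this meeting transcript.",
    goal="Produce action-item summaries for a product team.",
)
print(request)
```

The same wrapper works unchanged whether you paste the result into ChatGPT, Claude, Gemini, or a local model, which is what makes the construct model-agnostic.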
And really, that's the best start I can share with anyone on how to get going. It just takes:
- Patience with the models
- Discernment to course-correct the model
- Pattern recognition
- Willingness to stay curious
Role-based prompts will completely change the way you interact with LLMs. Thank me later ;)
GPT Reflection: The Strategic Edge of Role-Based Prompts
Starting with a role-based prompt is one of the fastest ways to make GPT feel like a trusted collaborator instead of a generic assistant. You’re not just “asking a question” — you’re casting the system in a role that mirrors your expertise, goals, and working style. From that moment on, every output is shaped by that role.
For example, in my work as a fractional design partner, I might set the stage like this:
“You are a design executive, and we’re collaborating on a business venture to help tech entrepreneurs in startups and enterprise build and launch new products, reduce workflow bottlenecks, retain users, and drive growth.”
Once the role is set, I layer in:
- Instructions → What the system should actually do for me.
- Parameters → Guardrails for what it should or shouldn’t do.
- Examples → A model of the tone, style, and depth I want.
This is the RIPE framework in action — Role, Instruction, Parameter, Example — and it works especially well in voice-first workflows. Speaking the role and structure aloud, getting immediate output, and iterating in real time creates a fast, frictionless loop that improves with every session.
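To make the layering concrete, here's a minimal sketch of assembling the four RIPE sections into one copy/paste-ready prompt. The function name and section labels mirror the template later in this post; everything else (the sample role, instruction, and example text) is an illustrative assumption, not a formal spec.

```python
# Hypothetical helper that joins Role, Instruction, Parameter, and
# Example sections into a single RIPE prompt string.

def build_ripe_prompt(role: str, instructions: list[str],
                      parameters: list[str], example: str) -> str:
    """Assemble the four RIPE sections in order."""
    def bullets(items: list[str]) -> str:
        return "\n".join(f"- {item}" for item in items)
    return "\n\n".join([
        f"ROLE\n{role}",
        f"INSTRUCTION\n{bullets(instructions)}",
        f"PARAMETER\n{bullets(parameters)}",
        f"EXAMPLE\n{example}",
    ])

prompt = build_ripe_prompt(
    role="You are a design executive collaborating on a product launch.",
    instructions=["Ask one focused question at a time.",
                  "Summarize my answer before moving on."],
    parameters=["Keep tone professional but conversational."],
    example="Step 1 of 5 ✅ — What's the biggest onboarding challenge?",
)
print(prompt)
```

Keeping the sections explicit like this makes it easy to iterate: you swap out one section (say, tighter parameters) and re-run, rather than rewriting the whole prompt from scratch.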
Without this structure, GPT often produces “very ChatGPT” outputs — generic, low-effort results that people can spot instantly. That hurts credibility, undermines trust, and wastes time. Worse, when GPT lacks clear context, it fabricates its own. In high-stakes business scenarios like legal, compliance, or strategic planning, that can lead to misinformation, hallucinated details, and costly errors.
A role-based prompt is more than a formatting habit — it’s a business safeguard. It anchors the conversation in relevance and tone, reduces endless correction loops, and ensures alignment with your strategic goals. It’s not about making GPT “better” in the abstract — it’s about making it better for you, your workflows, and your business outcomes.
Copy/paste this into your LLM thread/conversation:
Construct: RIPE Role-Based Prompt + Insight Capture Workshop
Prompt Name: Role-Based Insight Capture (RIPE Method)
ROLE
You are both a Knowledge Base Systems Architect and a Facilitator.
Your role is to:
- Help me refine ideas into actionable constructs for a knowledge base.
- Guide me through an Insight Capture Workshop to surface challenges, patterns, and solutions.
- Actively listen, ask one focused question at a time, and capture insights as we go.
- Provide clear visual cues of progress so I always know where we are in the process.
INSTRUCTION
- Begin by confirming the role and workshop format.
- The Insight Capture Workshop consists of three to five high-impact questions, asked one at a time.
- Use the RIPE framework: Role → Instruction → Parameter → Example.
- Each time I answer, summarize briefly before moving to the next question.
- At the end, synthesize all insights into a clear, structured construct that I can store in my knowledge base.
PARAMETER
- Ask only one question at a time.
- Use progress indicators like Step 1 of 5 ✅ so I can see where we are.
- Keep tone professional but conversational.
- Avoid generic phrasing — ensure all language ties back to my business context and goals.
- Avoid over-explaining; focus on drawing out my own thinking.
EXAMPLE
Example construct request within this flow:
“You are a Product Strategy Coach and Knowledge Base Architect. In this session, we’re running an Insight Capture Workshop focused on improving onboarding for enterprise SaaS clients.
- Step 1 of 5 ✅ — What’s the single biggest challenge in your onboarding flow?
- Step 2 of 5 ✅ — What patterns have you noticed when clients struggle?
- Step 3 of 5 ✅ — What’s worked well in the past that we can build on?
- Step 4 of 5 ✅ — What guardrails could prevent recurring issues?
- Step 5 of 5 ✅ — If this system worked perfectly, what would that look like?”
Once all answers are captured, synthesize them into:
- Key Insights (bullet points)
- Challenges (list + impact rating)
- Opportunities (list + potential gain)
- Final Construct (ready for knowledge base insertion)

---
Try this method and let me know how it works for you. Feel free to shoot me an email at hi@jmthecreative.com; I'd love to hear your thoughts and how it's changed the way you interact with LLMs (and vice versa).
We can also connect on LinkedIn and chat AI systems :)