Meta prompting is a critical skill.
Think about this for a second...
Why would you want to write each and every one of your prompts by hand in a notes app or directly in the chat, when...
You could describe your intention, goals, and vision to the model and have it generate a relevant, tailored prompt based on your needs? Sounds too good to be true, huh?
Crafting prompts on the fly is time-consuming; that's how I discovered the idea of meta prompting. The concept: when you dive into the LLM to do your work, you first work with the model to craft prompts or other constructs based on how you want your system to respond. That's really the name of the game here: getting the most hyper-relevant responses.
It's really what we all want out of our LLMs, right?
So, meta prompting... I started with role-based prompts (the RIPE framework) inside the chats I was already having with the model. That changed the behavior: I could describe what I wanted to accomplish, and the model would turn it into a new prompt I could inject into another conversation with a focused theme.
Then...I took that a step further by building a custom GPT, treating the prompt (the construct) as a reusable element. I placed the construct into ChatGPT's custom instructions, then used the same method for a Custom GPT. This is all very simple to do and takes about 15-20 minutes (maybe less).
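If you want a feel for what "construct as a reusable element" looks like outside the ChatGPT settings screen, here's a minimal sketch using the OpenAI Python SDK: the same text you would paste into custom instructions is sent as a system message, and the model's only job is to turn a plain-language brief into a ready-to-use prompt. The construct wording and the model name are my assumptions, not a prescription.

```python
# Minimal sketch: a reusable meta-prompting construct sent as a system message.
# The construct text and model name are illustrative; swap in your own.
from openai import OpenAI

META_PROMPT_CONSTRUCT = """\
Role: You are a prompt architect.
Instruction: When I describe a goal, return a single tailored prompt I can paste
into a fresh conversation. Ask one clarifying question first if the goal is vague.
Parameter: Keep the generated prompt under 300 words and structure it as
Role / Instruction / Parameter / Example (RIPE).
"""

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name
    messages=[
        {"role": "system", "content": META_PROMPT_CONSTRUCT},
        {"role": "user", "content": "I need a prompt that turns rough meeting notes into a client-ready project brief."},
    ],
)
print(response.choices[0].message.content)
```

The exact same construct text works pasted into ChatGPT custom instructions or a Custom GPT's configuration; the API version just makes it scriptable.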
Once I had a Custom GPT built specifically for meta prompting, I used it over several months to develop a modular GPT knowledge base in Craft (https://craft.do/), creating nested layers of informational constructs (sounds crazy, right?). The result is a 100k+ word knowledge source and cognitive OS for LLMs that delivers emotionally intelligent responses, takes on persona roles, adapts with each prompt, and uses constructs that kick off structured reflection.
I could go on and on, but in a nutshell this has transformed the way I work across strategy and design. It can work for your business too, regardless of industry.
The reason is that every company has knowledge around compliance, standard operating procedures, and various tools and templates. You could be a lawyer, doctor, psychiatrist, software developer, marketer, or content creator. This method is about building out your knowledge in a way that doesn't rely on raw human memory.
The biggest benefit of this system is being able to take raw ideas and shape them into tangible deliverables: reports, data visualizations, proposals, decks, websites, apps, graphic designs, videos, and any other constructs you can think to develop. Knowledge bases are to LLMs what design systems are to design/dev tools (like Figma or Framer), and meta prompting streamlines the process of building them.
Want to see the reflective work in action? Try the free GPT I created as a demo on the GPT Store and see for yourself → (Construct: Mystic Orb)
I'll leave you with this GPT reflection on Meta Prompting...
GPT Reflection: Meta-Prompting as Knowledge Base Fuel
When people think of “better prompting,” they often think of clever tricks — the one perfect sentence that unlocks magical outputs. But in reality, the ceiling of a one-off clever prompt is low. If you want your AI to think with you, you need to stop chasing one-offs and start building constructs.
That’s where meta-prompting comes in.
Meta-prompting means prompting about your prompts — zooming out to look at the conversation architecture, the pacing, the tone, the alignment with your goals, and the quality of the outputs. Instead of reacting to bad responses with endless micro-corrections (“change this word,” “shorten this sentence”), you step back and ask:
- What’s not working in the system’s responses?
- How can I set global rules or guardrails that prevent this from happening again?
- How can this be turned into a reusable structure so I never have to explain it twice?
This shift is powerful because it eliminates wasted energy. Without meta-prompting, you find yourself constantly re-introducing yourself, re-explaining your work, or retraining the model on the same context over and over. With it, you design containers for your knowledge — so every session starts at a higher baseline.
When meta-prompting is fluid (especially with a voice-first approach), you can reflect out loud, capture your raw thinking, and then translate it into a construct that improves your system's future outputs. Over time, this becomes a feedback loop (sketched in code right after this list):
- Reflect → Spot pitfalls, inefficiencies, or tone mismatches.
- Capture → Turn those observations into a structured construct.
- Integrate → Add the construct to your knowledge base.
- Activate → Trigger it instantly later through a slash command or mode.
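If you ever move this loop out of the chat window and into your own tooling, the Integrate and Activate steps can be as simple as a registry of constructs keyed by slash command. Here's a rough sketch under that assumption; the commands, construct text, and function names are hypothetical, not part of any product's API.

```python
# Illustrative sketch of Integrate -> Activate: captured constructs live in a
# registry and are pulled into the system prompt by a slash command.

BASE_INSTRUCTIONS = "You are my strategy and design assistant."

# Integrate: each captured construct is stored under a slash command.
CONSTRUCTS = {
    "/reflect": "Run a structured reflection: surface one pitfall, one win, one next step.",
    "/ripe": "Turn the user's last idea into a RIPE construct (Role, Instruction, Parameter, Example).",
}

def build_system_prompt(user_message: str) -> str:
    """Activate: if the message starts with a known slash command, append that construct."""
    command = user_message.split()[0] if user_message.strip() else ""
    construct = CONSTRUCTS.get(command)
    if construct:
        return f"{BASE_INSTRUCTIONS}\n\n{construct}"
    return BASE_INSTRUCTIONS

print(build_system_prompt("/reflect How did today's workshop go?"))
```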
The outcome?
You stop firefighting output errors and start architecting an AI partner that mirrors your way of thinking, speaking, and solving problems. You reduce friction, preserve strategic thinking, and make your system scalable without code.
Construct: RIPE Insight Capture Prompt
Role
You are a Knowledge Base Systems Architect and Insight Capture Facilitator. Your role is to guide the user through a structured, one-at-a-time reflection to uncover a single actionable insight, and then translate that insight into a reusable RIPE construct for their knowledge base.
Instruction
You will run the Insight Capture Workshop, which is a short, guided reflection designed to:
- Help the user articulate an experience, challenge, or idea.
- Surface one high-value, actionable insight from that exploration.
- Turn that insight into a structured RIPE construct.
The workshop flow has 3–5 progressive questions, asked one at a time to allow depth and clarity. Use visual progress cues (e.g., “Step 2 of 4 ✅”) before each question so the user always knows where they are in the process.
Flow:
- Warm-up: Context and framing.
- Exploration: Unpack the situation, challenge, or idea.
- Clarification: Identify the most important point or takeaway.
- Application: Envision how this insight could be turned into a reusable construct.
- (Optional) Refinement: Add constraints, tone, or style for precision.
At the end of the workshop, generate a RIPE construct in this format:
- Role: Who/what the system should be in relation to the task.
- Instruction: What the system should do in clear, actionable terms.
- Parameter: Constraints, rules, or boundaries to follow.
- Example: A detailed sample prompt or output (around 800 characters) that models the desired depth, tone, and structure.
Parameter
- One question at a time — no multi-question strings.
- A single primary insight per session.
- Keep tone human, clear, and encouraging.
- Treat all user-provided information as confidential.
- The Example section should demonstrate complexity and usefulness — not a minimal sample.
Example
Role: You are a senior product strategy consultant advising a mid-stage SaaS company that’s struggling with user churn after the first 30 days.
Instruction: Design a 4-part re-engagement campaign that uses in-app messages, personalized emails, and lightweight tooltips to guide users toward underused but high-value features. Include copywriting, triggers, and measurable success criteria for each step.
Parameter: All communication must stay under 100 words per touchpoint, use the brand’s friendly but expert tone, and avoid technical jargon. Success is defined as a 20% increase in feature adoption within 60 days.
Example: “Hi [First Name]! We noticed you haven’t tried our Quick Automations yet — they save our average customer 4 hours a week. Want to see how? Click here for a 90-second walkthrough…”
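If you want to keep constructs like the one above in a knowledge base your own scripts can read, one lightweight option is to store the four RIPE fields as structured data and render them into a single prompt block on demand. This is a sketch under that assumption; the class, field, and method names are mine, not part of the RIPE framework itself.

```python
# Sketch: a RIPE construct as structured data, rendered into one prompt string.
# Names are illustrative, not canonical.
from dataclasses import dataclass

@dataclass
class RIPEConstruct:
    role: str
    instruction: str
    parameter: str
    example: str

    def render(self) -> str:
        """Flatten the four fields into one block ready to paste or send."""
        return (
            f"Role: {self.role}\n"
            f"Instruction: {self.instruction}\n"
            f"Parameter: {self.parameter}\n"
            f"Example: {self.example}"
        )

insight_capture = RIPEConstruct(
    role="Knowledge Base Systems Architect and Insight Capture Facilitator",
    instruction="Guide the user through a one-question-at-a-time reflection and return one RIPE construct.",
    parameter="One question at a time; one primary insight per session; keep the tone human and encouraging.",
    example="Step 1 of 4 ✅: What situation, challenge, or idea do you want to unpack today?",
)
print(insight_capture.render())
```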
Give meta prompting a shot and let me know how it goes. You can shoot me an email at hi@jmthecreative.com; I'm curious to hear what you'd build with meta prompting skills.
My connection requests are open on LinkedIn :)