What to do before you prompt
A live playbook for using AI as a thinking partner.

Right now AI feels a little like Mario Kart.
The race has already started. Every day someone discovers a new workflow, some new prompt trick, or a new way to move faster.
It’s easy to feel like everyone else is already on lap three. But after spending a lot of time working with these tools, I’ve noticed something simpler.
Most people are just at different stages of learning how the engine works. This session introduced a model I use to think about those stages.
I call it The AI Capability Ladder.
This post expands on that idea and includes examples, prompts, and resources from the session.
If you’re trying to get better at using AI as a thinking partner, this is a good place to start.
The AI Capability Ladder
Most discussions about AI focus on tools, models, or clever prompts.
After spending a lot of time working with these systems, a different pattern starts to appear. People tend to move through a few recognizable stages as they learn how to work with them.
The gap between someone who struggles with AI and someone who gets a lot out of it usually isn’t technical ability. What separates them is how far along they are in that progression.
1. Driver
Almost everyone starts here. A prompt goes in and the model produces an answer.
For a while, that’s more than enough. The responses feel surprisingly helpful, sometimes even impressive. A question turns into an outline, a paragraph, or a handful of ideas.
At this stage the interaction is simple and reactive. The model does the heavy lifting and the user reacts to whatever comes back.
It works, but the ceiling appears quickly.
2. Curious Driver
Eventually curiosity takes over.
Instead of treating the model like a search box, people begin to experiment. A prompt gets rewritten.
Context is added. The same question is asked in a slightly different way.
The interaction begins to resemble a conversation.
Responses get refined. The model is asked to critique its own work. Prompts evolve mid-stream as the user nudges the output toward something better.
This is usually the moment when people realize the system has far more flexibility than it first seemed.
3. Mechanic
With enough experimentation another shift happens.
The model stops feeling mysterious.
Patterns begin to emerge. Some prompts consistently lead to weak answers. Others reliably produce sharper thinking. Context that once seemed optional turns out to matter a lot.
At this stage the interaction becomes deliberate. Instead of hoping for useful output, the user starts guiding the process more intentionally.
The model still does the work, but the direction of that work becomes clearer.
4. Engine Builder
Eventually the interaction moves beyond individual prompts.
Attention shifts from the answer to the process that produces the answer.
Reusable prompts begin to appear. A workflow emerges: generate ideas, critique them, and refine the strongest ones. Prompts get chained together so the output of one step feeds the next.
At this stage the emphasis is no longer on extracting a single good response. The focus moves toward building processes that consistently produce strong results.
This is where ideas like meta prompting enter the picture. Instead of asking the model for an answer, the model helps design the system that generates better answers.
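The generate–critique–refine workflow described above can be sketched as a small prompt chain. This is a minimal sketch, not a specific library's API: `llm()` here is a stub standing in for whichever model call you actually use (OpenAI, Anthropic, a local model, etc.), so the chain's structure is visible and runnable on its own.

```python
# Minimal sketch of prompt chaining: the output of each step
# becomes part of the input to the next step.

def llm(prompt: str) -> str:
    # Stub so the sketch runs; replace with a real model API call.
    return f"[model response to: {prompt[:40]}]"

def generate_critique_refine(topic: str) -> str:
    # Step 1: generate ideas.
    ideas = llm(f"Generate 5 ideas about {topic}.")
    # Step 2: critique them, feeding step 1's output back in.
    critique = llm(f"Critique the weaknesses in each of these ideas:\n{ideas}")
    # Step 3: refine, using both previous outputs as context.
    refined = llm(
        f"Given these ideas:\n{ideas}\n"
        f"and this critique:\n{critique}\n"
        "Rewrite the strongest idea, fixing the weaknesses identified."
    )
    return refined

print(generate_critique_refine("AI-powered onboarding"))
```

The point of the structure is that each step has a narrow job, which tends to produce sharper output than asking for the final answer in one shot.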
Why This Model Matters
Many people assume the gap in AI usage comes from intelligence, access, or some hidden technical trick.
In practice the difference is usually much simpler.
People are just standing on different rungs of the same ladder.
Once that becomes clear, the whole topic feels less mysterious. Progress doesn’t require mastering everything at once. Moving up one rung at a time is enough.
The biggest difference between beginners and power users is usually time spent under the hood.
Key Ideas From the Session
Prompting shouldn’t be one-shot: Most useful AI interactions involve iteration and refinement.
Context dramatically improves results: The model performs better when it understands the problem clearly.
Structure improves thinking: Asking AI to reason through structured frameworks leads to better output.
Perspective changes answers: Different lenses produce different insights.
Prompt systems beat individual prompts: Reusable workflows compound value over time.
Prompt Examples
Example #1:
Basic prompt:
Give me startup ideas.

Improved prompt:
Give me 10 startup ideas with these constraints:
- B2B SaaS
- solves a painful workflow
- AI meaningfully reduces labor
- monetization is obvious
- the initial wedge is narrow enough to test quickly

Example #2:
Prompt improvement loop:
Write three positioning options for this product.
Now critique the weaknesses in each one.
Now rewrite the strongest version.

Example #3:
Meta prompt:
Design a reusable prompt that helps founders generate high-quality startup ideas.
The prompt should:
- ask for key context first
- force constraints
- output ideas in a structured format founders can evaluate

Exercises to Try
Exercise #1
Take a prompt you normally use and do three things:
Add more context than you think you need.
Ask the model to critique its response.
Ask it to improve the response based on that critique.
Notice how the output changes.
Exercise #2
Ask AI to explain its reasoning using a structure such as:
First principles
Pros and cons
Hidden assumptions
Compare the result to a normal answer.