The Prompt Lab — Negative Space Prompting
Learn the negative space prompting technique with concrete before/after examples.
Negative Space Prompting
The Technique
Negative space prompting means explicitly telling the model what not to do, produce, or include — alongside what you want. It works because LLMs default toward the statistical center of training data, which means generic phrasing, hedging language, and predictable structure unless you actively wall those off.
The Naive Prompt
Write a subject line for a cold email to a VP of Engineering
at a mid-size SaaS company. We're selling a developer
observability tool that reduces incident response time.
Why It Falls Short
Without guardrails, the model gravitates toward the most common patterns in its training data — which for cold email subject lines means tired formulas like “Reduce Incident Response Time by 40%” or “Quick question, [First Name].” You get output that’s technically correct but indistinguishable from every other cold email flooding that VP’s inbox. The model has no way to know what you’ve already tried or what counts as “too salesy” in your context.
The Improved Prompt
Write a subject line for a cold email to a VP of Engineering
at a mid-size SaaS company. We're selling a developer
observability tool that reduces incident response time.
Do NOT:
- Use percentage claims or ROI figures (e.g., "Cut MTTR by 40%")
- Use "quick question" or any faux-casual opener
- Mention our company name or product name
- Use exclamation points
- Frame it as a pitch — it should feel like a peer observation,
not a sales hook
The subject line should be under 8 words and make a VP of
Engineering slightly curious, not immediately skeptical.
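If you run this pattern across many tasks, the exclusion list is worth encoding once rather than retyping. Here's a minimal Python sketch of that idea; the `build_negative_space_prompt` helper and its signature are illustrative, not part of any library:

```python
def build_negative_space_prompt(task: str, exclusions: list[str],
                                positive_constraint: str = "") -> str:
    """Assemble a prompt that pairs the task with an explicit Do NOT list."""
    lines = [task.strip(), "", "Do NOT:"]
    lines += [f"- {item}" for item in exclusions]
    if positive_constraint:
        lines += ["", positive_constraint.strip()]
    return "\n".join(lines)

prompt = build_negative_space_prompt(
    "Write a subject line for a cold email to a VP of Engineering.",
    [
        "Use percentage claims or ROI figures",
        "Use 'quick question' or any faux-casual opener",
        "Use exclamation points",
    ],
    "Keep it under 8 words.",
)
print(prompt)
```

The payoff is that your institutional "what not to do" knowledge lives in one reusable list instead of being rewritten into every prompt by hand.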
Why It Works
The exclusion list collapses the model’s output space from “everything plausible” to “the good stuff that’s left.” By naming the specific failure modes — percentage claims, faux-casual openers, product name drops — you’re essentially describing the local minimum the model would otherwise land on, and forcing it away. The framing constraint at the end (“peer observation, not a sales hook”) does positive work too, but the negative constraints are what prevent regression to the mean.
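The same exclusion list can also double as a post-hoc check on whatever the model returns. This is a minimal sketch under that assumption; the `violates_exclusions` function and its rules are illustrative, not a standard API:

```python
import re

def violates_exclusions(subject_line: str) -> list[str]:
    """Return which of the prompt's negative constraints a candidate breaks."""
    violations = []
    if re.search(r"\d+\s*%", subject_line):          # e.g. "Cut MTTR by 40%"
        violations.append("percentage claim")
    if "quick question" in subject_line.lower():     # faux-casual opener
        violations.append("faux-casual opener")
    if "!" in subject_line:
        violations.append("exclamation point")
    if len(subject_line.split()) >= 8:               # must be under 8 words
        violations.append("8 words or more")
    return violations

print(violates_exclusions("Cut MTTR by 40%!"))
print(violates_exclusions("Your on-call rotation is hiding something"))
```

Checks like these make the negative constraints enforceable, not just suggestive: a candidate that regresses to the mean gets flagged before it reaches an inbox.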
A prompt like this on Claude Sonnet 4.6 or GPT-5.4 will typically produce subject lines like “Your on-call rotation is hiding something” or “Most teams don’t see this until it’s too late” — lines that don’t read as AI-generated boilerplate.
When to Use This
- Creative and copy tasks where “average” is actively harmful — ads, subject lines, headlines, product descriptions, or any context where generic output costs you money or credibility.
- Recurring workflows where you’ve already run the naive prompt and know exactly what you don’t want — negative space prompting is the fastest way to encode that institutional knowledge without rewriting from scratch.
- Constrained formats where the model has strong priors that fight you: structured data extraction, legal language, or any domain where training data pulls hard toward a specific style you’re trying to avoid.
Next week in The Prompt Lab: a technique for getting models to hold uncertainty correctly instead of confabulating confidently.