LLM Prompting Cheatsheet
Patterns that actually move model performance — not magic words.
Structure
- **Role**: start with what it is ("You are a code reviewer") — anchors tone.
- **Goal**: state the outcome in one sentence, not the steps.
- **Context**: relevant facts — project, constraints, data samples — top of prompt.
- **Format**: explicit output shape. "Return JSON with keys: a, b, c. No prose."
- **Examples (few-shot)**: 1-3 input→output pairs. This beats clever instructions.
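The checklist above can be sketched as a small prompt builder; the task, JSON keys, and the example pair are illustrative placeholders, not a fixed recipe:

```python
# Minimal sketch of the Role / Goal / Context / Format / Few-shot structure.
# All concrete values here (task, keys, examples) are assumptions for illustration.

def build_prompt(context: str, task_input: str) -> str:
    return "\n".join([
        # Role: anchors tone.
        "You are a code reviewer.",
        # Goal: one sentence, outcome not steps.
        "Goal: flag bugs in the snippet below.",
        # Context: relevant facts near the top of the prompt.
        f"Context: {context}",
        # Format: explicit output shape.
        'Return JSON with keys: "line", "issue", "fix". No prose.',
        # Few-shot: one input -> output pair beats clever instructions.
        "Example input: x = x +- 1",
        'Example output: {"line": 1, "issue": "stray operator", "fix": "x = x + 1"}',
        # The actual input goes last.
        f"Input: {task_input}",
    ])

prompt = build_prompt("Python 3.12, no third-party deps",
                      "for i in range(10) print(i)")
```

Keeping the builder as a function makes it easy to hold everything constant and vary one part (say, the format line) when testing what actually moves output quality.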
Techniques that help
- **Chain-of-thought**: "Think step-by-step" before the answer — improves multi-step reasoning.
- **Self-critique**: "Draft, then critique, then rewrite" — catches obvious errors.
- **Decomposition**: break a big task into sequential prompts — each prompt's output feeds the next.
- **Grounding**: paste relevant docs/code in context instead of assuming memorized knowledge.
- **Constraints**: "Max 100 words" / "Use only listed functions" — limits hallucination.
Anti-patterns (don't bother)
- "You are an expert" stacked 5 ways — marginal at best.
- Threats or flattery — modern models ignore these.
- Asking for sources without grounding — it'll invent URLs.
- Super-long instructions + tiny context — it'll skip instructions mid-completion.
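The decomposition technique from the list above can be sketched as a chain of calls where each output feeds the next prompt; `call_model` is a stand-in for whatever client you actually use (an assumption, not a real API):

```python
# Decomposition sketch: three small prompts instead of one big one.
# call_model is a placeholder; swap in your real model client.

def call_model(prompt: str) -> str:
    # Placeholder so the sketch runs; a real client would return model text.
    return f"<response to: {prompt.splitlines()[0]}>"

def summarize_with_decomposition(article: str) -> str:
    # Step 1: extract — small, checkable intermediate output.
    claims = call_model(f"List the key claims in the text below.\n{article}")
    # Step 2: transform — operates on step 1's output, not the raw article.
    checked = call_model(f"For each claim, note the supporting evidence.\n{claims}")
    # Step 3: format — final shape comes last, with a constraint attached.
    return call_model(f"Write a 100-word summary from the notes below.\n{checked}")
```

The win is that each intermediate result is inspectable, so a bad final answer can be traced to the step that went wrong.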
Debugging bad output
- Lower temperature if output is random / inconsistent.
- Move important constraints to the **end** of the prompt (recency bias).
- Ask it to **echo its understanding** first — catches misread instructions.
- Test with a smaller, clearly-wrong example — if it still fails, the prompt is broken.
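The "echo its understanding" check above can be sketched as a thin wrapper; again, `call_model` is a hypothetical stand-in for your client:

```python
# Sketch of the echo-understanding debugging step.
# call_model is a placeholder; replace with a real model call.

def call_model(prompt: str) -> str:
    return "<model response>"  # stand-in so the sketch runs

def with_echo_check(instructions: str) -> str:
    # Ask the model to restate the task before doing it. If the
    # restatement is wrong, fix the prompt, not the model settings.
    return call_model(
        instructions
        + "\n\nBefore answering, restate the task in one sentence "
        + "and list every output constraint you will follow."
    )
```

Reading the restatement first is cheap and often reveals that a constraint buried mid-prompt was never registered.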