The Spectrum of Specificity
Prompts can range from highly directive to open-ended. The right level depends on your use case.

Directive Prompts (High Specificity)
Use when you want consistent, predictable behavior and know exactly what should happen.

- Documentation generation with specific formats
- Code review with defined standards
- Test generation following established rules
- Compliance checks against defined requirements
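For instance, a directive prompt for documentation generation might spell out the exact steps and output format (a hypothetical sketch; the step wording is illustrative):

```
1. Profile the model: row count, column names, and data types.
2. Write a one-paragraph description covering the model's purpose and grain.
3. For each column, add a one-sentence description.
4. Output the result as a YAML block matching the existing schema file format.
```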
Open-Ended Prompts (Low Specificity)
Use when you want the agent to adapt to different situations and make intelligent decisions.

- Code review for varied issues
- Investigating data quality problems
- Impact analysis
- Root cause diagnosis
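By contrast, an open-ended prompt for root cause diagnosis might state only the goal and leave the approach to the agent (hypothetical sketch):

```
Users report that yesterday's revenue numbers look wrong. Investigate the
relevant models, form a hypothesis about the root cause, and report your
findings with supporting evidence. Use whatever tools you judge appropriate.
```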
Numbered Steps vs. Natural Language
Use Numbered Steps When:
✅ The workflow is sequential

Use Natural Language When:
✅ The agent needs flexibility

Common Patterns
Pattern 1: Analyze → Decide → Act
Many successful agents follow this three-phase structure: first gather and analyze the relevant context, then decide what (if anything) should change, and only then act on that decision.

Pattern 2: Check → Branch
Use conditional branching for different scenarios, for example: “If the change touches a public model, run the full downstream checks; otherwise, validate only the affected files.”

Pattern 3: Detect → Investigate → Remediate
For responding to problems: detect that something is wrong, investigate to find the cause, then remediate and report what was done.

Anti-Patterns to Avoid
❌ Anti-Pattern 1: Assuming Context — the prompt relies on background knowledge the agent doesn’t have.
❌ Anti-Pattern 2: Vague Quality Criteria — “make it good” gives the agent nothing concrete to check against.
❌ Anti-Pattern 3: Missing Error Handling — the prompt never says what to do when a step fails.
❌ Anti-Pattern 4: Doing Too Much — one prompt tries to cover several distinct jobs at once.
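To illustrate the vague-criteria anti-pattern, compare an unfocused instruction with one that names concrete checks (a hypothetical sketch; `STYLE.md` is an assumed file name):

```
Bad:  Review this code and make sure it's high quality.

Good: Review this code for: (1) unhandled error paths, (2) queries without
      a LIMIT in exploratory contexts, (3) column names that don't match
      the naming convention in STYLE.md. Flag each finding with the file
      and line where it occurs.
```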
Debugging Prompts That Don’t Work
Strategy 1: Add Explicit Reasoning
Ask the agent to explain its thinking, for example: “Before making any changes, list the files you plan to edit and why.” Seeing the reasoning usually reveals where the prompt is being misread.

Strategy 2: Start Minimal, Add Incrementally
If a complex prompt isn’t working, simplify: strip it down to the single most important instruction, confirm the agent handles that, then add requirements back one at a time.

Strategy 3: Provide Examples
Show the agent what you want: a single example of well-formed output often does more than a paragraph of description.

Progressive Refinement Examples
Example 1: Documentation Updater
Version 1: Too Vague

Example 2: Breaking Change Detector
Version 1: Too Broad

Best Practices Summary
Start with the outcome
Begin prompts with what success looks like, not implementation details.

Good: “Generate comprehensive documentation that includes purpose, grain, and column meanings.”

Bad: “Query the table, count rows, list columns, write YAML…”
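An outcome-first prompt opening might look like this (hypothetical sketch; the `models/marts` path is illustrative):

```
Goal: every model in models/marts has a description covering its purpose,
grain, and the meaning of each column. Work through the models and fill in
whatever is missing.
```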
Be explicit about edge cases
Tell the agent how to handle unusual situations.

Good: “If the model is >10M rows, sample 10K rows for profiling. If it’s a view, note that in the description.”

Bad: Assume the agent knows what to do with large models.
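Edge-case handling like this can go into the prompt verbatim as a checklist; the third line below is an illustrative fallback, not from the original:

```
- If the model has more than 10M rows, sample 10K rows for profiling.
- If the model is a view, note that in the description.
- If profiling fails entirely, write the description from the SQL alone and
  flag it for human review.
```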
Specify output format
Show examples of what you want the agent to produce.

Good: Include example YAML, example PR comments, example Slack messages.

Bad: Say “create appropriate output” without examples.
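For example, the prompt could embed a target YAML snippet for the agent to imitate (a hypothetical sketch; the model and column names are illustrative):

```yaml
models:
  - name: orders
    description: "One row per order. Source of truth for order-level revenue."
    columns:
      - name: order_id
        description: "Primary key; unique per order."
      - name: ordered_at
        description: "Timestamp the order was placed (UTC)."
```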
Use tools effectively
Reference specific tools when you need them.

Good: “Use retrieve_metadata to profile columns. Use grep to find downstream references.”

Bad: Assume the agent will choose the right tool.
Test incrementally
Start simple, validate, then add complexity.

Good: Build the prompt step by step, testing each addition.

Bad: Write a huge prompt all at once without testing.
Next Steps
Creating Agents
Complete guide to agent configuration
Testing & Debugging
Test and refine your prompts locally
Tools & Permissions
Understand what tools agents can use