Advanced Techniques
Go beyond basic prompts to get more reliable and higher-quality results.
Reference Existing Code
Point the AI to existing patterns in your codebase:
Look at how authentication middleware is implemented in
app/middleware/auth.py and create a similar middleware
in app/middleware/rate_limit.py that limits requests per IP.
Ask for Explanations
When you need to understand generated code:
Explain the regex pattern you used in the email validation
function and what edge cases it handles.
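A good explanation walks through the pattern piece by piece. For a pattern like the simplified one below (illustrative only, not a complete RFC 5322 validator), the edge-case commentary is the part worth asking for:

```python
import re

# Simplified email pattern: local part, "@", domain with at least one dot.
# Rejects: missing "@", missing TLD, surrounding whitespace (^ and $ anchors).
# Does NOT handle: quoted local parts, internationalized domain names.
EMAIL_RE = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$")

def is_valid_email(s: str) -> bool:
    return EMAIL_RE.match(s) is not None
```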
Use Constraints
Set boundaries to get code that fits your project:
Implement the search feature using only the standard library,
no external dependencies. Target Python 3.10+.
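A response honoring those constraints might look roughly like this (hypothetical sketch; the `search` function and its token-overlap scoring are illustrative, using only the standard library):

```python
def search(query: str, documents: list[str]) -> list[str]:
    """Rank documents by shared lowercase tokens with the query (stdlib only)."""
    q_tokens = set(query.lower().split())
    scored = []
    for doc in documents:
        score = len(q_tokens & set(doc.lower().split()))
        if score:
            scored.append((score, doc))
    # Highest overlap first; sort is stable for ties.
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for _, doc in scored]
```

Note the constraints did their job: no third-party search library, and the `list[str]` annotations require the Python 3.10+ target the prompt specified.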
Require Clarifying Questions for Ambiguous Tasks
Prevent incorrect assumptions by telling the model to ask before coding:
If any requirement is ambiguous, ask up to 3 clarifying questions
before proposing implementation details.
Use a Two-Pass Workflow
For complex changes, separate planning from implementation:
- Pass 1 (Plan): ask for approach, risks, and files to change
- Pass 2 (Implement): ask for the actual patch and tests
Example:
First, provide a plan only: architecture impact, files to edit,
and migration risk. Do not write code yet.
Ground Answers in Real Files and Line Numbers
Reduce hallucinations by requiring concrete file references:
Only use APIs and modules that already exist in this repository.
List exact files and functions with line numbers you used as references.
If something is missing, state it explicitly instead of inventing it.
Reduce Hallucinations with Evidence-First Prompts
For long documents or policy/code context, ask the model to extract evidence before giving its final answer:
First extract the most relevant quotes in <evidence></evidence>.
Then answer only using that evidence.
If evidence is insufficient, explicitly say "insufficient evidence".
This pattern is usually stronger than "just answer the question."
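On the consuming side, a small parser can pull the quoted evidence out of the response before you trust the answer (sketch; assumes the model actually emits the `<evidence>` tags requested above):

```python
import re

def extract_evidence(response: str) -> list[str]:
    """Return the contents of each <evidence>...</evidence> block in the response."""
    return [m.strip() for m in re.findall(r"<evidence>(.*?)</evidence>", response, re.DOTALL)]
```

If `extract_evidence` returns an empty list, you can treat the answer as ungrounded and retry or flag it.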
Request Tests Alongside Code
Ask for tests in the same prompt so the edge cases you care about are covered from the start:
Write a function to parse CSV files with custom delimiters,
and include unit tests covering: empty files, files with headers
only, malformed rows, and unicode content.
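A response to the prompt above might pair the parser with its tests like this (a minimal sketch; the function name and test layout are illustrative):

```python
import csv
import io

def parse_csv(text: str, delimiter: str = ",") -> list[list[str]]:
    """Parse CSV text with a custom single-character delimiter into rows."""
    if not text.strip():
        return []
    return [row for row in csv.reader(io.StringIO(text), delimiter=delimiter)]

# Unit tests covering the edge cases named in the prompt.
def test_empty_file():
    assert parse_csv("") == []

def test_header_only():
    assert parse_csv("name;age", delimiter=";") == [["name", "age"]]

def test_malformed_rows():
    # csv.reader preserves ragged rows rather than raising.
    assert parse_csv("a;b\nc", delimiter=";") == [["a", "b"], ["c"]]

def test_unicode_content():
    assert parse_csv("naïve|café", delimiter="|") == [["naïve", "café"]]
```

Listing the edge cases explicitly in the prompt is what earns you the last three tests; a vaguer request tends to produce only the happy path.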
Multi-File Changes
When a change spans multiple files, describe the full scope:
Add a "last_login" timestamp to the User SQLAlchemy model. This will need:
- An Alembic migration
- Updated model definition in app/models/user.py
- Changes to the login route handler to record the timestamp
- Updated Pydantic response schema to include the field
Ask for Self-Checks Before Final Output
Have the model verify its own output before presenting it:
Before finalizing:
1. Check for breaking API changes
2. Check for missing imports
3. Check that tests cover happy path and key edge cases
Then provide final patch and test commands.
Use Few-Shot for Output Stability
When formatting matters, examples often beat verbose instructions:
Classify each ticket as A/B/C.
Examples:
Input: "Cannot login after reset"
Output: <answer>A</answer>
Input: "Request for invoice copy"
Output: <answer>C</answer>
Include examples for common failure modes, not only happy paths.
Chain Prompts for Better Quality
For complex writing/reasoning tasks, run staged prompts:
- Draft
- Critique (against a checklist)
- Revise
Prompt chaining gives you control points and makes failures easier to debug.
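The draft, critique, and revise stages can be sketched as a chain of separate calls (hypothetical; `call_model` is a stand-in for whatever LLM client you use, shown here as a stub):

```python
def call_model(prompt: str) -> str:
    """Placeholder for a real LLM API call; echoes the prompt for this sketch."""
    return f"[model output for: {prompt[:40]}]"

def chained_write(task: str, checklist: str) -> str:
    """Run draft -> critique -> revise as three separate prompts."""
    draft = call_model(f"Write a first draft: {task}")
    critique = call_model(
        f"Critique this draft against the checklist.\nChecklist: {checklist}\nDraft: {draft}"
    )
    # Each stage is its own prompt, so a bad result can be inspected in isolation.
    return call_model(f"Revise the draft using the critique.\nDraft: {draft}\nCritique: {critique}")
```

Because each intermediate result is a plain string, you can log, cache, or replay any single stage while debugging.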
Evaluate Prompt Changes with a Small Test Set
Treat prompts like code: regression-test them.
- Build 10-30 representative inputs
- Define expected outputs or a scoring metric
- Track pass rate and failure modes
Use code-based checks where possible.
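The steps above can be sketched as a minimal harness (hypothetical; `run_prompt` stands in for your real prompt-plus-model call, stubbed here so the harness itself is testable):

```python
def run_prompt(input_text: str) -> str:
    """Stand-in for the real prompt + model call under test."""
    return "A" if "login" in input_text.lower() else "C"

def evaluate(cases: list[tuple[str, str]]) -> float:
    """Return the pass rate over (input, expected_output) cases; print failures."""
    passed = 0
    for input_text, expected in cases:
        actual = run_prompt(input_text)
        if actual == expected:
            passed += 1
        else:
            # Surfacing the failure mode matters as much as the score.
            print(f"FAIL: {input_text!r} -> {actual!r}, expected {expected!r}")
    return passed / len(cases)
```

Rerun the same cases after every prompt change; a drop in the pass rate is a regression, exactly as with code.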
Sources:
- OpenAI prompt engineering guide
- Anthropic prompt engineering overview
- Anthropic: XML tags
- Anthropic: Prompt chaining
- Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
- prompt-eng-interactive-tutorial
