
Prompt Engineering Patterns

Design effective prompts for large language models in production systems. Covers chain-of-thought prompting, few-shot learning, system prompt design, structured output, prompt testing, and the patterns that make LLM interactions reliable and repeatable.

Prompt engineering is the difference between an LLM that sometimes works and one that reliably works. Production prompt engineering is not about clever tricks — it is about systematic design, testing, and iteration. The prompt is your API contract with the model: vague prompts produce vague results.


Core Prompting Patterns

Zero-Shot:
  "Classify this support ticket: [ticket text]"
  No examples provided; relies on the model's training
  Use when: Simple tasks, well-known domains
  
Few-Shot:
  "Here are examples of classifications:
   Ticket: 'My payment failed' → billing
   Ticket: 'Cannot login' → authentication
   Ticket: 'Feature request for dark mode' → feature_request
   
   Classify this ticket: [new ticket text]"
  Examples guide the model's behavior
  Use when: Domain-specific, nuanced classification
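
Few-shot prompts like the one above are usually assembled programmatically from a pool of labeled examples rather than hard-coded. A minimal sketch (the function name and example tickets are illustrative, not from any library):

```python
# Labeled examples that anchor the model's classification behavior.
EXAMPLES = [
    ("My payment failed", "billing"),
    ("Cannot login", "authentication"),
    ("Feature request for dark mode", "feature_request"),
]

def build_few_shot_prompt(ticket: str, examples=EXAMPLES) -> str:
    """Assemble a few-shot classification prompt from labeled examples."""
    lines = ["Here are examples of classifications:"]
    for text, label in examples:
        lines.append(f"Ticket: '{text}' → {label}")
    lines.append("")  # blank line separates examples from the task
    lines.append(f"Classify this ticket: {ticket}")
    return "\n".join(lines)
```

Keeping the examples in a data structure makes it easy to swap or expand them without touching the prompt-assembly logic.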

Chain-of-Thought (CoT):
  "Analyze this code for security vulnerabilities.
   Think step by step:
   1. Identify input handling
   2. Check for injection points
   3. Evaluate authentication
   4. Assess data exposure
   Then provide your final assessment."
  Forces reasoning before conclusion
  Use when: Complex analysis, multi-step logic

Role Prompting:
  "You are a senior security engineer reviewing code.
   You are thorough, skeptical, and assume the code 
   will be attacked by sophisticated adversaries."
  Sets expertise level and perspective
  Use when: Specialized domain knowledge needed

System Prompt Design

class SystemPromptBuilder:
    """Template for production system prompts."""

    def format_rules(self, rules):
        return "\n".join(f"- {rule}" for rule in rules)

    def format_examples(self, examples):
        return "\n\n".join(
            f"Input: {ex['input']}\nOutput: {ex['output']}"
            for ex in examples
        )

    def build(self, config):
        return f"""
You are {config['role']}.

## Task
{config['task_description']}

## Rules
{self.format_rules(config['rules'])}

## Output Format
{config['output_format']}

## Examples
{self.format_examples(config['examples'])}

## Constraints
- If you are unsure, say "I don't know" rather than guessing
- Never fabricate information not in the provided context
- Always cite the source document for factual claims
- If the input is unclear, ask for clarification
"""

# Example:
prompt = SystemPromptBuilder().build({
    "role": "a customer support agent for Acme Corp",
    "task_description": "Answer customer questions about our products using the provided knowledge base. Be helpful, concise, and accurate.",
    "rules": [
        "Only answer questions about Acme products",
        "Never discuss competitor products",
        "For billing issues, provide the support link",
        "For urgent safety issues, escalate to human agent",
    ],
    "output_format": "Respond in 2-3 sentences. Include relevant documentation links.",
    "examples": [
        {"input": "What's your return policy?", "output": "We offer..."},
    ],
})
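
Keeping prompts only in code means every wording change requires a deployment. One common alternative is to store templates on disk (or in a database) keyed by name and version. A minimal sketch, assuming a simple JSON-file layout; the class and file structure are hypothetical, not a real library:

```python
import json
from pathlib import Path

class PromptStore:
    """Load and save versioned prompt templates (file layout is hypothetical)."""

    def __init__(self, root: str):
        self.root = Path(root)

    def save(self, name: str, version: str, template: str) -> None:
        path = self.root / name
        path.mkdir(parents=True, exist_ok=True)
        (path / f"{version}.json").write_text(json.dumps({"template": template}))

    def load(self, name: str, version: str) -> str:
        data = json.loads((self.root / name / f"{version}.json").read_text())
        return data["template"]
```

With this layout, a prompt change is a file change that can go through code review and be rolled back independently of the application binary.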

Structured Output

from typing import Literal

from pydantic import BaseModel
from openai import OpenAI

# Force the LLM to output valid JSON matching a schema

class SentimentAnalysis(BaseModel):
    sentiment: Literal["positive", "negative", "neutral"]
    confidence: float  # 0.0 to 1.0
    key_phrases: list[str]
    reasoning: str

client = OpenAI()

response = client.beta.chat.completions.parse(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "Analyze the sentiment of customer reviews."},
        {"role": "user", "content": "The product arrived broken and support was unhelpful."},
    ],
    response_format=SentimentAnalysis,
)

result = response.choices[0].message.parsed
# SentimentAnalysis(
#   sentiment="negative",
#   confidence=0.95,
#   key_phrases=["arrived broken", "support was unhelpful"],
#   reasoning="Customer experienced product defect and poor support..."
# )
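
A schema guarantees the shape of the output, not its quality, so it still pays to gate on the model's self-reported confidence and label before acting. A minimal sketch; the function name and 0.8 threshold are illustrative:

```python
def accept_analysis(sentiment: str, confidence: float,
                    threshold: float = 0.8) -> bool:
    """Gate model output: only act on known labels with high confidence."""
    valid_labels = {"positive", "negative", "neutral"}
    return sentiment in valid_labels and confidence >= threshold

# Anything rejected here falls back to human review rather than
# being treated as fact.
```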

Anti-Patterns

Anti-Pattern          | Consequence                             | Fix
----------------------|-----------------------------------------|-------------------------------------------
Vague prompts         | Inconsistent, unpredictable output      | Specific instructions with examples
No output format spec | Model returns free text, hard to parse  | Structured output (JSON schema)
No error handling     | Model hallucinations treated as facts   | Validate output; confidence thresholds
Prompt in code only   | Cannot iterate without a deployment     | Prompt management system, version control
No prompt testing     | Regressions from prompt changes         | Evaluation suite with ~50 test cases
Temperature too high  | Non-deterministic production results    | Temperature 0 for deterministic tasks
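
An evaluation suite like the one the table calls for can be very small to start. A minimal sketch, where `classify` stands in for a real model call (here stubbed so the harness itself is testable); the names and baseline value are illustrative:

```python
# Labeled regression cases; in practice this grows to ~50+ entries.
TEST_CASES = [
    {"input": "My payment failed", "expected": "billing"},
    {"input": "Cannot login", "expected": "authentication"},
]

def evaluate(classify, cases=TEST_CASES) -> float:
    """Return accuracy of `classify` over the labeled cases."""
    hits = sum(1 for c in cases if classify(c["input"]) == c["expected"])
    return hits / len(cases)

def check_regression(classify, baseline: float = 0.9) -> bool:
    """Fail the build if accuracy drops below the baseline."""
    return evaluate(classify) >= baseline
```

Run this in CI on every prompt change, so a wording tweak that silently breaks classifications is caught before deployment.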

Prompt engineering for production systems is software engineering: version-controlled, tested, monitored, and iterated based on real-world performance data.

Jakub Dimitri Rezayev
Founder & Chief Architect • Garnet Grid Consulting

Jakub holds an M.S. in Customer Intelligence & Analytics and a B.S. in Finance & Computer Science from Pace University. With deep expertise spanning D365 F&O, Azure, Power BI, and AI/ML systems, he architects enterprise solutions that bridge legacy systems and modern technology — and has led multi-million dollar ERP implementations for Fortune 500 supply chains.