Practical patterns and techniques for directing Maestro effectively. This guide assumes you understand Maestro’s basic capabilities. For the philosophy behind these patterns, see Explanation: Prompting Philosophy. For command reference, see Commands Reference.

Structure Your Requests

Use the Four-Component Pattern

Structure complex requests with these components:

Goal: The specific outcome you need
"Create a complete Python port of this JavaScript library"

Context: Environment and reference material
"Clone the library from https://github.com/example/lib to /tmp/library.
The goal is an implementation that passes all tests in /tmp/library/tests."

Constraints: What to avoid
"Use only Python standard library - no external dependencies.
Do not change the public API."

Verification: How to prove it works
"Success criteria: All tests in tests/ pass with pytest,
code coverage >90%, type hints on all public functions."

Start with Outcomes, Not Steps

Instead of listing steps:
1. Create auth.py
2. Import bcrypt
3. Write hash function
4. Write verify function
5. Add tests
Focus on the goal:
"Implement password hashing for our authentication system.

Requirements:
- Use bcrypt with cost factor 12
- Hash and verify functions with type hints
- Handle invalid inputs gracefully
- Comprehensive test coverage including edge cases

Success: All tests pass, handles all edge cases, clear documentation."
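
For reference, a minimal sketch of the kind of code such an outcome-focused request might produce (function names are illustrative; assumes the third-party bcrypt package is installed):

import bcrypt


def hash_password(password: str) -> bytes:
    """Hash a password with bcrypt at cost factor 12."""
    if not isinstance(password, str) or not password:
        raise ValueError("password must be a non-empty string")
    return bcrypt.hashpw(password.encode("utf-8"), bcrypt.gensalt(rounds=12))


def verify_password(password: str, hashed: bytes) -> bool:
    """Check a candidate password against a stored bcrypt hash."""
    if not password or not hashed:
        return False
    return bcrypt.checkpw(password.encode("utf-8"), hashed)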

Make Constraints Explicit

Specify What NOT to Do

Weak (implicit constraints):
"Add caching to the API"
Strong (explicit constraints):
"Add Redis caching to GET /users/:id endpoints.

Constraints:
- Must use existing Redis connection pool (no new connections)
- 5-minute TTL only
- Invalidate cache on user updates (PUT /users/:id)
- Cache misses must not block requests (<10ms timeout)
- Never cache sensitive fields (password, email)

Validation:
- Integration tests verify invalidation works
- Load test shows <10ms p99 latency for cache hits
- All existing tests still pass"
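
A rough sketch of the shape those constraints imply (assumes the redis-py client; load_user, the key scheme, and the pool setup are placeholders for the existing application code):

import json

import redis

# Stands in for the application's existing connection pool; ~10 ms socket timeout
# keeps cache problems from blocking requests.
pool = redis.ConnectionPool(host="localhost", port=6379, socket_timeout=0.01)
cache = redis.Redis(connection_pool=pool)

CACHE_TTL_SECONDS = 300  # 5-minute TTL
SENSITIVE_FIELDS = {"password", "email"}


def get_public_user(user_id: str, load_user) -> dict:
    """Return the public view of a user, serving from Redis when possible."""
    key = f"user:{user_id}"
    try:
        cached = cache.get(key)
        if cached is not None:
            return json.loads(cached)
    except redis.RedisError:
        pass  # fall through to the database on any cache problem

    user = load_user(user_id)
    public = {k: v for k, v in user.items() if k not in SENSITIVE_FIELDS}
    try:
        cache.setex(key, CACHE_TTL_SECONDS, json.dumps(public))
    except redis.RedisError:
        pass
    return public


def invalidate_user(user_id: str) -> None:
    """Call from the PUT /users/:id handler so updates are never served stale."""
    cache.delete(f"user:{user_id}")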

State Technology Requirements Upfront

"Implement OAuth2 authentication flow.

Technology requirements:
- Python 3.11+ with FastAPI
- PostgreSQL for session storage (not Redis)
- Standard library cryptography (no external auth libs)
- Type hints required on all functions

Do not use: Flask, JWT libraries, SQLite"

Demand Evidence

Never Accept Claims Without Proof

Maestro claims: “The optimization improved performance”
You demand:
"Show me benchmark results comparing before and after implementations.
Use identical test conditions with the same dataset.
Run each 100 times and show mean, p95, and p99 latencies."
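
As a sketch, the kind of harness that demand implies might look like this (old_implementation, new_implementation, and dataset are placeholders):

import statistics
import time


def benchmark(fn, dataset, runs: int = 100) -> dict:
    """Time fn(dataset) repeatedly and report mean/p95/p99 latencies in milliseconds."""
    latencies = []
    for _ in range(runs):
        start = time.perf_counter()
        fn(dataset)
        latencies.append((time.perf_counter() - start) * 1000)
    percentiles = statistics.quantiles(latencies, n=100)
    return {
        "mean_ms": statistics.mean(latencies),
        "p95_ms": percentiles[94],
        "p99_ms": percentiles[98],
    }

# Identical conditions: same dataset, same run count, for both versions.
# print(benchmark(old_implementation, dataset))
# print(benchmark(new_implementation, dataset))
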
Maestro claims: “All tests pass”
You demand:
"Display the complete test output showing:
- Number of tests run
- Pass/fail count
- Code coverage report
- Any warnings or skipped tests
- Time taken"
Maestro claims: “The implementation is production-ready”
You demand:
"Prove production readiness:
1. Show all tests passing with coverage >85%
2. Show error handling for network failures, timeouts, invalid inputs
3. Demonstrate graceful degradation when dependencies unavailable
4. Show performance benchmarks under expected load
5. Verify logging at appropriate levels"

Use Reference Patterns

Point to Existing Code

When you have working examples:
"The new email notification system should work like the SMS notification system
in notifications/sms.py - same error handling pattern, same retry logic, 
same monitoring hooks. Adapt that approach for SendGrid email API."
This builds a compound advantage: every working component becomes a reference for the next.

Define Function Signatures

Maintain design control by specifying interfaces:
"Write a Python function with this exact signature:

async def download_file(
    url: str,
    destination: Path,
    max_size_bytes: int = 5 * 1024 * 1024,
    timeout_seconds: int = 30
) -> Path

Requirements:
- Use httpx for async HTTP
- Enforce max_size_bytes strictly (raise ValueError if exceeded)
- Return path to downloaded file
- Handle connection errors, timeouts, and HTTP errors appropriately
- Log download progress for files >1MB"
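
A sketch of an implementation meeting that contract, assuming httpx; the logging and cleanup details are illustrative, and connection or timeout errors from httpx are left to propagate to the caller:

import logging
from pathlib import Path

import httpx

logger = logging.getLogger(__name__)


async def download_file(
    url: str,
    destination: Path,
    max_size_bytes: int = 5 * 1024 * 1024,
    timeout_seconds: int = 30,
) -> Path:
    """Stream url to destination, enforcing the size limit while downloading."""
    try:
        async with httpx.AsyncClient(timeout=timeout_seconds) as client:
            async with client.stream("GET", url) as response:
                response.raise_for_status()  # HTTP errors surface as httpx.HTTPStatusError
                written = 0
                with destination.open("wb") as out:
                    async for chunk in response.aiter_bytes():
                        written += len(chunk)
                        if written > max_size_bytes:
                            raise ValueError(f"{url} exceeds {max_size_bytes} bytes")
                        out.write(chunk)
                        if written > 1024 * 1024:
                            logger.debug("downloaded %d bytes from %s", written, url)
    except ValueError:
        destination.unlink(missing_ok=True)  # discard the partial download
        raise
    return destination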

Handle Complexity

Break Large Tasks into Phases

For substantial projects:
"Build a microservices communication layer for our Node.js application.

Phase 1: Research and Design
Deliverables:
- Analysis of message queue vs HTTP vs gRPC approaches
- Recommendation with trade-off analysis
- High-level architecture diagram showing service interactions
- Technology choices with justification

Do not proceed to Phase 2 until I approve the approach."
After approval:
"Phase 2: Implement Core Infrastructure

Based on our approved design (RabbitMQ with circuit breaker pattern):
- Service discovery using Consul
- Message broker integration with RabbitMQ
- Circuit breaker implementation
- Health check system

Validation: Unit tests for each component, integration test showing services can discover and message each other."
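
The example above targets Node.js, but the circuit breaker deliverable is language-agnostic. A bare-bones sketch of the pattern in Python (thresholds, names, and reset behavior are illustrative, not the approved design):

import time


class CircuitBreaker:
    """Stop calling a failing dependency until a cool-down period passes."""

    def __init__(self, max_failures: int = 5, reset_seconds: float = 30.0):
        self.max_failures = max_failures
        self.reset_seconds = reset_seconds
        self.failures = 0
        self.opened_at = 0.0

    def call(self, fn, *args, **kwargs):
        if self.failures >= self.max_failures:
            if time.monotonic() - self.opened_at < self.reset_seconds:
                raise RuntimeError("circuit open: dependency unavailable")
            self.failures = 0  # half-open: allow one trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            self.opened_at = time.monotonic()
            raise
        self.failures = 0
        return result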

Use Discovery Pattern for Existing Code

Before modifying unfamiliar code:
"I want to add feature X to our codebase.

Before implementing:
1. Clone the repository from {repo_url}
2. Explain the current architecture (create a diagram if helpful)
3. Identify where X should integrate (specific files and functions)
4. Propose 2-3 implementation approaches with pros/cons
5. Recommend one approach with detailed justification

Only after I approve the approach, proceed with implementation."

Debug Systematically

Describe Symptoms, Not Diagnoses

Don’t diagnose:
"The database connection pool is misconfigured"
Describe symptoms:
"API requests fail after 10 minutes of idle time with 'connection refused' errors.

Expected: Connections maintained or reconnected automatically
Actual: All requests fail until service restart

Error logs show: [paste actual error]

Fix this so connections work reliably after idle periods."

Use Systematic Investigation

For complex bugs:
"Users report intermittent 500 errors during checkout.

Investigate systematically:
1. Examine logs for the incident time window (2PM-3PM UTC yesterday)
2. Identify error patterns and frequency
3. Check external service health (payment gateway, inventory service)
4. Review recent deployments (last 48 hours)
5. Analyze traffic patterns during failures

Present findings with evidence, then propose fix with validation plan."

Ensure Quality

Specify Test Requirements Upfront

Test-first approach:
"Implement user registration endpoint.

Testing requirements (write tests first):
- Happy path: valid email, strong password
- Invalid email formats (missing @, invalid domain, etc.)
- Weak passwords (too short, no special chars, etc.)
- Duplicate email registration
- Database connection failures
- Email service failures
- Concurrent registration attempts

Implement endpoint to make all tests pass.
Show me test output proving all scenarios covered."
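
A test-first skeleton for that request might begin like this (the /register route, the client fixture, and the status codes are assumptions about the endpoint, not givens):

import pytest


@pytest.mark.parametrize("email", ["no-at-sign.com", "user@", "user@invalid"])
def test_rejects_invalid_email(client, email):
    response = client.post("/register", json={"email": email, "password": "S3cure!pass"})
    assert response.status_code == 422


@pytest.mark.parametrize("password", ["short", "nospecialchars1", "lowercase only"])
def test_rejects_weak_password(client, password):
    response = client.post("/register", json={"email": "user@example.com", "password": password})
    assert response.status_code == 422


def test_rejects_duplicate_email(client):
    payload = {"email": "dup@example.com", "password": "S3cure!pass"}
    assert client.post("/register", json=payload).status_code == 201
    assert client.post("/register", json=payload).status_code == 409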

Protect Against Regressions

When modifying existing code:
"Add retry logic to the API client.

Critical requirements:
- All existing tests must pass unchanged
- Add new tests only for retry-specific behavior
- No changes to public API surface
- Backward compatible with existing callers

If any existing test fails, stop and explain why before modifying tests."
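
One way to satisfy those constraints is to add retries behind the existing interface, for example as an internal decorator. A hedged sketch (exception types, attempt counts, and the _send_request helper are placeholders):

import functools
import time


def with_retries(max_attempts: int = 3, base_delay: float = 0.5,
                 retry_on=(ConnectionError, TimeoutError)):
    """Retry a call with exponential backoff, without changing its signature."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(1, max_attempts + 1):
                try:
                    return fn(*args, **kwargs)
                except retry_on:
                    if attempt == max_attempts:
                        raise
                    time.sleep(base_delay * 2 ** (attempt - 1))
        return wrapper
    return decorator


# Applied to a private request helper, so existing callers and tests see no change:
# _send_request = with_retries()(_send_request)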

Require Comprehensive Validation

Before declaring completion:
"Before claiming this feature is complete:
1. Run the FULL test suite (not just related tests)
2. Show me complete test output with pass/fail counts
3. Run performance benchmarks comparing to baseline
4. Verify error scenarios are handled (show me the error handling tests)
5. Check code coverage (target >80% for new code)
6. Demonstrate the feature working end-to-end
7. Show that all documentation is updated

I need evidence for each item."

Recover from Problems

When Maestro Gets Stuck

If Maestro repeats similar errors or is stuck in circular debugging:
"Stop. We're not making progress on this approach.

Current issue: [describe what's failing]

Let's reconsider:
1. What have we tried and why did each fail?
2. What assumptions might be wrong?
3. What alternative approaches haven't we tried?
4. What would be a simpler approach?

Recommend a new direction with justification, then wait for my approval before proceeding."

When Requirements Aren’t Clear

Maestro can help clarify:
"I want to improve API performance but I'm not sure where to focus.

Please:
1. Profile the current system (run benchmarks with realistic load)
2. Identify the top 3 bottlenecks with evidence
3. Estimate impact of optimizing each (rough % improvement)
4. Recommend where to focus with reasoning

We'll decide together which optimization to pursue."

Common Workflows

Starting a Feature

"Implement \{feature_name\} for \{context\}.

Requirements:
- \{specific requirement 1\}
- \{specific requirement 2\}
- \{specific requirement 3\}

Constraints:
- \{what not to do\}
- \{architectural boundaries\}
- \{technology limitations\}

Success criteria:
- \{measurable outcome 1\}
- \{measurable outcome 2\}
- \{quality standard\}

Begin by proposing an implementation approach for my review."

Debugging an Issue

"\{Symptom description\} is happening.

Expected behavior: \{what should happen\}
Actual behavior: \{what's happening\}

Context:
- Relevant files: \{list files\}
- Error messages: \{paste errors\}
- Steps to reproduce: \{step by step\}

Fix this issue and add tests to prevent regression.
Show me the tests passing before and after the fix."

Optimizing Performance

"Optimize \{component\} performance.

Current performance: \{measurement with specific test case\}
Target performance: \{goal with same test case\}

Process:
1. Profile current implementation to identify bottlenecks
2. Propose optimizations with expected impact for each
3. Implement changes one at a time
4. Benchmark after each change using the SAME test case
5. Show before/after comparison with evidence

Do not proceed with implementation until I approve your optimization plan."
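
For step 1, a small profiling harness is often enough to surface bottlenecks. A sketch using the standard library (the workload callable is a placeholder):

import cProfile
import pstats


def profile_workload(workload, *args, **kwargs):
    """Run the workload under cProfile and print the ten heaviest call sites."""
    profiler = cProfile.Profile()
    profiler.enable()
    workload(*args, **kwargs)
    profiler.disable()
    pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)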

Reviewing Implementation

"Review this implementation for production readiness:

{file or description}

Check for:
- Edge cases and error handling (what could go wrong?)
- Test coverage (are all paths tested?)
- Performance implications (any obvious bottlenecks?)
- Security considerations (any vulnerabilities?)
- Code clarity (is it maintainable?)

Provide specific feedback with code examples where applicable."

Quick Reference Checklist

Before submitting a request, verify:
  • Context is clear: Did I reference relevant files, errors, or documentation?
  • Constraints are explicit: Did I define what NOT to do?
  • Success is defined: Did I specify how to verify completion?
  • Scope is focused: Is this one clear task or multiple bundled together?
  • Quality standards are stated: Did I specify testing, performance, or documentation requirements?

Next Steps

Master effective prompting, then explore: