Advanced patterns for iterative development and quality control with Maestro. This guide assumes familiarity with basic request patterns. See How to Write Effective Requests for foundational techniques.

Iterate Effectively

Challenge and Refine

Maestro’s strength emerges through iteration:
Initial implementation → Your challenge → Refinement → Validation → Repeat if needed
Example cycle:
  1. Maestro implements caching layer
  2. You challenge:
"This implementation doesn't handle Redis connection failures.
What happens when Redis is unavailable? The application shouldn't crash."
  3. Maestro refines, adding a circuit breaker pattern (see the sketch below)
  4. You validate:
"Show me tests that prove graceful degradation when Redis is down.
Simulate connection failures and verify the application continues working."
  5. Maestro demonstrates with comprehensive test output
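To make the target of that validation step concrete, here is a minimal Python sketch of what graceful degradation with a simple circuit breaker might look like. It assumes the redis-py client; `CachedStore` and `load_from_db` are illustrative names, not part of any real codebase.

```python
import time
from typing import Callable

import redis  # assumes the redis-py client


class CachedStore:
    """Illustrative cache wrapper: fall back to the source of truth when Redis is down."""

    def __init__(self, redis_client: redis.Redis,
                 failure_threshold: int = 3, cooldown_s: float = 30.0):
        self._redis = redis_client
        self._failures = 0
        self._failure_threshold = failure_threshold
        self._cooldown_s = cooldown_s
        self._opened_at = 0.0  # timestamp when the circuit "opened"

    def _circuit_open(self) -> bool:
        # Circuit is open (skip Redis entirely) until the cooldown elapses.
        return (self._failures >= self._failure_threshold
                and time.monotonic() - self._opened_at < self._cooldown_s)

    def get(self, key: str, load_from_db: Callable[[str], str]) -> str:
        if not self._circuit_open():
            try:
                cached = self._redis.get(key)
                self._failures = 0  # Redis answered: close the circuit
                if cached is not None:
                    return cached.decode()
            except redis.exceptions.RedisError:
                self._failures += 1
                if self._failures >= self._failure_threshold:
                    self._opened_at = time.monotonic()
        # Cache miss or Redis unavailable: serve from the primary store, never crash.
        value = load_from_db(key)
        try:
            if not self._circuit_open():
                self._redis.set(key, value, ex=300)
        except redis.exceptions.RedisError:
            pass  # best-effort write-back; degraded mode is acceptable here
        return value
```

A degradation test can then stop (or mock) Redis and assert that `get` still returns data from `load_from_db` without raising.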

Push Back on Quality Issues

When to push back:
  • Unvalidated claims: “It should work” without evidence
  • Incomplete testing: Only happy path tested
  • Missing edge cases: Error scenarios not handled
  • Performance assertions without data: “It’s fast” without benchmarks
  • Shortcuts compromising quality: Skipped tests, hardcoded values
  • Unclear or confusing code: Hard to understand or maintain

Effective Push-Back Technique

Ineffective:
"This isn't good enough"
Effective:
"The error handling is incomplete. What happens when:
- Redis connection times out?
- Redis returns unexpected response format?
- Network is unavailable?

Add tests simulating each failure scenario.
Prove the system handles them gracefully without crashing or corrupting data."
Ineffective:
"Did you test this?"
Effective:
"Run the full test suite and show me:
- Complete test output (not just summary)
- Pass/fail count with test names
- Code coverage report for new code
- Any warnings or skipped tests

Then add integration tests verifying cache invalidation works across multiple requests."

Break Down Complex Projects

Phase-Based Development

For substantial projects, work in phases.
Example: Microservices Architecture
Phase 1: Research and Design (30-60 minutes)
"Research microservices communication patterns for Node.js applications.

Deliverables:
- Analysis of 3 viable approaches (message queue, HTTP, gRPC)
- Recommendation with detailed trade-off analysis
- High-level architecture diagram
- Key technology choices justified

Wait for my approval before Phase 2."
Phase 2: Core Infrastructure (2-4 hours)
"Implement the service communication layer using our approved approach (RabbitMQ).

Scope:
- Service discovery mechanism
- Inter-service messaging with RabbitMQ
- Error handling and retry logic
- Health check system

Validation: All components tested in isolation, integration test showing services can communicate."
Phase 3: Service Implementation (3-6 hours)
"Implement user service following the established pattern.

Requirements:
- Follow Phase 1 architecture
- Use Phase 2 communication layer
- Comprehensive tests (unit + integration)
- API documentation

Validation: Full test suite passes, service integrates cleanly with communication layer."

Parallel Sessions Strategy

For independent features:
Session A: User Authentication System
- Focus: Auth logic, JWT, permissions
- Can develop independently

Session B: Database Layer
- Focus: Schema, migrations, queries
- Separate concerns

Session C: Integration
- Clone deliverables from A and B
- Connect them together
- Integration testing
Benefits: Better capacity management, clearer focus per session, easier to resume specific work streams, reduced context switching.

Work with Existing Codebases

Understand Before Modifying

Discovery workflow:
"I want to add feature X to our codebase.

Before implementing:
1. Clone the repository from {repo_url}
2. Analyze the current architecture
3. Explain how feature X should integrate (specific files and functions)
4. Identify potential challenges or conflicts with existing code
5. Propose 2-3 implementation approaches
6. Recommend one with detailed justification

Only proceed with implementation after I approve the approach."

Incremental Integration Pattern

For large changes:
"Refactor the database layer to use connection pooling.

Approach:
1. Analyze current implementation (show me the relevant code)
2. Write tests capturing existing behavior (these are regression tests)
3. Implement connection pooling while preserving all behavior
4. Verify all original tests still pass (must pass without modification)
5. Add new tests for pooling-specific logic
6. Benchmark before/after performance with realistic load

Gate: No merge until all original functionality verified and performance improved."
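For orientation, the "implement pooling while preserving behavior" step might end up shaped like the Python sketch below. It assumes psycopg2 as the existing driver; the DSN, pool sizes, and `fetch_user` query are placeholders rather than the project's real database layer.

```python
from contextlib import contextmanager

from psycopg2 import pool  # assumes psycopg2 is already the project's driver

# One pool per process; min/max sizes are illustrative and should be benchmarked.
_pool = pool.ThreadedConnectionPool(2, 10, dsn="postgresql://localhost/appdb")


@contextmanager
def get_connection():
    """Borrow a connection from the pool and always return it, even on error."""
    conn = _pool.getconn()
    try:
        yield conn
    finally:
        _pool.putconn(conn)


def fetch_user(user_id: int):
    # Same query behavior as before the refactor; only connection management changes.
    with get_connection() as conn:
        with conn.cursor() as cur:
            cur.execute("SELECT id, email FROM users WHERE id = %s", (user_id,))
            return cur.fetchone()
```

The regression tests from step 2 should pass against this version unchanged, since only connection acquisition moves; the queries themselves stay the same.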

Preserve Existing Tests

Critical rule: existing tests protect against regressions.
"Add retry logic to the API client.

Requirements:
- All existing tests must pass unchanged
- Add new tests only for retry-specific behavior
- Verify backward compatibility (no breaking changes to callers)
- No changes to public API surface

If any existing test fails, stop immediately and explain why before modifying the test."
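One way to satisfy "no changes to the public API surface" is to keep retries inside a private helper, as in this hedged Python sketch; `APIClient`, `_send`, and `TransientAPIError` are hypothetical stand-ins for whatever the real client actually uses.

```python
import random
import time


class TransientAPIError(Exception):
    """Stand-in for the errors worth retrying (timeouts, 5xx responses, ...)."""


class APIClient:
    def __init__(self, max_retries: int = 3, base_delay_s: float = 0.5):
        self._max_retries = max_retries
        self._base_delay_s = base_delay_s

    def get(self, path: str) -> dict:
        # Public signature unchanged; retries happen inside the private helper.
        return self._send_with_retry("GET", path)

    def _send_with_retry(self, method: str, path: str) -> dict:
        for attempt in range(self._max_retries + 1):
            try:
                return self._send(method, path)
            except TransientAPIError:
                if attempt == self._max_retries:
                    raise  # out of retries: surface the original error to callers
                # Exponential backoff with jitter to avoid thundering herds.
                time.sleep(self._base_delay_s * (2 ** attempt) + random.uniform(0, 0.1))

    def _send(self, method: str, path: str) -> dict:
        raise NotImplementedError("real transport lives here")
```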

Validate Comprehensively

Full Validation Checklist

Before accepting “complete”:
"Before claiming this feature is complete:
1. Run the FULL test suite (not just files you changed)
2. Show me complete test output (pass/fail counts, test names)
3. Run performance benchmarks (compare to baseline)
4. Check for regressions (any degradation?)
5. Verify edge cases handled (show me the edge case tests)
6. Confirm error scenarios tested (show error handling)
7. Review code coverage (new code should be >80% covered)

Provide evidence for each item."

Test-First Development

Ensure testing isn’t an afterthought:
"Implement user registration endpoint.

Workflow:
1. Write comprehensive tests FIRST (TDD style)
2. Tests must cover:
   - Happy path (valid email and password)
   - Invalid inputs (malformed email, weak password)
   - Duplicate registrations
   - Database failures
   - Email service failures
3. Show me all tests (they should fail initially)
4. Implement endpoint to make tests pass
5. Show me final test output with all tests passing"
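A tests-first starting point for that workflow could look like the pytest sketch below; `register_user`, `ValidationError`, `DuplicateUserError`, and the `myapp.registration` module are hypothetical names, and under TDD these tests are expected to fail until the endpoint exists.

```python
import pytest

# Hypothetical import: this module does not exist until the endpoint is implemented,
# which is exactly why the tests fail first under TDD.
from myapp.registration import DuplicateUserError, ValidationError, register_user


def test_valid_registration_creates_user():
    user = register_user(email="alice@example.com", password="S3cure!passphrase")
    assert user.email == "alice@example.com"


@pytest.mark.parametrize("email,password", [
    ("not-an-email", "S3cure!passphrase"),  # malformed email
    ("alice@example.com", "123"),           # weak password
])
def test_invalid_inputs_are_rejected(email, password):
    with pytest.raises(ValidationError):
        register_user(email=email, password=password)


def test_duplicate_registration_is_rejected():
    register_user(email="bob@example.com", password="S3cure!passphrase")
    with pytest.raises(DuplicateUserError):
        register_user(email="bob@example.com", password="S3cure!passphrase")
```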

Benchmark-Driven Optimization

For performance work:
"Optimize the image processing pipeline.

Current baseline: 2 seconds per image (measured with test_images/sample.jpg)
Target: <500ms per image

Process:
1. Profile current implementation (show me profiling output)
2. Identify bottlenecks with evidence (not guesses)
3. Propose optimizations with expected impact
4. Implement changes ONE AT A TIME
5. Benchmark after each change using the SAME test image
6. Show before/after comparison for each optimization

Acceptance: Must show actual benchmark results proving improvement."
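The profile-then-benchmark loop can be driven with nothing more than the standard library, as in the sketch below; `process_image` is a placeholder for the real pipeline, and the key habit is timing every change against the same sample input.

```python
import cProfile
import pstats
import time


def process_image(path: str) -> None:
    ...  # placeholder for the real pipeline


SAMPLE = "test_images/sample.jpg"  # always benchmark against the same input


def profile_once() -> None:
    # Step 1: find out where the time actually goes before changing anything.
    profiler = cProfile.Profile()
    profiler.enable()
    process_image(SAMPLE)
    profiler.disable()
    pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)


def benchmark(runs: int = 20) -> float:
    # Steps 4-6: re-run this after each individual optimization and record the numbers.
    start = time.perf_counter()
    for _ in range(runs):
        process_image(SAMPLE)
    return (time.perf_counter() - start) / runs


if __name__ == "__main__":
    profile_once()
    print(f"mean per image: {benchmark() * 1000:.1f} ms")
```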

Handle Challenges

When Maestro Gets Stuck

Symptoms:
  • Repeated similar errors
  • Circular debugging
  • No progress after multiple attempts
Your response:
"Stop. Let's step back and reconsider.

Current issue: {describe what's failing}

Analysis:
1. What have we tried? (List all attempts)
2. Why did each fail?
3. What assumptions might be wrong?
4. What alternative approaches haven't we considered?

Recommend a new direction with reasoning, then wait for my approval before proceeding."

When Tests Fail Unexpectedly

Don’t let Maestro:
  • Skip failing tests
  • Comment them out
  • Change tests to match wrong implementation
Do require:
"These test failures are real signals about problems.

Do NOT:
- Skip or comment out failing tests
- Modify tests to match current implementation
- Dismiss failures as 'edge cases'

DO:
- Understand WHY each test fails
- Fix the implementation (not the tests)
- Ensure all tests pass
- Add more tests if coverage is insufficient

Show me your analysis of why tests are failing before proposing fixes."

When Requirements Are Unclear

Let Maestro help clarify:
"I want to improve system performance but I'm not sure where to focus.

Please:
1. Profile the current system under realistic load
2. Identify the top 3 bottlenecks with evidence (profiling data, metrics)
3. Estimate impact of optimizing each (rough % improvement expected)
4. Recommend where to focus with detailed reasoning
5. Outline an optimization approach for your recommendation

We'll decide together which direction to pursue."

Systematic Debugging

Evidence-Based Debug Pattern

When something doesn’t work:
"Authentication is failing but I'm not sure why.

Debug systematically:
1. Check server logs for authentication attempts (show me the relevant logs)
2. Verify request format matches API expectations (show request/response)
3. Test each component in isolation (token generation, verification, etc.)
4. Identify the exact failure point with evidence
5. Propose fix with explanation of root cause

Show me evidence at each step - don't guess."
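For step 3, "test each component in isolation" might look like the sketch below for a JWT-based flow. It assumes the PyJWT library and an HS256 shared secret, which may not match the real system; it exists only to show how to pin down whether token generation or verification is the failing piece.

```python
import datetime

import jwt     # assumes the PyJWT library
import pytest

SECRET = "test-secret"  # stand-in for the real signing key


def _make_token(expires_in_minutes: int) -> str:
    payload = {
        "sub": "user-123",
        "exp": datetime.datetime.now(datetime.timezone.utc)
               + datetime.timedelta(minutes=expires_in_minutes),
    }
    return jwt.encode(payload, SECRET, algorithm="HS256")


def test_token_round_trip():
    # Isolates generation + verification from the HTTP layer entirely.
    decoded = jwt.decode(_make_token(5), SECRET, algorithms=["HS256"])
    assert decoded["sub"] == "user-123"


def test_expired_token_is_rejected():
    # If these pass but live requests still fail, the bug is upstream:
    # request headers, clock skew, or a mismatched signing key.
    with pytest.raises(jwt.ExpiredSignatureError):
        jwt.decode(_make_token(-1), SECRET, algorithms=["HS256"])
```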

Hypothesis-Driven Investigation

For complex bugs:
"Users report intermittent connection failures during high load.

Scientific approach:
1. Form 3 hypotheses for what might cause this:
   - Hypothesis A: {description}
   - Hypothesis B: {description}
   - Hypothesis C: {description}
2. Design experiments to test each hypothesis
3. Run experiments with controlled conditions
4. Analyze results with evidence
5. Implement fix for confirmed root cause
6. Add regression tests

Document the entire investigation process."

Build Long-Term Value

Document as You Go

"As you implement this feature:
- Add inline comments for non-obvious logic (why, not what)
- Update README with new configuration requirements
- Document API changes in API.md
- Add usage examples to examples/ directory
- Update architecture diagrams if structure changed

Documentation quality is part of the definition of 'done'."

Create Maintainable Code

Specify quality standards:
"Implementation quality requirements for this feature:
- Clear, descriptive names (no abbreviations unless standard)
- Single Responsibility Principle (each function does one thing)
- DRY (Don't Repeat Yourself) - extract common patterns
- Comprehensive error handling with specific error messages
- Logging at appropriate levels (debug, info, warning, error)
- Type hints on all function signatures (Python) or TypeScript types

Code will be reviewed against these standards before acceptance."
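Applied together, those standards tend to produce code shaped like the short illustrative sketch below (type hints, a specific exception type, leveled logging); none of the names come from a particular project.

```python
import logging

logger = logging.getLogger(__name__)


class PaymentDeclinedError(Exception):
    """Specific, actionable error instead of a bare Exception."""


def charge_customer(customer_id: str, amount_cents: int) -> str:
    """Charge a customer once and return the transaction id.

    Single responsibility: this function only performs the charge;
    retries and notifications live elsewhere.
    """
    if amount_cents <= 0:
        raise ValueError(f"amount_cents must be positive, got {amount_cents}")

    logger.debug("charging customer %s for %d cents", customer_id, amount_cents)
    transaction_id = _submit_charge(customer_id, amount_cents)  # hypothetical gateway call
    if transaction_id is None:
        logger.warning("charge declined for customer %s", customer_id)
        raise PaymentDeclinedError(f"charge declined for customer {customer_id}")

    logger.info("charge %s succeeded for customer %s", transaction_id, customer_id)
    return transaction_id


def _submit_charge(customer_id: str, amount_cents: int) -> str | None:
    raise NotImplementedError("stand-in for the real payment gateway client")
```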

Knowledge Transfer

After major implementations:
"After implementing the OAuth flow, create a knowledge document:

Contents:
- How the OAuth flow works (sequence diagram if helpful)
- Key decision points and why we chose our approach
- Security considerations and why they're important
- Common pitfalls and how to avoid them
- Debugging tips for common issues
- Links to relevant documentation

This helps future developers (including me) maintain and extend this code."

Measure Success

Quality Indicators

High-quality outcomes show:
  • All tests passing (not just “should pass”)
  • Benchmarks meeting targets (with evidence)
  • Edge cases explicitly handled
  • Error scenarios tested
  • Code follows project conventions
  • Documentation accurate and complete
Red flags:
  • Skipped or commented-out tests
  • Performance claims without measurements
  • Missing error handling
  • Incomplete documentation
  • Shortcuts “to save time”
  • Assertions without proof

Post-Implementation Checklist

After significant work:
  • Run full test suite (all tests pass)
  • Review all file changes (understand what changed)
  • Verify documentation updated
  • Check performance if relevant (benchmarks run)
  • Validate against success criteria
  • Clean up WIP/debug code
  • Ready for PR or delivery

Next Steps

Master these collaboration patterns, then explore: