Proven approaches for organizing work across different project types and goals. This guide provides concrete patterns for structuring your Maestro sessions. For the principles behind these strategies, see Explanation: Session Management.

Greenfield Projects

Starting from Scratch

When building new projects with no existing codebase:
Phase 1: Research and Design (30-60 minutes)
"Research best practices for {project type}. Recommend a tech stack with justification based on:
- Our team's expertise (primarily Python)
- Expected scale ({description})
- Performance requirements ({requirements})
- Deployment constraints ({constraints})"
Phase 2: Architecture (30-60 minutes)
"Create technical specification covering:
- System architecture with diagram
- Data models and relationships
- API design (endpoints, request/response formats)
- Technology choices with rationale
- Testing strategy
- Deployment approach

Wait for my approval before implementation."
Phase 3: Implementation (2-8 hours)
"Implement the approved specification with:
- Complete implementation of all components
- Comprehensive test coverage (>80%)
- Documentation for all public APIs
- Example usage
- Deployment instructions"
Phase 4: Validation (1-2 hours)
"Validate the implementation:
1. Run full test suite - show results
2. Benchmark performance against requirements
3. Verify all requirements met
4. Check documentation is complete and accurate
5. Test deployment process"

Example: REST API Project

Session goal: Build rate-limited REST API for user management

Turn 1: Research
"Research Python frameworks for REST APIs handling 10k requests/sec. 
Recommend FastAPI vs Flask vs Django based on our team's Python experience."

Turn 2: Specification
"Create detailed spec for user management API with:
- JWT authentication
- Rate limiting (100 requests/minute per user)
- PostgreSQL storage
- CRUD operations for users
Include data models, endpoint specs, testing requirements."
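
The rate-limit item is usually the trickiest part of this spec, so it is worth pinning down before implementation. A minimal sliding-window limiter might look like the sketch below; FastAPI, the in-memory counter, and the X-User-Id header are assumptions made for illustration, and a production version would normally keep the counters in Redis so limits hold across processes and restarts.

```python
# Sketch of the rate-limit requirement (100 requests/minute per user), assuming
# FastAPI. The in-memory deque is illustrative only; a real service would usually
# keep counters in Redis so limits survive restarts and scale across workers.
import time
from collections import defaultdict, deque

from fastapi import Depends, FastAPI, HTTPException, Request

app = FastAPI()

WINDOW_SECONDS = 60
MAX_REQUESTS = 100
_request_log: dict[str, deque] = defaultdict(deque)  # user id -> request timestamps


def rate_limit(request: Request) -> str:
    # Hypothetical: assume an upstream JWT dependency already resolved the user id.
    user_id = request.headers.get("X-User-Id", "anonymous")
    now = time.monotonic()
    log = _request_log[user_id]
    while log and now - log[0] > WINDOW_SECONDS:  # drop timestamps outside the window
        log.popleft()
    if len(log) >= MAX_REQUESTS:
        raise HTTPException(status_code=429, detail="Rate limit exceeded")
    log.append(now)
    return user_id


@app.get("/users/me")
def read_current_user(user_id: str = Depends(rate_limit)):
    return {"user_id": user_id}
```

Spelling the behavior out at this level gives the validation turn something concrete to check: 100 requests in a minute succeed, the 101st returns 429.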

Turns 3-5: Implementation
"Implement the spec with all files, tests, and documentation."

Turn 6: Validation
"Run all tests, benchmark under load, verify rate limiting works. Show evidence."

Turn 7: Package
"Create deployment documentation and prepare for production."

Existing Codebases

Understand First, Modify Second

Discovery workflow:
"Clone the repository from {repo_url}

Before making changes, help me understand:
1. Overall architecture (create diagram if helpful)
2. How authentication currently works
3. Where business logic lives (key files and patterns)
4. Test structure and current coverage
5. Dependencies and external integrations

Provide a clear summary before we discuss changes."

Feature Addition to Existing Code

For well-tested codebases:
Turn 1: Clone and understand architecture

Turn 2: "Identify where feature X should integrate. Show me:
- Specific files and functions to modify
- Existing patterns to follow
- Potential conflicts or challenges"

Turn 3: "Design feature following established patterns. Show me the design before implementing."

Turn 4: "Implement with tests matching project style"

Turn 5: "Verify all existing tests still pass, show coverage for new code"

Turn 6: "Create PR with clear description"

For poorly-tested codebases, add a test-creation phase first:
Turn 1: Clone and understand

Turn 2: "Before implementing feature, create comprehensive tests for the code we'll modify.
This establishes baseline behavior and protects against regressions."

Turn 3: "Verify new tests pass with current implementation"

Turns 4-6: Proceed with feature implementation
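
The Turn 2 baseline tests are plain characterization tests: they record what the code does today, even where that behavior looks odd, so later changes cannot silently alter it. A sketch with hypothetical module and function names:

```python
# Characterization (baseline) test for legacy code, written before the feature work.
# billing.pricing and calculate_discount are hypothetical names; the expected values
# should be captured from running the current code, not taken from the spec.
import pytest

from billing.pricing import calculate_discount  # hypothetical legacy module


@pytest.mark.parametrize(
    "order_total, loyalty_years, expected",
    [
        (100.0, 0, 0.0),   # no discount for new customers (current behavior)
        (100.0, 3, 5.0),   # current tiering, captured as-is
        (0.0, 3, 0.0),     # edge case: empty order
    ],
)
def test_existing_discount_behavior(order_total, loyalty_years, expected):
    assert calculate_discount(order_total, loyalty_years) == pytest.approx(expected)
```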

Incremental Integration Pattern

For large changes to existing code:
"Refactor the database layer to use connection pooling.

Approach:
1. Analyze current implementation - show me the relevant code
2. Write tests capturing ALL existing behavior (regression protection)
3. Run tests to establish baseline - all must pass
4. Implement pooling while preserving behavior
5. Verify all original tests still pass unchanged
6. Add new tests for pooling-specific logic
7. Benchmark before/after with realistic load

Gate: No PR until all original tests pass and performance improves."
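
If the stack is SQLAlchemy over PostgreSQL (an assumption; the prompt does not say), the target state in step 4 might look roughly like the sketch below. The pool numbers are placeholders to be tuned against the step-7 benchmarks.

```python
# Pooled engine sketch for the refactor, assuming SQLAlchemy + psycopg2. The DSN and
# pool settings are placeholders; caller-visible behavior should be unchanged.
from sqlalchemy import create_engine, text

engine = create_engine(
    "postgresql+psycopg2://app:secret@db-host/appdb",  # hypothetical DSN
    pool_size=10,        # steady-state connections kept open
    max_overflow=20,     # extra connections allowed under burst load
    pool_timeout=30,     # seconds to wait for a free connection
    pool_pre_ping=True,  # validate connections before use, dropping stale ones
)


def fetch_user(user_id: int):
    # Checks a connection out of the pool instead of opening a new one per call.
    with engine.connect() as conn:
        row = conn.execute(
            text("SELECT id, email FROM users WHERE id = :id"), {"id": user_id}
        ).first()
        return dict(row._mapping) if row else None
```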

Research and Analysis Projects

Competitive Analysis

"We need caching for our user profile API that currently hits the database on every request.

Research caching solutions:
1. Identify top 3 options (Redis, Memcached, in-memory)
2. For each, analyze:
   - Performance characteristics
   - Operational complexity
   - Cost implications
   - Integration effort
3. Implement proof-of-concept for top 2
4. Benchmark with our actual traffic pattern (show test data)
5. Test failure scenarios
6. Recommend solution with evidence

Provide final recommendation with benchmark data supporting the decision."
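
For the proof-of-concept step, a cache-aside wrapper around the profile read is usually enough to produce benchmark numbers. A sketch assuming redis-py; the key format, TTL, and load_profile_from_db helper are placeholders:

```python
# Cache-aside sketch for the user profile read, assuming redis-py. TTL and key format
# are placeholders; the step-4 benchmark against real traffic should set final values.
import json

import redis

cache = redis.Redis(host="localhost", port=6379, decode_responses=True)
PROFILE_TTL_SECONDS = 300


def load_profile_from_db(user_id: int) -> dict:
    # Stand-in for the database read that currently runs on every request.
    return {"id": user_id, "name": "example"}


def get_user_profile(user_id: int) -> dict:
    key = f"user:profile:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)            # cache hit: the database is never touched
    profile = load_profile_from_db(user_id)  # cache miss: read through and store
    cache.setex(key, PROFILE_TTL_SECONDS, json.dumps(profile))
    return profile
```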

Technical Due Diligence

Before adopting a library or framework:
"Evaluate {library} for our use case:

Assessment:
- Review documentation quality and completeness
- Check maintenance (last update, open issues, response time)
- Analyze performance benchmarks (find existing or create)
- Test integration with our stack
- Identify potential issues or limitations
- Compare to top 2 alternatives with same criteria

Recommend: Adopt, investigate further, or avoid - with detailed reasoning."

Migration Projects

System Migration Strategy

For large-scale migrations, use a phased approach:
Phase 1: Assessment
"Analyze the current {old system} implementation:

Deliverables:
- Document all functionality and behavior
- Identify all dependencies (internal and external)
- Create comprehensive test coverage for existing behavior (if missing)
- Document edge cases and error handling
- Establish performance baseline"
Phase 2: Parallel Implementation
"Implement {new system} alongside the existing system:

Approach:
- Keep old code functional (don't remove yet)
- New implementation independent and isolated
- Comprehensive tests for new system
- Feature parity with old system
- Performance meets or exceeds baseline"
Phase 3: Integration
"Create feature flag system for gradual rollout:

Implementation:
- Toggle between old and new implementations
- Gradual rollout capability (percentage-based)
- Monitoring and metrics for both systems
- Quick rollback if issues detected"
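
One possible shape for the percentage-based toggle (the environment-variable flag source and the handler names are assumptions; most teams would read the percentage from a config service so rollback needs no deploy):

```python
# Sketch of a percentage-based rollout toggle. Hashing the user id keeps each user on
# a consistent side of the flag as the rollout percentage grows.
import hashlib
import os


def use_new_implementation(user_id: str) -> bool:
    rollout_pct = int(os.environ.get("NEW_SYSTEM_ROLLOUT_PCT", "0"))
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return bucket < rollout_pct


def old_system_handler(user_id: str) -> str:
    return f"old path for {user_id}"  # stand-in for the existing implementation


def new_system_handler(user_id: str) -> str:
    return f"new path for {user_id}"  # stand-in for the parallel implementation


def handle_request(user_id: str) -> str:
    # The old implementation stays the default until Phase 4 validation passes.
    if use_new_implementation(user_id):
        return new_system_handler(user_id)
    return old_system_handler(user_id)
```
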
Phase 4: Validation
"Validate new system thoroughly:

Validation:
- All original tests pass with new system
- Performance meets or exceeds old system (show benchmarks)
- No regressions in functionality
- Error handling equivalent or better
- Production-like load testing passes"
Phase 5: Cutover
"Remove old implementation:

Cleanup:
- Delete deprecated code
- Remove feature flags
- Update all documentation
- Remove old dependencies
- Clean up test fixtures"

Database Migration Example

Session 1: Preparation
- Clone repository
- Document current schema completely
- Identify all queries in codebase
- Create representative test dataset
Session 2: Schema Migration
- Convert schema to new database system
- Create migration scripts
- Test with sample data
- Verify data integrity
Session 3: Code Migration
- Update all queries for new system
- Handle system-specific syntax
- Migrate transactions and locking
- Update connection pooling
Session 4: Validation
- Run full test suite against new database
- Benchmark performance vs old system
- Verify data integrity with large dataset
- Test all edge cases
Session 5: Deployment Planning
- Create deployment runbook
- Document rollback procedure
- Create monitoring dashboards
- Package migration artifacts
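
One of the Session 4 integrity checks can be as simple as comparing per-table row counts between the two databases. A sketch assuming both are reachable through SQLAlchemy; connection strings and the table list are placeholders, and deeper checks (checksums over sorted rows, spot-checking records) follow the same pattern.

```python
# Row-count comparison between old and new databases. Table names come from a fixed
# list, so interpolating them into the query is safe here.
from sqlalchemy import create_engine, text

old_engine = create_engine("postgresql+psycopg2://app@old-host/appdb")  # hypothetical
new_engine = create_engine("postgresql+psycopg2://app@new-host/appdb")  # hypothetical
TABLES = ["users", "orders", "order_items"]  # placeholder table list


def compare_row_counts() -> list[str]:
    mismatches = []
    for table in TABLES:
        query = text(f"SELECT COUNT(*) FROM {table}")
        with old_engine.connect() as old_conn, new_engine.connect() as new_conn:
            old_count = old_conn.execute(query).scalar_one()
            new_count = new_conn.execute(query).scalar_one()
        if old_count != new_count:
            mismatches.append(f"{table}: old={old_count} new={new_count}")
    return mismatches
```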

Performance Optimization Projects

Systematic Optimization

Workflow:
Turn 1: Establish Baseline
"Profile current implementation under realistic load. 
Create performance baseline with specific metrics (latency, throughput, resource usage)."

Turn 2: Identify Bottlenecks
"Analyze profiling data. Identify top 3 bottlenecks with evidence.
Explain why each is a bottleneck (not assumptions)."

Turn 3: Propose Solutions
"For each bottleneck, propose optimization with:
- Expected impact (rough % improvement)
- Implementation complexity
- Risks or trade-offs
Recommend starting order."

Turn 4: Implement One by One
"Implement optimization #1 only.
Benchmark with same test case as baseline.
Show before/after comparison."

Turn 5: Validate and Continue
"If improvement confirmed, move to optimization #2.
If not, analyze why and adjust approach.
Continue until target performance reached."
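
For Turn 1, the baseline can be as small as recording latency percentiles against the same endpoint and load that later turns will reuse. A minimal sketch assuming an HTTP service and the requests library; the URL and sample count are placeholders, and a fuller baseline would also capture throughput and resource usage under concurrent load.

```python
# Latency-baseline sketch: sequential requests, percentile summary. Concurrency,
# throughput, and resource metrics would be layered on top for a realistic baseline.
import statistics
import time

import requests

URL = "http://localhost:8000/users/42"  # hypothetical endpoint
SAMPLES = 200

latencies_ms = []
for _ in range(SAMPLES):
    start = time.perf_counter()
    requests.get(URL, timeout=5)
    latencies_ms.append((time.perf_counter() - start) * 1000)

latencies_ms.sort()
print(
    f"p50={statistics.median(latencies_ms):.1f}ms "
    f"p95={latencies_ms[int(0.95 * SAMPLES)]:.1f}ms "
    f"max={latencies_ms[-1]:.1f}ms"
)
```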

Example: API Performance Optimization

Current: API responds in 500ms average
Goal: Reduce to <100ms

Turn 1: "Profile the API under realistic load. Show bottlenecks."
Result: Database queries taking 450ms

Turn 2: "Analyze the slow queries. What makes them slow?"
Result: N+1 query pattern in user relationship loading

Turn 3: "Optimize with JOIN instead of N+1. Benchmark before/after."
Result: Queries now 50ms (90% improvement)

Turn 4: "Add Redis caching for user data. Benchmark with cache."
Result: Cache hits <10ms

Turn 5: "Run full benchmark suite. Compare to baseline."
Result: 500ms → 45ms average (91% improvement achieved)
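
The Turn 3 change is the classic N+1 fix: load the related rows in the same query instead of lazily, one extra query per user. A sketch in SQLAlchemy 2.0 style; the User/Team models are hypothetical stand-ins for whatever relationship the profiling pointed at.

```python
# N+1 pattern versus an eager-loaded join, with hypothetical User/Team models.
from sqlalchemy import ForeignKey, select
from sqlalchemy.orm import (
    DeclarativeBase,
    Mapped,
    Session,
    joinedload,
    mapped_column,
    relationship,
)


class Base(DeclarativeBase):
    pass


class Team(Base):
    __tablename__ = "teams"
    id: Mapped[int] = mapped_column(primary_key=True)
    name: Mapped[str]


class User(Base):
    __tablename__ = "users"
    id: Mapped[int] = mapped_column(primary_key=True)
    name: Mapped[str]
    team_id: Mapped[int] = mapped_column(ForeignKey("teams.id"))
    team: Mapped[Team] = relationship()


def load_users_naive(session: Session) -> list[tuple[str, str]]:
    # N+1: one query for users, then one lazy query for each u.team access.
    users = session.execute(select(User)).scalars().all()
    return [(u.name, u.team.name) for u in users]


def load_users_joined(session: Session) -> list[tuple[str, str]]:
    # Single query with the relationship eager-loaded -- the 450ms-to-50ms change.
    stmt = select(User).options(joinedload(User.team))
    users = session.execute(stmt).scalars().all()
    return [(u.name, u.team.name) for u in users]
```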

Debugging Projects

Systematic Bug Investigation

Bug report: Users can't log in intermittently

Turn 1: Reproduce
"Set up test environment and attempt to reproduce the intermittent login failure.
Try multiple times to understand conditions."

Turn 2: Gather Evidence
"Check systematically:
- Server logs during failure periods
- Database connection status
- Authentication service health
- Network latency or timeouts
- Resource constraints (CPU, memory)
Show me evidence for each."

Turn 3: Form Hypotheses
"Based on evidence, form 3 hypotheses for root cause.
List in order of likelihood with reasoning."

Turn 4: Test Hypotheses
"Test each hypothesis systematically with controlled experiments.
Show results and conclusions."

Turn 5: Implement Fix
"Implement fix for confirmed root cause.
Add regression test that would have caught this bug."

Turn 6: Validate
"Verify:
- Bug no longer reproduces (try 50+ times)
- Regression test catches the failure condition
- No new bugs introduced (all tests pass)
- Performance not degraded"
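
The Turn 5 regression test can encode the Turn 6 reproduction criterion directly, so a future flake fails CI instead of reaching users. A sketch with a hypothetical login helper:

```python
# Regression test for the intermittent login failure. myapp.auth.login and the
# credentials are hypothetical stand-ins; the repeated attempts mirror the
# "try 50+ times" criterion, since a single pass proves little for a flaky bug.
import pytest

from myapp.auth import login  # hypothetical module under test

ATTEMPTS = 50


@pytest.mark.parametrize("attempt", range(ATTEMPTS))
def test_login_is_stable(attempt):
    # Each attempt must succeed; one flaky failure fails the suite and points back
    # at the confirmed root cause.
    assert login("alice@example.com", "correct-password") is True
```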

Multi-Session Strategies

Parallel Development

For independent features:
Session A: User Authentication System
- Auth logic, JWT, permissions
- Develops independently
- Can merge first

Session B: Payment Processing
- Payment flow, Stripe integration
- Independent development
- Merges separately

Session C: Email Notifications
- Email templates, SendGrid
- Independent
- Own timeline
Benefits: Better capacity management per session, clearer focus, independent PRs, parallel progress.

Serial Deep Work

For complex, dependent features:
Session 1: Foundation (2-4 hours)
- Core data models and relationships
- Database schema
- Basic API structure
- Integration tests for foundation

Session 2: Business Logic (3-6 hours)
- Core feature implementation
- Business rules and validation
- Comprehensive testing
- Error handling

Session 3: Integration (2-4 hours)
- External service connections
- Monitoring and alerting
- Performance optimization
- Documentation

Session 4: Polish (1-2 hours)
- Code review and cleanup
- Final performance testing
- Complete documentation
- Ready for production

Domain-Specific Patterns

Machine Learning Projects

Session 1: Data Pipeline
- Data loading from sources
- Preprocessing and cleaning
- Feature engineering
- Train/validation/test splitting

Session 2: Model Development
- Model architecture
- Training loop implementation
- Hyperparameter tuning
- Experiment tracking

Session 3: Evaluation
- Metrics calculation
- Validation on test set
- Performance analysis
- Error analysis

Session 4: Production Readiness
- Model serialization
- Inference API implementation
- Monitoring and logging
- Deployment preparation

Distributed Systems

Session 1: Service Definition
- API contracts (OpenAPI/gRPC)
- Data models and schemas
- Communication protocols
- Service boundaries

Session 2: Core Service
- Primary service implementation
- Unit tests
- Integration tests
- Error handling

Session 3: Secondary Services
- Dependent services
- Inter-service communication
- Failure handling
- Retry logic

Session 4: System Integration
- End-to-end testing
- Load testing
- Failure mode testing
- Monitoring and alerting

Avoid These Anti-Patterns

The Everything Session

Don’t bundle:
Single session trying to:
- Implement 5 features
- Refactor 10 files
- Fix 20 bugs
- Update all documentation
Do focus:
One session per major goal:
- Session A: Implement feature X
- Session B: Refactor authentication
- Session C: Fix critical bugs

Scope Creep

Don’t expand mid-session:
Start: "Implement login"
Midway: "Also add OAuth"
Later: "And social login"
End: "Plus 2FA and recovery"
Do complete then expand:
Session 1: Basic login (complete)
Session 2: OAuth integration (complete)
Session 3: Social providers (complete)
Session 4: 2FA system (complete)

Validation-Light

Don’t skip validation:
Implement → "Looks good" → Create PR
(No tests, no benchmarks, no verification)
Do validate systematically:
Implement → Full tests → Benchmarks → Edge cases → PR

Measure Session Success

Quality Indicators

Good sessions produce:
  • Working code (all tests pass)
  • Evidence of performance (benchmarks shown)
  • Comprehensive documentation
  • Clear understanding
  • Reusable knowledge
Poor sessions show:
  • Code “should work” (not tested)
  • Optimization without measurements
  • Missing documentation
  • Unclear accomplishments

Post-Session Checklist

After significant work:
  • All tests passing (full suite run)
  • Performance validated (benchmarks if relevant)
  • Documentation updated
  • Code follows conventions
  • Edge cases handled
  • Ready for PR or delivery

Next Steps

Apply these strategies to your work: