About Strategy Selection

Different projects benefit from different session structures. This is not a matter of preference; it reflects genuine differences in how work unfolds across project types. The key variables are:
  • Scope: How much code needs to change?
  • Dependencies: Do changes depend on each other, or are they independent?
  • Risk: How much damage can a wrong approach cause?
  • Validation complexity: How hard is it to prove the work is correct?
These variables determine whether you should use a single session or multiple sessions, whether you should plan extensively or iterate quickly, and how much oversight to apply.

About Greenfield vs. Existing Codebases

Greenfield projects (starting from scratch) and modifications to existing codebases require fundamentally different approaches.

Greenfield projects offer freedom but lack guardrails. There are no existing tests to preserve, no patterns to follow, and no architecture to understand. The risk is not breaking things — it is building the wrong thing. Strategy should emphasize planning and specification before implementation.

Existing codebases provide context and constraints. There are patterns to follow, tests to preserve, and architecture to respect. The risk is regression — breaking something that works. Strategy should emphasize discovery and understanding before modification, and continuous test validation during changes.

The single most common mistake with existing codebases is modifying before understanding. Taking time upfront to have Maestro analyze the architecture, identify integration points, and understand existing patterns prevents costly rework later.

About Single vs. Multiple Sessions

Session capacity is finite by design. This creates a natural question: should you use one long session or multiple focused sessions?

Single sessions maintain complete context. Maestro remembers everything that happened, every decision made, every constraint established. This is valuable for work where later stages depend on understanding from earlier stages.

Multiple sessions provide fresh capacity and clearer focus. Each session starts with full capacity and a single concern. This is valuable for independent work streams that can proceed in parallel.

The trade-off is context versus capacity. A single session preserves context but gradually fills capacity. Multiple sessions preserve capacity but lose cross-session context (though you can bridge sessions with synopses and file uploads).

General guidance: Use single sessions for work where everything is interconnected. Use multiple sessions for independent components that can be integrated later.
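The trade-off above can be sketched as a rough decision heuristic. This is illustrative only: the function name, inputs, and return strings are invented for this sketch and are not part of Maestro.

```python
# Illustrative heuristic only. The inputs and return values are invented
# for this sketch; Maestro has no such API.

def choose_session_structure(interconnected: bool, independent_streams: int) -> str:
    """Suggest a session structure from two of the key variables."""
    if interconnected:
        # Later stages depend on earlier context: keep one session.
        return "single session"
    if independent_streams > 1:
        # Fresh capacity per stream; integrate in a follow-up session.
        return "multiple sessions (integrate later)"
    return "single session"

print(choose_session_structure(interconnected=False, independent_streams=3))
```

Real projects mix these variables, so treat the output as a starting point, not a rule.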

About Phased Approaches

Complex features benefit from explicit phases: research, design, implementation, validation, integration. This is not bureaucratic overhead — it prevents a specific failure mode where you invest hours of implementation before discovering a fundamental design flaw.

The phase boundaries serve as quality gates. Research should produce a clear understanding of trade-offs. Design should produce a specification. Implementation should follow the specification. Validation should prove the implementation correct. Each transition is a natural checkpoint where you can course-correct.

Phases also help with capacity management. You can compact early phases before starting later ones, preserving key decisions while freeing space for implementation details.

About Parallel vs. Serial Work

Independent features (authentication, caching, notifications) can be developed in parallel sessions. This is faster and provides better capacity management. Integration happens in a separate session where you bring the pieces together.

Dependent features (where the API layer depends on the database layer, which depends on the data model) must be developed serially. Each layer builds on the previous one, and the context from earlier work is essential.

The hybrid approach — serial phases within parallel sessions — works well for large systems. Define the data model and API contracts first (serial), then implement services in parallel, then integrate (serial).
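The contracts-first step of the hybrid approach can be sketched in code. Assuming a Python codebase, the names below (`User`, `UserService`, `InMemoryUserService`) are hypothetical examples of the artifacts you would pin down serially before parallel implementation begins:

```python
# Hypothetical contract definitions, written before any parallel work starts.
# The names here are illustrative, not taken from an existing codebase.
from dataclasses import dataclass
from typing import Optional, Protocol


@dataclass(frozen=True)
class User:
    """Data model agreed on first (serial phase)."""
    id: int
    email: str


class UserService(Protocol):
    """API contract each parallel session implements or codes against."""
    def get_user(self, user_id: int) -> Optional[User]: ...
    def create_user(self, email: str) -> User: ...


# With the contract fixed, one session can build an implementation while
# another builds callers against the Protocol; integration stays cheap.
class InMemoryUserService:
    def __init__(self) -> None:
        self._users: dict[int, User] = {}
        self._next_id = 1

    def get_user(self, user_id: int) -> Optional[User]:
        return self._users.get(user_id)

    def create_user(self, email: str) -> User:
        user = User(id=self._next_id, email=email)
        self._users[user.id] = user
        self._next_id += 1
        return user
```

Because callers depend only on the `Protocol`, the parallel sessions never need to see each other's code until the integration session.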

About Performance Optimization Sessions

Performance work has a unique structure because it is inherently measurement-driven. The strategy is always:
  1. Establish baseline with reproducible benchmarks
  2. Profile to identify actual bottlenecks (not guessed ones)
  3. Change one thing at a time
  4. Measure after each change
  5. Compare to baseline
This structure exists because intuition about performance is unreliable. Code that looks slow may be fast. Code that looks efficient may be the bottleneck. Only measurement tells the truth.
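The five steps above can be sketched with the standard library. The two workload functions are placeholder stand-ins; the structure — establish a baseline, change one thing, re-measure, compare — is the point:

```python
# Minimal measure-then-compare sketch using only the standard library.
# `baseline_impl` and `candidate_impl` are placeholder workloads.
import timeit


def baseline_impl() -> int:
    return sum(i * i for i in range(10_000))


def candidate_impl() -> int:
    # One change at a time: a list comprehension instead of a generator.
    return sum([i * i for i in range(10_000)])


def measure(fn, repeat: int = 5, number: int = 100) -> float:
    """Best-of-N timing, for a reproducible benchmark."""
    return min(timeit.repeat(fn, repeat=repeat, number=number))


# Prove the change preserved behavior before comparing speed.
assert candidate_impl() == baseline_impl()

baseline = measure(baseline_impl)
candidate = measure(candidate_impl)
print(f"baseline:  {baseline:.4f}s")
print(f"candidate: {candidate:.4f}s")
print(f"ratio:     {baseline / candidate:.2f}x")
```

Taking the minimum of several repeats reduces noise from the machine; the correctness assertion guards against "optimizations" that quietly change the result.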

About Multi-Session Knowledge Transfer

When work spans multiple sessions, knowledge transfer becomes important. The /synopsis command creates a comprehensive summary that can be uploaded to a new session. This provides enough context for the new session to continue intelligently. The key to effective transfer is capturing decisions and rationale, not just implementation details. “We chose Redis over Memcached because our workload needs persistence” is more valuable in a transfer than a list of files changed.
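A transfer-ready synopsis might look like the hypothetical excerpt below. The project details are invented; the structure to notice is that decisions and rationale lead, and the file list follows:

```
Session synopsis (hypothetical example)

Decisions:
- Chose Redis over Memcached: workload requires persistence across restarts.
- Cache keys are namespaced per tenant to avoid cross-tenant reads.

Open questions:
- Eviction policy not yet benchmarked; allkeys-lru assumed for now.

Files changed: cache/client.py, cache/config.py
```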

Anti-Patterns

  • The Everything Session: Trying to implement five features, refactor ten files, fix twenty bugs, and update all documentation in a single session. This overwhelms capacity and diffuses focus.
  • The Scope-Creep Session: Starting with “implement login” and progressively adding OAuth, social login, 2FA, and password recovery without completing the original scope.
  • The Validation-Light Session: Implementing a feature, declaring it “looks good,” and creating a PR without running tests or benchmarks. Validation is not optional.
  • The Assumption Session: Accepting “this probably works” or “performance is likely fine” instead of demanding evidence. Assumptions are the enemy of quality.

Further Reading