# Convoys: Multi-Agent Orchestration
Convoys enable Panopticon to run multiple AI agents in parallel for complex tasks like code review. Instead of a single agent doing everything, specialized agents focus on specific concerns and a synthesis agent combines their findings.

## Why Convoys?
When reviewing code, a single AI agent must context-switch between:

- Checking for logic errors
- Looking for security vulnerabilities
- Analyzing performance issues

The result:

- Shallow reviews - can’t go deep on everything
- Missed issues - focusing on one area means missing others
- Long sequential execution - work can’t be parallelized

With a convoy:

- 3 focused agents review in parallel (wall-clock time ≈ the slowest agent, not the sum)
- Each agent goes deep in its domain
- A synthesis agent combines findings with prioritization
## Quick Start
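The exact CLI is documented in the Convoy Commands reference; as a hypothetical sketch only (the command and flag names below are assumptions, not the actual interface):

```shell
# Hypothetical invocation - see the Convoy Commands reference for real syntax.
panopticon convoy run code-review --files "src/**/*.ts"

# Output lands under .claude/reviews/; the synthesis report is the
# single file to read first.
```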
## How the Code Review Convoy Works
### Phase 1: Specialized Reviews (Parallel)
Three specialized agents run simultaneously, each focusing on a specific concern:

| Agent | Model | Focus Areas |
|---|---|---|
| Correctness | Haiku | Logic errors, edge cases, type safety, null handling |
| Security | Sonnet | OWASP Top 10, injection vulnerabilities, XSS, auth issues |
| Performance | Haiku | N+1 queries, blocking operations, memory leaks, algorithm complexity |

Each agent:

- Reviews the specified files independently
- Writes findings to `.claude/reviews/<timestamp>-<domain>.md`
- Can go deep without worrying about other concerns
- Runs in parallel (total time = slowest agent, not sum of all)
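The parallel phase can be sketched with a thread pool. This is a minimal sketch, not Panopticon's implementation: `run_agent`, the sleep times, and the file names are all illustrative stand-ins for real AI agent runs.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for one specialized review agent: in Panopticon this
# would be a real AI agent run; here it just simulates work and returns the
# path of the findings file it would have written.
def run_agent(domain: str, seconds: float) -> str:
    time.sleep(seconds)
    return f".claude/reviews/20240101-{domain}.md"

agents = [("correctness", 0.2), ("security", 0.3), ("performance", 0.1)]

start = time.monotonic()
with ThreadPoolExecutor() as pool:
    # All three reviews start at once; wall time ≈ the slowest agent.
    reports = list(pool.map(lambda a: run_agent(*a), agents))
elapsed = time.monotonic() - start

print(reports)   # one findings file per domain
print(elapsed)   # ≈ 0.3s (slowest agent), not 0.6s (the sum of all three)
```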
### Phase 2: Synthesis (Sequential)
After all specialized reviews complete, a synthesis agent:

- Reads all review files - ingests findings from all three agents
- Removes duplicates - the same issue found by multiple reviewers is merged
- Prioritizes findings - orders by severity × impact
- Generates a unified report - a single actionable document

The report is written to `.claude/reviews/<timestamp>-synthesis.md`.
## What is Synthesis?
Synthesis is the process of combining findings from multiple parallel agents into a single, prioritized, actionable report. Without synthesis, after 3 parallel reviews you get:

- 3 separate markdown files to read
- Duplicate findings (same issue reported differently)
- No prioritization (which to fix first?)
- Mental overhead to merge them yourself

With synthesis, you get:

- A single unified report
- Deduplicated findings
- AI-prioritized by severity × impact
- Clear action items
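At its core, synthesis amounts to merge, dedupe, and sort. A minimal sketch in plain Python, assuming a made-up finding schema (the fields and the 1-5 severity/impact scales are illustrative assumptions, not Panopticon's actual format):

```python
# Each finding: (file, issue, severity 1-5, impact 1-5), as one agent reports it.
correctness = [("api.ts", "null deref in parse()", 4, 3)]
security    = [("api.ts", "null deref in parse()", 4, 3),   # duplicate of the above
               ("auth.ts", "SQL injection in login()", 5, 5)]
performance = [("db.ts", "N+1 query in listUsers()", 3, 4)]

# Merge and dedupe: identical findings from multiple agents collapse to one
# (dict.fromkeys keeps first-seen order while dropping repeats).
merged = list(dict.fromkeys(correctness + security + performance))

# Prioritize by severity x impact, highest first.
merged.sort(key=lambda f: f[2] * f[3], reverse=True)

for file, issue, sev, imp in merged:
    print(f"[{sev * imp:>2}] {file}: {issue}")
# [25] auth.ts: SQL injection in login()
# [12] api.ts: null deref in parse()
# [12] db.ts: N+1 query in listUsers()
```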
## Built-in Convoy Templates
| Template | Agents | Use Case |
|---|---|---|
| code-review | correctness, security, performance, synthesis | Comprehensive code review |
| planning | planner | Codebase exploration and planning |
| triage | (dynamic) | Parallel issue triage |
| health-monitor | monitor | Check health of running agents |
### code-review Template
The most commonly used convoy. It runs three specialized code reviewers in parallel, then synthesizes the results. Output files:

- `.claude/reviews/<timestamp>-correctness.md`
- `.claude/reviews/<timestamp>-security.md`
- `.claude/reviews/<timestamp>-performance.md`
- `.claude/reviews/<timestamp>-synthesis.md` (prioritized combined report)
## Convoy Commands

See the Convoy Commands CLI reference for the full command list.
## Custom Convoy Templates
Create custom templates in `~/.panopticon/convoy-templates/`:
### Template Structure
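As a sketch only - the file format and every field name below are assumptions, not the actual schema; check the real template files for the supported keys:

```yaml
# ~/.panopticon/convoy-templates/my-review.yaml (hypothetical schema)
name: my-review
agents:
  - name: correctness
    model: haiku
    prompt: "Review for logic errors, edge cases, and type safety."
  - name: security
    model: sonnet
    prompt: "Review for OWASP Top 10 issues, injection, XSS, and auth bugs."
  - name: synthesis
    after: [correctness, security]   # runs after the parallel phase completes
    prompt: "Merge, dedupe, and prioritize the findings above."
```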
`agents` - Array of agent configurations

## Convoy Lifecycle
## Monitoring Convoys
Dashboard integration: the Panopticon dashboard shows

- Active convoys with progress
- Phase completion (parallel vs synthesis)
- Individual agent status
- Output file links
## Performance Benefits
Sequential review (single agent):

- 10 minutes for correctness
- 10 minutes for security
- 10 minutes for performance
- Total: 30 minutes

Convoy review (parallel):

- Phase 1: 10 minutes (all three run simultaneously)
- Phase 2: 3 minutes (synthesis)
- Total: 13 minutes (~2.3x faster)
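The arithmetic behind those numbers: sequential time is the sum of all reviews, convoy time is the slowest parallel review plus the synthesis pass.

```python
# Wall-clock comparison: sequential vs. convoy review (times in minutes).
review_times = {"correctness": 10, "security": 10, "performance": 10}
synthesis_time = 3

sequential = sum(review_times.values())               # one agent does everything
convoy = max(review_times.values()) + synthesis_time  # parallel phase + synthesis

speedup = sequential / convoy
print(f"{sequential} min -> {convoy} min ({speedup:.1f}x faster)")
# 30 min -> 13 min (2.3x faster)
```

Note the speedup grows with the number of balanced parallel agents, but synthesis time puts a floor under the total.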
## Use Cases
Code Review:

- Pre-merge quality checks
- Security audits
- Performance optimization reviews

Planning:

- Explore multiple architectural approaches simultaneously
- Research competing libraries in parallel
- Evaluate different implementation strategies

Triage:

- Categorize backlog items in parallel
- Estimate complexity across multiple issues
- Prioritize work queue

Health Monitoring:

- Check status of all running agents
- Detect stuck or crashed agents
- Analyze system health across projects
## Best Practices
When to use convoys:

- Task can be split into independent concerns (security, performance, etc.)
- You need comprehensive coverage (not just surface-level review)
- Speed matters (parallel execution valuable)
- Results need synthesis (combining findings)

When NOT to use convoys:

- Task is inherently sequential (one step depends on another)
- Simple, focused review (single-agent is faster to set up)
- Findings don’t benefit from synthesis (independent results)

Design tips:

- Keep agents focused - each should have a clear, narrow responsibility
- Balance workload - Aim for similar execution times across parallel agents
- Design for synthesis - Structure output so synthesis can combine effectively
- Monitor costs - Multiple agents = multiple API calls
## Troubleshooting
Convoy stuck in “running” state:

- Check that all parallel agents completed successfully
- Verify output files exist in expected location
- Review synthesis agent logs in tmux session

Agents producing missing or empty output:

- Check for permission issues (can they access the files?)
- Verify file patterns match actual files
- Review agent prompts for clarity
## Related Guides
- Convoy Commands - CLI reference
- Skills - Subagents and skill system
- Specialists - Long-running specialist agents
- Cloister - AI lifecycle management