When a single model gets the wrong answer, asking it to “double-check” rarely helps: it lacks an outside perspective. Two agents with different roles, going back and forth, often catch what one would miss. This pattern wires a proposer and a critic together with RoundRobin orchestration.
What you’ll build
Two agents, three rounds. Round 1: the proposer drafts an answer. Round 2: the critic finds problems. Round 3: the proposer revises in response. The final reply is the proposer's revised answer.
How it works
- `RoundRobin` alternates agents in registration order for `rounds` cycles. Each agent gets the running transcript.
- The proposer's role is to commit to a position. The critic's job is only to find weaknesses. Forcing this asymmetry is the whole point.
- The transcript is the shared state. There is no special message passing: each agent sees what the other said.
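The alternation described above can be sketched in standalone Rust. The `Agent` type and both agent bodies here are stand-ins for illustration, not the library's API:

```rust
// Stand-in for an agent: a function from the transcript so far
// to its next message. Not the cognis type; illustrative only.
type Agent = fn(&[String]) -> String;

/// Run `rounds` turns, alternating agents in registration order.
/// The whole transcript is the shared state; every agent sees all of it.
fn round_robin(agents: &[(&str, Agent)], rounds: usize, task: &str) -> Vec<String> {
    let mut transcript = vec![format!("task: {task}")];
    for turn in 0..rounds {
        let (name, agent) = agents[turn % agents.len()];
        let reply = agent(&transcript);
        transcript.push(format!("{name}: {reply}"));
    }
    transcript
}

fn proposer(t: &[String]) -> String {
    // Commits to a position on its first turn, revises on later turns.
    if t.len() == 1 { "draft answer".into() } else { "revised answer".into() }
}

fn critic(_t: &[String]) -> String {
    // Only finds weaknesses; never proposes alternatives.
    "weakness: unsupported claim in step 2".into()
}
```

With `round_robin(&[("proposer", proposer as Agent), ("critic", critic as Agent)], 3, "…")`, the transcript ends on the proposer's revision, matching the three-round flow above.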
The code
`RoundRobin::default()` runs 3 rounds. For more, use `RoundRobin::new(rounds)`.
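The library's own implementation isn't reproduced here; as a standalone sketch of what the two constructors mean, a minimal `RoundRobin` might look like this:

```rust
/// Minimal stand-in for the orchestrator: holds only the round count.
/// Illustrative sketch, not cognis's actual implementation.
struct RoundRobin {
    rounds: usize,
}

impl RoundRobin {
    /// Explicit round count, e.g. `RoundRobin::new(5)` for deeper iteration.
    fn new(rounds: usize) -> Self {
        Self { rounds }
    }
}

impl Default for RoundRobin {
    /// `RoundRobin::default()` runs 3 rounds: propose, critique, revise.
    fn default() -> Self {
        Self::new(3)
    }
}
```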
How it works
- Both agents have a low `max_iterations` (2). Each round is one model call plus optional tool use; we don't want either side to spiral inside its own loop.
- The system prompts encode the asymmetry. Without it, both agents tend to “be balanced” and the disagreement collapses. The “do NOT propose alternatives” line in the critic's prompt is load-bearing.
- The final reply is whoever spoke last. With 3 rounds and the proposer first, that's the proposer's revised answer, which is what you usually want.
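The exact prompt wording isn't fixed by the pattern; the pair below is a sketch of the asymmetry described above (wording is illustrative, not from the library):

```rust
// Illustrative system prompts; the exact wording is yours to tune.
// What matters is the asymmetry: one side commits, the other only attacks.
const PROPOSER_PROMPT: &str = "\
You are the proposer. Commit to a single, concrete answer. \
When criticized, revise your answer rather than defending it blindly.";

const CRITIC_PROMPT: &str = "\
You are the critic. Find weaknesses, gaps, and unsupported claims. \
Do NOT propose alternatives; your only job is to attack the proposal.";
```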
Variations
| Variation | How |
|---|---|
| More rounds | `RoundRobin::new(5)` for deeper iteration. |
| Three voices | Add a third agent (a moderator, a domain expert); round-robin still works. |
| Vote at the end | Replace the final round with `ParallelVote` over multiple proposer instances. |
| Tool-using critic | Give the critic a search tool so it can fact-check claims. |
| Stop early on agreement | Implement a custom `HandoffStrategy` that watches for “I agree” and terminates. |
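The stop-early variation needs only a termination check. A custom HandoffStrategy could wrap something like the function below (standalone sketch; the real trait signature lives in the library):

```rust
/// Returns true when the debate should stop early: the latest message
/// signals agreement, so further rounds would add nothing.
/// A custom HandoffStrategy would run this check after every turn.
fn should_terminate(transcript: &[String]) -> bool {
    transcript
        .last()
        .map(|msg| msg.to_lowercase().contains("i agree"))
        .unwrap_or(false)
}
```

Matching case-insensitively avoids missing “I Agree”; a production check would likely look for a structured marker rather than free text.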
When this pattern shines
- Open-ended questions without a known correct answer.
- Trade-off decisions where the right call depends on context.
- Hallucination-prone domains — the critic’s job is hallucination-spotting.
When it’s overkill
- Simple lookups (a single agent with a tool is faster).
- Math and code (correctness, not perspective, is the bottleneck — use a calculator or a compiler).
- Real-time chat (debate is slow; users wait for round 3).
See also
- Multi-agent orchestration: strategies, custom handoffs, AgentBus.
- Patterns → Research assistant: a sequential pipeline alternative.
- Building agents → Memory: give each agent its own memory.