Multi-Agent Orchestration
Combine agents into structured flows using orchestrator agents and delegation patterns. Sequential pipelines, parallel fan-out, iterative loops, dynamic handoff, and coordinator-dispatcher are all supported within a single ADK workflow.
Sequential pipeline
A SequentialAgent runs sub-agents one after another. Each agent can read the outputs of previous agents through session state. This pattern is ideal for multi-step processing where each step builds on the previous one.
import { GraphCompose } from '@graph-compose/client'
import {
createLlmAgent,
createSequentialAgent,
createSubAgentReference,
} from '@graph-compose/client/adk-helpers'
const graph = new GraphCompose({ token: 'your-token' })
graph
.adk('claims_processor')
.withWorkflow(builder =>
builder
.rootAgent('pipeline')
.agent(
createLlmAgent({
id: 'document_analyzer',
httpConfig: { url: 'https://llm.example.com/chat', method: 'POST' },
instructions: 'Extract key information from the claim document.',
outputKey: 'document_analysis',
}),
)
.agent(
createLlmAgent({
id: 'fraud_detector',
httpConfig: { url: 'https://llm.example.com/chat', method: 'POST' },
instructions: 'Analyze the document_analysis in session state for fraud indicators.',
outputKey: 'fraud_analysis',
}),
)
.agent(
createLlmAgent({
id: 'decision_agent',
httpConfig: { url: 'https://llm.example.com/chat', method: 'POST' },
instructions: 'Make a claim decision based on document_analysis and fraud_analysis.',
outputKey: 'decision',
}),
)
.agent(
createSequentialAgent({
id: 'pipeline',
subAgents: [
createSubAgentReference('document_analyzer'),
createSubAgentReference('fraud_detector'),
createSubAgentReference('decision_agent'),
],
outputKey: 'final_result',
}),
)
.build(),
)
.withInitialPrompt('Process claim CLM-456 for auto accident')
.end()
Each agent writes its output to session state via outputKey. The next agent in the sequence receives the accumulated state in its state field.
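As a concrete illustration, this is the approximate shape of session state after the pipeline above completes. The keys come from the outputKey values in the example; the string values are placeholders, not real agent output.

```typescript
// Illustrative only: keys match the outputKey values configured above;
// values are placeholders standing in for each agent's actual output.
const sessionStateAfterRun: Record<string, string> = {
  document_analysis: '<output of document_analyzer>',
  fraud_analysis: '<output of fraud_detector>',
  decision: '<output of decision_agent>',
  final_result: '<output of the pipeline agent>',
}

// fraud_detector ran second, so it could read document_analysis,
// but decision did not exist yet when it ran.
const keys = Object.keys(sessionStateAfterRun)
```

Because state accumulates in insertion order, each downstream agent can reference any earlier key by name in its instructions, as fraud_detector and decision_agent do above.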
Parallel execution
A ParallelAgent runs sub-agents concurrently. Each agent operates in isolation with its own conversation history. Use this when agents can work independently and you want to gather results from multiple sources at once.
import {
createLlmAgent,
createParallelAgent,
createSequentialAgent,
createSubAgentReference,
} from '@graph-compose/client/adk-helpers'
// Three research agents run concurrently
const flightAgent = createLlmAgent({
id: 'flight_agent',
httpConfig: { url: 'https://llm.example.com/chat', method: 'POST' },
instructions: 'Search for flights.',
tools: ['search_flights'],
outputKey: 'flight_data',
})
const hotelAgent = createLlmAgent({
id: 'hotel_agent',
httpConfig: { url: 'https://llm.example.com/chat', method: 'POST' },
instructions: 'Search for hotels.',
tools: ['search_hotels'],
outputKey: 'hotel_data',
})
const activityAgent = createLlmAgent({
id: 'activity_agent',
httpConfig: { url: 'https://llm.example.com/chat', method: 'POST' },
instructions: 'Search for activities.',
tools: ['search_activities'],
outputKey: 'activity_data',
})
// Synthesizer combines all parallel results
const synthesizer = createLlmAgent({
id: 'synthesizer',
httpConfig: { url: 'https://llm.example.com/chat', method: 'POST' },
instructions: 'Combine flight_data, hotel_data, and activity_data from session state into a travel itinerary.',
outputKey: 'itinerary',
})
// Orchestration: parallel research, then synthesis
const parallel = createParallelAgent({
id: 'parallel_research',
subAgents: [
createSubAgentReference('flight_agent'),
createSubAgentReference('hotel_agent'),
createSubAgentReference('activity_agent'),
],
})
const orchestrator = createSequentialAgent({
id: 'travel_planner',
subAgents: [
createSubAgentReference('parallel_research'),
createSubAgentReference('synthesizer'),
],
})
Parallel agents maintain isolated conversation histories: flight_agent sees only its own tool calls and responses, not hotel_agent's. Give each parallel agent an outputKey so the synthesizer can read all results from session state.
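The way isolated parallel results land in one shared state can be sketched in plain TypeScript. This is an illustrative model, not SDK code; mergeParallelOutputs is a hypothetical helper.

```typescript
// Illustrative sketch, not SDK code: outputs from isolated parallel
// agents merge into one shared session state under their outputKeys.
type SessionState = Record<string, unknown>

function mergeParallelOutputs(
  state: SessionState,
  outputs: { outputKey: string; value: unknown }[],
): SessionState {
  // Each agent writes only to its own key, so concurrent results
  // never collide as long as the outputKeys are distinct.
  return outputs.reduce(
    (acc, { outputKey, value }) => ({ ...acc, [outputKey]: value }),
    { ...state },
  )
}

const merged = mergeParallelOutputs({}, [
  { outputKey: 'flight_data', value: 'LIS-JFK, $420' },
  { outputKey: 'hotel_data', value: 'Hotel Avenida, $120/night' },
  { outputKey: 'activity_data', value: 'Tram 28 city tour' },
])
// merged now holds all three keys for the synthesizer to read
```

This is why distinct outputKeys matter in the parallel pattern: they are the only channel through which isolated agents can share results.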
Loop iteration
A LoopAgent repeats its sub-agents until a termination condition is met. This is useful for iterative refinement, quality checks, or polling workflows.
import {
createLlmAgent,
createLoopAgent,
createSubAgentReference,
} from '@graph-compose/client/adk-helpers'
const writer = createLlmAgent({
id: 'writer',
httpConfig: { url: 'https://llm.example.com/chat', method: 'POST' },
instructions: 'Write or revise content based on reviewer feedback in session state.',
outputKey: 'draft',
})
const reviewer = createLlmAgent({
id: 'reviewer',
httpConfig: { url: 'https://llm.example.com/chat', method: 'POST' },
instructions: 'Review the draft. Set quality_status to "approved" if ready, or provide feedback.',
outputKey: 'review_feedback',
})
const loop = createLoopAgent({
id: 'quality_loop',
subAgents: [
createSubAgentReference('writer'),
createSubAgentReference('reviewer'),
],
maxAgentLoopIterations: 5,
loopExitCondition: "session_state.get('quality_status') == 'approved'",
outputKey: 'final_content',
})
The loop tracks its progress in session state with current_agent_loop_iteration (0-based). When the loop ends, loop_exit_reason indicates why: "max_agent_loop_iterations", "escalate", or "exit_condition".
Reaching max iterations is a normal completion: the loop ends and the parent agent continues. An escalate signal from a sub-agent is different: it stops the loop and propagates upward, terminating the parent as well.
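The loop semantics described above can be sketched in plain TypeScript. This is an illustrative model, not SDK code; runLoop and its signature are hypothetical.

```typescript
// Illustrative sketch of LoopAgent semantics; not SDK code.
type ExitReason = 'exit_condition' | 'max_agent_loop_iterations' | 'escalate'

function runLoop(
  state: Map<string, string>,
  iteration: (state: Map<string, string>, i: number) => 'ok' | 'escalate',
  maxIterations: number,
  exitCondition: (state: Map<string, string>) => boolean,
): ExitReason {
  for (let i = 0; i < maxIterations; i++) {
    state.set('current_agent_loop_iteration', String(i)) // 0-based, as documented
    if (iteration(state, i) === 'escalate') return 'escalate' // propagates to parent
    if (exitCondition(state)) return 'exit_condition'
  }
  return 'max_agent_loop_iterations' // normal completion; parent continues
}

// Simulated writer/reviewer pair: the reviewer approves on the third pass.
const loopState = new Map<string, string>()
const reason = runLoop(
  loopState,
  (s, i) => {
    if (i === 2) s.set('quality_status', 'approved')
    return 'ok'
  },
  5,
  s => s.get('quality_status') === 'approved',
)
```

In this simulation the condition becomes true on iteration 2, so the loop reports "exit_condition" rather than exhausting all five iterations.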
Agent handoff
The agent handoff pattern uses an LlmAgent's subAgents field to define nested specialist agents. The parent agent dynamically routes conversations to the appropriate specialist using the built-in transfer_to_agent tool.
import { createLlmAgent } from '@graph-compose/client/adk-helpers'
const router = createLlmAgent({
id: 'RouterAgent',
httpConfig: { url: 'https://llm.example.com/router', method: 'POST' },
instructions: 'Analyze user intent and transfer to the appropriate specialist.',
subAgents: [
createLlmAgent({
id: 'GreetingAgent',
httpConfig: { url: 'https://llm.example.com/greeting', method: 'POST' },
instructions: 'Handle greetings warmly and professionally.',
}),
createLlmAgent({
id: 'BillingAgent',
httpConfig: { url: 'https://llm.example.com/billing', method: 'POST' },
instructions: 'Handle billing inquiries.',
tools: ['lookup_invoice'],
}),
createLlmAgent({
id: 'SupportAgent',
httpConfig: { url: 'https://llm.example.com/support', method: 'POST' },
instructions: 'Provide technical support and troubleshooting.',
tools: ['search_kb'],
}),
],
})
Key difference from orchestrators: with agent handoff, the subAgents are inline agent configurations (full objects), not references by ID. The router agent decides at runtime which specialist to invoke, based on the conversation.
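The transfer semantics can be sketched in plain TypeScript. This is illustrative only; the handoff function and Agent type are hypothetical, not the SDK's.

```typescript
// Illustrative sketch of handoff semantics; not SDK code.
type Agent = { id: string; handle: (message: string) => string }

// With transfer_to_agent, the router chooses a specialist once and
// that specialist takes over the conversation; control does not return.
function handoff(
  route: (message: string) => string,
  specialists: Agent[],
  message: string,
): string {
  const targetId = route(message)
  const target = specialists.find(a => a.id === targetId)
  if (!target) throw new Error(`unknown agent: ${targetId}`)
  return target.handle(message)
}

const specialists: Agent[] = [
  { id: 'BillingAgent', handle: m => `billing: ${m}` },
  { id: 'SupportAgent', handle: m => `support: ${m}` },
]
const reply = handoff(
  m => (m.includes('invoice') ? 'BillingAgent' : 'SupportAgent'),
  specialists,
  'Question about my invoice',
)
```

Note the one-way transfer: once the router picks BillingAgent, the reply comes from the specialist, not back through the router.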
Coordinator-dispatcher
The coordinator-dispatcher pattern uses AgentTools to let a coordinator agent delegate tasks to specialist agents. Unlike handoff (which transfers control), the coordinator remains in control and can call multiple specialists in sequence.
import {
createLlmAgent,
createAgentTool,
} from '@graph-compose/client/adk-helpers'
graph
.adk('help_desk')
.withWorkflow(builder =>
builder
.rootAgent('coordinator')
.agentTool(
createAgentTool({
id: 'billing_tool',
targetAgentId: 'BillingSpecialist',
outputKey: 'billing_result',
}),
)
.agentTool(
createAgentTool({
id: 'support_tool',
targetAgentId: 'SupportSpecialist',
outputKey: 'support_result',
}),
)
.agent(
createLlmAgent({
id: 'coordinator',
httpConfig: { url: 'https://llm.example.com/chat', method: 'POST' },
instructions: 'Route requests to the appropriate specialist tool.',
tools: ['billing_tool', 'support_tool'],
}),
)
.agent(
createLlmAgent({
id: 'BillingSpecialist',
httpConfig: { url: 'https://llm.example.com/billing', method: 'POST' },
instructions: 'Handle billing inquiries.',
tools: ['lookup_invoice'],
}),
)
.agent(
createLlmAgent({
id: 'SupportSpecialist',
httpConfig: { url: 'https://llm.example.com/support', method: 'POST' },
instructions: 'Provide technical support.',
tools: ['search_kb'],
}),
)
.build(),
)
.withInitialPrompt('I need help with my invoice')
.end()
The coordinator calls specialists as tools. The specialist's final response is returned to the coordinator as a function_response, and the coordinator can decide what to do next: call another specialist, respond to the user, or end the workflow.
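To contrast with handoff, the coordinator-dispatcher control flow can be sketched in plain TypeScript. This is illustrative only; the coordinate function is hypothetical, not the SDK's.

```typescript
// Illustrative sketch of coordinator-dispatcher control flow; not SDK code.
type Specialist = (request: string) => string

// Unlike handoff, the coordinator keeps control: it can call several
// specialists in turn and sees each result (as a function_response).
function coordinate(
  request: string,
  specialists: Record<string, Specialist>,
  plan: (request: string) => string[],
): string[] {
  return plan(request).map(name => specialists[name](request))
}

const results = coordinate(
  'My invoice is wrong and my login fails',
  {
    billing_tool: r => `billing checked: ${r}`,
    support_tool: r => `support checked: ${r}`,
  },
  r => {
    const picks: string[] = []
    if (r.includes('invoice')) picks.push('billing_tool')
    if (r.includes('login')) picks.push('support_tool')
    return picks
  },
)
// results has one entry per specialist the coordinator chose to call
```

Because control always returns to the coordinator, a single request can fan out to multiple specialists, which is exactly what the handoff pattern cannot do.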
Nested composition
Orchestrator agents can be nested to create complex workflows. A SequentialAgent can contain a ParallelAgent as one of its steps. A LoopAgent can contain a SequentialAgent. The nesting depth is unlimited.
This example processes insurance claims with a sequential pipeline that includes a parallel analysis step:
import {
createLlmAgent,
createSequentialAgent,
createParallelAgent,
createHttpTool,
createSubAgentReference,
} from '@graph-compose/client/adk-helpers'
graph
.adk('insurance_claims')
.withWorkflow(builder =>
builder
.rootAgent('ClaimsOrchestrator')
.httpTool(createHttpTool({
id: 'fetch_policy',
httpConfig: { url: 'https://insurance-api.example.com/policies', method: 'GET' },
outputKey: 'policy_data',
}))
.httpTool(createHttpTool({
id: 'fetch_claim_history',
httpConfig: { url: 'https://insurance-api.example.com/claims/history', method: 'POST' },
outputKey: 'claim_history',
}))
.agent(createLlmAgent({
id: 'DocumentAnalyzer',
httpConfig: { url: 'https://llm.example.com/chat', method: 'POST' },
instructions: 'Extract key claim information from the document.',
outputKey: 'document_analysis',
}))
.agent(createLlmAgent({
id: 'FraudDetector',
httpConfig: { url: 'https://llm.example.com/chat', method: 'POST' },
instructions: 'Check for fraud indicators using claim history.',
tools: ['fetch_claim_history'],
outputKey: 'fraud_analysis',
}))
.agent(createLlmAgent({
id: 'PolicyValidator',
httpConfig: { url: 'https://llm.example.com/chat', method: 'POST' },
instructions: 'Validate claim against policy coverage.',
tools: ['fetch_policy'],
outputKey: 'policy_validation',
}))
.agent(createParallelAgent({
id: 'ParallelAnalysis',
subAgents: [
createSubAgentReference('FraudDetector'),
createSubAgentReference('PolicyValidator'),
],
}))
.agent(createLlmAgent({
id: 'DecisionAgent',
httpConfig: { url: 'https://llm.example.com/chat', method: 'POST' },
instructions: 'Make claim decision using document_analysis, fraud_analysis, and policy_validation.',
outputKey: 'decision',
}))
.agent(createSequentialAgent({
id: 'ClaimsOrchestrator',
subAgents: [
createSubAgentReference('DocumentAnalyzer'),
createSubAgentReference('ParallelAnalysis'),
createSubAgentReference('DecisionAgent'),
],
outputKey: 'final_decision',
}))
.build(),
)
.withState({ policy_id: 'POL-123456', customer_id: 'CUST-789' })
.withMaxCycles(30)
.withInitialPrompt('Process insurance claim for policy POL-123456')
.end()
The execution flow:
- DocumentAnalyzer runs first, saves analysis to session state.
- ParallelAnalysis runs FraudDetector and PolicyValidator concurrently. Both can access document_analysis from session state.
- DecisionAgent runs last with all three outputs available in session state.
Best practices
- Start simple. Use a single LlmAgent before adding orchestrators. Add complexity only when needed.
- Use outputKey on every agent whose output matters to downstream agents.
- Follow ParallelAgent with a synthesizer to combine results into a coherent response.
- Set maxAgentLoopIterations on every LoopAgent to prevent infinite loops.
- Prefer coordinator-dispatcher over agent handoff when the coordinator needs to maintain context across multiple specialist calls.
- Use agent handoff when the specialist takes full control of the conversation and does not return to the router.