Agent Types
ADK workflows use four agent types. LlmAgent calls your LLM service. SequentialAgent, ParallelAgent, and LoopAgent are orchestrators that coordinate multiple LlmAgents into structured flows.
How agent execution works
Every ADK workflow follows the same cycle. The ADK worker sends a POST request to your LLM service with the conversation so far, the tools available, and the current session state. Your service processes the request and tells the worker what to do next.
What your LLM service receives
```json
{
  "messages": [
    { "role": "system", "content": "Your agent instructions..." },
    { "role": "user", "content": "The user's message or initial prompt" }
  ],
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "tool_id",
        "description": "tool_id",
        "parameters": { "type": "object", "properties": {}, "required": [] }
      }
    }
  ],
  "state": {}
}
```
Your service returns a response telling the worker what to do:
What your LLM service returns
```json
{
  "content": "Text response to the user (or null for silent tool-only turns)",
  "toolCalls": [{ "function_name": "tool_id", "function_args": {} }],
  "exitFlow": true
}
```
| Field | Type | Purpose |
|---|---|---|
| content | string \| null | Text response. Set to null when the agent only needs to call tools. |
| toolCalls | array \| null | Tool calls for the worker to execute. Each has function_name and function_args. |
| exitFlow | boolean | Set to true when the agent's task is complete. |
| escalate | boolean | Set to true to stop the parent container (loop or sequence). |
| hitlRequest | object \| null | Pause for Human-in-the-Loop approval. Contains prompt and actionDetails. |
Your service only needs to return the fields relevant to each response. A tool call response can omit content and exitFlow. A completion response can omit toolCalls. See Your HTTP Endpoints for the full specification.
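As a sketch of how a service might construct these partial responses (the interface and helper function names here are illustrative, not part of the ADK contract):

```typescript
// Illustrative helpers for the two most common response shapes.
// Field names (content, toolCalls, exitFlow) follow the contract above;
// the types and helper names are hypothetical.

interface ToolCall {
  function_name: string
  function_args: Record<string, unknown>
}

interface LlmServiceResponse {
  content?: string | null
  toolCalls?: ToolCall[] | null
  exitFlow?: boolean
}

// A silent tool-only turn: content and exitFlow are omitted entirely.
function toolCallResponse(calls: ToolCall[]): LlmServiceResponse {
  return { toolCalls: calls }
}

// A completion turn: toolCalls is omitted.
function completionResponse(text: string): LlmServiceResponse {
  return { content: text, exitFlow: true }
}

const lookup = toolCallResponse([
  { function_name: 'lookup_account', function_args: { accountId: 'A-1' } },
])
const done = completionResponse('Your order has shipped.')
```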
The four agent types differ in how they orchestrate this cycle:
| Type | Calls your LLM? | Purpose |
|---|---|---|
| LlmAgent | Yes | The building block. Calls your LLM service, executes tools, manages conversation. |
| SequentialAgent | No | Runs sub-agents in order. Each agent builds on previous outputs through session state. |
| ParallelAgent | No | Runs sub-agents concurrently with isolated conversation histories. |
| LoopAgent | No | Repeats sub-agents until a condition is met or max iterations reached. |
LlmAgent
An LlmAgent is the only agent type that calls your LLM service. It sends the request described above to your httpConfig.url, processes the response (executing any tool calls), and repeats until your service returns exitFlow: true. Every ADK workflow has at least one.
If the agent has an outputKey, its final content value is saved to session_state[outputKey] when it finishes. This is how other agents access its output. Without outputKey, the response exists in the conversation history but is not available to other agents through session state.
```typescript
import { GraphCompose } from '@graph-compose/client'
import {
  createLlmAgent,
  createHttpTool,
} from '@graph-compose/client/adk-helpers'

const graph = new GraphCompose({ token: 'your-token' })

graph
  .adk('support_workflow')
  .withWorkflow(builder =>
    builder
      .rootAgent('support_agent')
      .agent(
        createLlmAgent({
          id: 'support_agent',
          httpConfig: {
            url: 'https://llm.example.com/chat',
            method: 'POST',
          },
          instructions:
            'You are a customer support agent. Use tools to look up account information.',
          tools: ['search_kb', 'lookup_account'],
          outputKey: 'support_response',
          activityConfig: {
            startToCloseTimeout: '60s',
            retryPolicy: { maximumAttempts: 3 },
          },
        }),
      )
      .httpTool(
        createHttpTool({
          id: 'search_kb',
          httpConfig: {
            url: 'https://api.example.com/kb/search',
            method: 'POST',
          },
        }),
      )
      .httpTool(
        createHttpTool({
          id: 'lookup_account',
          httpConfig: {
            url: 'https://api.example.com/accounts',
            method: 'GET',
          },
        }),
      )
      .build(),
  )
  .withInitialPrompt('I need help with my recent order.')
  .end()

const result = await graph.execute()
```
When this workflow starts, the ADK worker sends the following to https://llm.example.com/chat:
First request to your LLM service
```json
{
  "messages": [
    { "role": "system", "content": "You are a customer support agent. Use tools to look up account information." },
    { "role": "user", "content": "I need help with my recent order." }
  ],
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "search_kb",
        "description": "search_kb",
        "parameters": { "type": "object", "properties": {}, "required": [] }
      }
    },
    {
      "type": "function",
      "function": {
        "name": "lookup_account",
        "description": "lookup_account",
        "parameters": { "type": "object", "properties": {}, "required": [] }
      }
    }
  ],
  "state": {}
}
```
Your service processes this and returns a response. If it returns toolCalls, the worker executes each tool and sends a follow-up request with the results appended to messages. If it returns content with exitFlow: true, the agent is done and the content is saved to session_state["support_response"].
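This cycle can be sketched as a simple loop. The sketch below is a simplified, synchronous model, not the actual worker implementation; callLlmService and executeTool are hypothetical stand-ins for the real HTTP calls:

```typescript
// Simplified sketch of the ADK worker's request/response cycle.
// callLlmService and executeTool are hypothetical stand-ins for HTTP calls;
// the real worker is asynchronous.

interface Message { role: string; content: string }
interface ToolCall { function_name: string; function_args: Record<string, unknown> }
interface LlmResponse {
  content?: string | null
  toolCalls?: ToolCall[] | null
  exitFlow?: boolean
}

function runLlmAgent(
  callLlmService: (messages: Message[], state: Record<string, unknown>) => LlmResponse,
  executeTool: (call: ToolCall) => string,
  messages: Message[],
  state: Record<string, unknown>,
  outputKey?: string,
): void {
  for (;;) {
    const response = callLlmService(messages, state)
    if (response.toolCalls?.length) {
      // Execute each tool and append its result for the follow-up request.
      for (const call of response.toolCalls) {
        messages.push({ role: 'tool', content: executeTool(call) })
      }
      continue
    }
    if (response.exitFlow) {
      // On completion, the final content is saved under outputKey.
      if (outputKey && response.content != null) state[outputKey] = response.content
      return
    }
    // A content-only turn without exitFlow would wait for more user input;
    // that case is omitted from this sketch.
    return
  }
}

// Mock service: one tool-call turn, then a completion turn.
let turn = 0
const sessionState: Record<string, unknown> = {}
const messages: Message[] = [{ role: 'user', content: 'I need help with my recent order.' }]
runLlmAgent(
  () => (turn++ === 0
    ? { toolCalls: [{ function_name: 'lookup_account', function_args: {} }] }
    : { content: 'Order found.', exitFlow: true }),
  () => '{"order":"OK"}',
  messages,
  sessionState,
  'support_response',
)
```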
Configuration reference
| Field | Type | Required | Description |
|---|---|---|---|
| id | string | Yes | Unique identifier for this agent. |
| httpConfig | object | Yes | HTTP configuration (url, method) for calling your LLM service. |
| instructions | string | No | System prompt. Sent as the first message with role: "system". |
| tools | string[] | No | Tool IDs this agent can use. Each must match a tool defined in globalTools. |
| outputKey | string | No | Saves the agent's final content to session_state[outputKey]. Without this, the response is not available to other agents through session state. |
| activityConfig | object | No | Temporal activity settings: startToCloseTimeout, retryPolicy. |
| subAgents | AgentConfig[] | No | Nested agent configs for the agent handoff pattern. |
The httpConfig on an LlmAgent points to your LLM service. Tool endpoints are configured separately in globalTools. Each uses a different request/response contract. See Your HTTP Endpoints for both contracts.
Agent handoff (subAgents)
LlmAgent supports a subAgents field that enables dynamic routing. Unlike orchestrator agents (which reference sub-agents by ID), an LlmAgent's subAgents are full inline agent configurations. The parent agent uses the built-in transfer_to_agent tool to hand off control to a child based on the conversation context.
This is covered in detail in Multi-Agent Orchestration: Agent Handoff.
Orchestrator agents
SequentialAgent, ParallelAgent, and LoopAgent are coordination primitives. They have no httpConfig, make no HTTP calls, and do not communicate with any external service. They exist purely to control when their sub-agents run and what session state each sub-agent sees. All external communication happens through the LlmAgents they orchestrate.
SequentialAgent
A SequentialAgent runs sub-agents one after another in the order you define them, passing accumulated session state from each step to the next. This makes it the natural choice for pipelines where each step builds on previous results.
```typescript
import { GraphCompose } from '@graph-compose/client'
import {
  createLlmAgent,
  createSequentialAgent,
  createSubAgentReference,
  createHttpTool,
} from '@graph-compose/client/adk-helpers'

const graph = new GraphCompose({ token: 'your-token' })

graph
  .adk('claims_processor')
  .withWorkflow(builder =>
    builder
      .rootAgent('claims_pipeline')
      .agent(
        createSequentialAgent({
          id: 'claims_pipeline',
          subAgents: [
            createSubAgentReference('document_analyzer'),
            createSubAgentReference('fraud_detector'),
            createSubAgentReference('decision_agent'),
          ],
          outputKey: 'pipeline_result',
        }),
      )
      .agent(
        createLlmAgent({
          id: 'document_analyzer',
          httpConfig: {
            url: 'https://llm.example.com/chat',
            method: 'POST',
          },
          instructions:
            'Analyze the submitted claim documents and extract key details.',
          tools: ['read_document'],
          outputKey: 'document_analysis',
        }),
      )
      .agent(
        createLlmAgent({
          id: 'fraud_detector',
          httpConfig: {
            url: 'https://llm.example.com/chat',
            method: 'POST',
          },
          instructions:
            'Review the document analysis in session state for fraud indicators.',
          outputKey: 'fraud_analysis',
        }),
      )
      .agent(
        createLlmAgent({
          id: 'decision_agent',
          httpConfig: {
            url: 'https://llm.example.com/chat',
            method: 'POST',
          },
          instructions:
            'Make a claim decision based on the document and fraud analysis.',
          outputKey: 'claim_decision',
        }),
      )
      .httpTool(
        createHttpTool({
          id: 'read_document',
          httpConfig: {
            url: 'https://api.example.com/documents',
            method: 'GET',
          },
        }),
      )
      .build(),
  )
  .withInitialPrompt('Process insurance claim CLM-2024-001.')
  .end()

const result = await graph.execute()
```
How state accumulates
The key to understanding sequential agents is the state field. Each sub-agent is an LlmAgent that receives session state containing the outputKey values from all agents that completed before it.
When decision_agent (the third agent) runs, it receives:
What decision_agent receives (3rd in sequence)
```json
{
  "messages": [
    { "role": "system", "content": "Make a claim decision based on the document and fraud analysis." },
    { "role": "user", "content": "Process insurance claim CLM-2024-001." }
  ],
  "tools": [],
  "state": {
    "document_analysis": "Claim CLM-2024-001: Water damage to kitchen ceiling. Submitted photos show...",
    "fraud_analysis": "No fraud indicators detected. Claim details are consistent with..."
  }
}
```
document_analysis and fraud_analysis are present because those agents completed first and had outputKey configured. This is how agents in a sequence share data: each agent reads previous outputs from state and writes its own output via outputKey.
If any sub-agent returns escalate: true, the sequence stops and remaining agents are skipped. (Returning exitFlow: true only ends that agent's own turn; the sequence continues with the next agent.)
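The accumulation behavior can be modeled in a few lines. This is a sketch, not the worker's implementation; the run callbacks are hypothetical stand-ins for full LlmAgent executions, and escalation semantics follow the escalate field described earlier:

```typescript
// Sketch of SequentialAgent execution with state accumulation.
// Each run callback is a hypothetical stand-in for one LlmAgent execution.

interface SeqResult { content: string; escalate?: boolean }
interface SeqAgent {
  id: string
  outputKey?: string
  run: (state: Record<string, unknown>) => SeqResult
}

function runSequential(agents: SeqAgent[], state: Record<string, unknown>): void {
  for (const agent of agents) {
    const result = agent.run(state) // sees outputKey values from all prior agents
    if (agent.outputKey) state[agent.outputKey] = result.content
    if (result.escalate) return // escalation stops the sequence; later agents are skipped
  }
}

const claimState: Record<string, unknown> = {}
runSequential(
  [
    {
      id: 'document_analyzer',
      outputKey: 'document_analysis',
      run: () => ({ content: 'Water damage to kitchen ceiling.' }),
    },
    {
      id: 'fraud_detector',
      outputKey: 'fraud_analysis',
      run: s => ({ content: `Reviewed "${s.document_analysis}": no fraud indicators.` }),
    },
  ],
  claimState,
)
```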
Configuration reference
| Field | Type | Required | Description |
|---|---|---|---|
| id | string | Yes | Unique identifier. |
| subAgents | { agentId: string }[] | Yes | References to agents defined at the workflow level. Execution order matches array order. |
| outputKey | string | No | Saves the final sub-agent's text response to session state. |
ParallelAgent
A ParallelAgent runs all sub-agents concurrently. Like SequentialAgent, it has no httpConfig and makes no HTTP calls. It launches each sub-agent with the same session state snapshot and its own isolated conversation history. Agents in a parallel block cannot see each other's messages, tool calls, or intermediate results during execution.
This is the right choice when you have independent tasks that can run at the same time, like searching multiple data sources or generating content in different formats.
```typescript
import { GraphCompose } from '@graph-compose/client'
import {
  createLlmAgent,
  createParallelAgent,
  createSubAgentReference,
  createHttpTool,
} from '@graph-compose/client/adk-helpers'

const graph = new GraphCompose({ token: 'your-token' })

graph
  .adk('travel_planner')
  .withWorkflow(builder =>
    builder
      .rootAgent('parallel_research')
      .agent(
        createParallelAgent({
          id: 'parallel_research',
          subAgents: [
            createSubAgentReference('flight_agent'),
            createSubAgentReference('hotel_agent'),
            createSubAgentReference('activity_agent'),
          ],
        }),
      )
      .agent(
        createLlmAgent({
          id: 'flight_agent',
          httpConfig: {
            url: 'https://llm.example.com/chat',
            method: 'POST',
          },
          instructions: 'Search for flights matching the travel request.',
          tools: ['search_flights'],
          outputKey: 'flight_data',
        }),
      )
      .agent(
        createLlmAgent({
          id: 'hotel_agent',
          httpConfig: {
            url: 'https://llm.example.com/chat',
            method: 'POST',
          },
          instructions: 'Search for hotels at the destination.',
          tools: ['search_hotels'],
          outputKey: 'hotel_data',
        }),
      )
      .agent(
        createLlmAgent({
          id: 'activity_agent',
          httpConfig: {
            url: 'https://llm.example.com/chat',
            method: 'POST',
          },
          instructions: 'Find activities and attractions at the destination.',
          tools: ['search_activities'],
          outputKey: 'activity_data',
        }),
      )
      .httpTool(
        createHttpTool({
          id: 'search_flights',
          httpConfig: {
            url: 'https://api.example.com/flights',
            method: 'GET',
          },
        }),
      )
      .httpTool(
        createHttpTool({
          id: 'search_hotels',
          httpConfig: {
            url: 'https://api.example.com/hotels',
            method: 'GET',
          },
        }),
      )
      .httpTool(
        createHttpTool({
          id: 'search_activities',
          httpConfig: {
            url: 'https://api.example.com/activities',
            method: 'GET',
          },
        }),
      )
      .build(),
  )
  .withInitialPrompt('Plan a trip to Tokyo for next week.')
  .end()

const result = await graph.execute()
```
Isolation and results
Each parallel sub-agent receives the same initial state and the same user message, but with its own tools and instructions. Here is what flight_agent sees:
What flight_agent receives (one of three parallel agents)
```json
{
  "messages": [
    { "role": "system", "content": "Search for flights matching the travel request." },
    { "role": "user", "content": "Plan a trip to Tokyo for next week." }
  ],
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "search_flights",
        "description": "search_flights",
        "parameters": { "type": "object", "properties": {}, "required": [] }
      }
    }
  ],
  "state": {}
}
```
hotel_agent and activity_agent see requests of the same shape, differing only in their tools and instructions. Each agent's conversation then evolves independently from this shared starting point.
After all three agents complete, session state contains each agent's output:
Session state after parallel execution
```json
{
  "flight_data": "Found 3 direct flights to Tokyo NRT...",
  "hotel_data": "Top-rated hotels in Shinjuku district...",
  "activity_data": "Recommended: teamLab Borderless, Tsukiji Market tour..."
}
```
A common pattern is to follow a ParallelAgent with a synthesizer LlmAgent that reads all parallel outputs from session state and combines them into a cohesive result. See Multi-Agent Orchestration: Parallel Execution for a complete example.
ParallelAgent does not support outputKey on itself. Each sub-agent should define its own outputKey to save results to session state.
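The fan-out/fan-in behavior can be modeled with Promise.all. This is a sketch under stated assumptions, not the worker's implementation; the run callbacks are hypothetical stand-ins for full LlmAgent executions:

```typescript
// Sketch of parallel fan-out and state merge: every sub-agent starts from
// the same state snapshot with an isolated history, and results are merged
// into session state under each agent's outputKey afterwards.
// The run callbacks are hypothetical stand-ins for LlmAgent executions.

interface ParallelSubAgent {
  id: string
  outputKey: string
  run: (snapshot: Record<string, unknown>) => Promise<string>
}

async function runParallel(
  agents: ParallelSubAgent[],
  state: Record<string, unknown>,
): Promise<void> {
  const snapshot = { ...state } // every agent sees the same starting state
  const results = await Promise.all(agents.map(a => a.run(snapshot)))
  agents.forEach((a, i) => { state[a.outputKey] = results[i] }) // fan-in via outputKey
}

const tripState: Record<string, unknown> = {}
await runParallel(
  [
    { id: 'flight_agent', outputKey: 'flight_data', run: async () => 'Found 3 direct flights.' },
    { id: 'hotel_agent', outputKey: 'hotel_data', run: async () => 'Top hotels in Shinjuku.' },
  ],
  tripState,
)
```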
Configuration reference
| Field | Type | Required | Description |
|---|---|---|---|
| id | string | Yes | Unique identifier. |
| subAgents | { agentId: string }[] | Yes | References to agents that execute concurrently. |
LoopAgent
A LoopAgent repeats its sub-agents until a termination condition is met. Like the other orchestrators, it has no httpConfig and makes no HTTP calls. On each iteration, sub-agents receive the full session state from previous iterations, enabling patterns like iterative refinement where a writer and reviewer collaborate until quality criteria are satisfied.
```typescript
import { GraphCompose } from '@graph-compose/client'
import {
  createLlmAgent,
  createLoopAgent,
  createSubAgentReference,
} from '@graph-compose/client/adk-helpers'

const graph = new GraphCompose({ token: 'your-token' })

graph
  .adk('content_pipeline')
  .withWorkflow(builder =>
    builder
      .rootAgent('quality_loop')
      .agent(
        createLoopAgent({
          id: 'quality_loop',
          subAgents: [
            createSubAgentReference('writer_agent'),
            createSubAgentReference('reviewer_agent'),
          ],
          maxAgentLoopIterations: 5,
          loopExitCondition:
            "session_state.get('quality_status') == 'approved'",
          outputKey: 'final_content',
        }),
      )
      .agent(
        createLlmAgent({
          id: 'writer_agent',
          httpConfig: {
            url: 'https://llm.example.com/chat',
            method: 'POST',
          },
          instructions:
            'Write or revise content based on reviewer feedback in session state.',
          outputKey: 'draft_content',
        }),
      )
      .agent(
        createLlmAgent({
          id: 'reviewer_agent',
          httpConfig: {
            url: 'https://llm.example.com/chat',
            method: 'POST',
          },
          instructions:
            'Review the draft in session state. Set quality_status to approved if it meets standards, or provide feedback for revision.',
          outputKey: 'quality_status',
        }),
      )
      .build(),
  )
  .withInitialPrompt('Write a blog post about AI orchestration.')
  .end()

const result = await graph.execute()
```
How iteration state evolves
On each iteration, sub-agents receive the accumulated session state from all previous iterations. Here is what writer_agent sees on iteration 3, after two rounds of writing and reviewing:
What writer_agent receives (iteration 3)
```json
{
  "messages": [
    { "role": "system", "content": "Write or revise content based on reviewer feedback in session state." },
    { "role": "user", "content": "Write a blog post about AI orchestration." }
  ],
  "tools": [],
  "state": {
    "draft_content": "AI orchestration platforms coordinate multiple...",
    "quality_status": "Needs revision: the section on error handling lacks detail.",
    "current_agent_loop_iteration": 2
  }
}
```
The writer reads the reviewer's feedback from quality_status in state and revises accordingly. The reviewer then evaluates the new draft. This cycle continues until the reviewer sets quality_status to "approved", which satisfies the loopExitCondition.
Termination conditions
The loop stops when any of these conditions is met:
- Exit condition satisfied. The loopExitCondition Python expression evaluates to True against the current session state.
- Max iterations reached. The loop has run maxAgentLoopIterations times. The parent agent continues normally.
- Escalate signal. A sub-agent returns escalate: true, which stops the loop AND propagates to the parent container. This is different from max-iterations completion, where the parent continues.
The ADK worker tracks two state keys during loop execution:
- current_agent_loop_iteration: the current iteration number (0-based).
- loop_exit_reason: set when the loop ends. One of "exit_condition", "max_agent_loop_iterations", or "escalate".
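Putting the termination rules and state keys together, the loop's control flow can be sketched as follows. This is a simplified model, not the worker's implementation: runIteration is a hypothetical stand-in for one pass over the sub-agents, and the Python loopExitCondition is modeled as a plain predicate. Whether the real worker checks the exit condition before or after each pass is an assumption here (checked after):

```typescript
// Sketch of LoopAgent termination handling. runIteration stands in for one
// pass over the sub-agents; exitCondition models the loopExitCondition
// Python expression as a predicate over session state.

type LoopState = Record<string, unknown>

function runLoop(
  runIteration: (state: LoopState) => { escalate?: boolean },
  exitCondition: (state: LoopState) => boolean,
  maxIterations: number,
  state: LoopState,
): string {
  for (let i = 0; i < maxIterations; i++) {
    state.current_agent_loop_iteration = i // 0-based, maintained by the worker
    if (runIteration(state).escalate) {
      state.loop_exit_reason = 'escalate' // propagates to the parent container
      return 'escalate'
    }
    if (exitCondition(state)) {
      state.loop_exit_reason = 'exit_condition'
      return 'exit_condition'
    }
  }
  state.loop_exit_reason = 'max_agent_loop_iterations' // parent continues normally
  return 'max_agent_loop_iterations'
}

// Writer/reviewer demo: the reviewer approves on the second iteration.
const loopState: LoopState = {}
let round = 0
const reason = runLoop(
  s => { s.quality_status = ++round >= 2 ? 'approved' : 'needs revision'; return {} },
  s => s.quality_status === 'approved',
  5,
  loopState,
)
```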
Configuration reference
| Field | Type | Required | Description |
|---|---|---|---|
| id | string | Yes | Unique identifier. |
| subAgents | { agentId: string }[] | Yes | Agents executed sequentially within each iteration. |
| maxAgentLoopIterations | number | Yes | Maximum iterations. Must be at least 1. |
| loopExitCondition | string | No | Python expression evaluated against session state. The loop terminates when it evaluates to True. |
| outputKey | string | No | Saves the final response to session state. |
Session state and outputKey
Session state is how agents share data. It is a key-value object that persists across the entire ADK workflow and is passed to every LlmAgent via the state field in the HTTP request.
How outputKey works
When an agent has outputKey configured, its final content value is automatically saved to session_state[outputKey] after it completes. Only the text content is stored. Tool calls, function responses, and intermediate reasoning are not saved.
Session state after two agents complete
```json
{
  "document_analysis": "Claim CLM-2024-001: Water damage to kitchen...",
  "fraud_analysis": "No fraud indicators detected.",
  "_user_message_count": 1,
  "orchestration_cycle_count": 3
}
```
Without outputKey
If an agent does not have outputKey, its final response is still part of its own conversation history, but it is not written to session state. This means other agents cannot access that response through the state field. Use this for agents whose output is self-contained and does not need to be consumed by downstream agents.
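The save rule amounts to a single check after each agent completes. A minimal sketch (saveOutput is a hypothetical name for the worker-side step):

```typescript
// Sketch of the worker's outputKey save rule: only the final text content
// is written to session state, and only when outputKey is configured.

function saveOutput(
  state: Record<string, unknown>,
  outputKey: string | undefined,
  finalContent: string | null,
): void {
  if (outputKey && finalContent != null) {
    state[outputKey] = finalContent // tool calls and intermediate turns are never saved
  }
}

const exampleState: Record<string, unknown> = {}
saveOutput(exampleState, 'fraud_analysis', 'No fraud indicators detected.')
saveOutput(exampleState, undefined, 'Stays in conversation history only.')
```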
System-managed state keys
The ADK worker automatically maintains several state keys:
| Key | Type | Description |
|---|---|---|
| _user_message_count | number | Number of user messages received. |
| orchestration_cycle_count | number | Number of orchestration cycles completed. |
| current_agent_loop_iteration | number | Current loop iteration (0-based). Only present inside a LoopAgent. |
| loop_exit_reason | string | Why the loop terminated. Only set after a LoopAgent completes. |
Use descriptive outputKey names like "research_findings" or "fraud_analysis". Avoid generic names like "data" or "result" that become ambiguous in multi-agent workflows.
Workflow-level settings
These settings apply to the entire ADK workflow, not individual agents.
| Setting | SDK Method | REST Field | Description |
|---|---|---|---|
| Root agent | .rootAgent('id') | rootAgentId | The agent that serves as the entry point. Required. |
| Max orchestration cycles | .withMaxCycles(n) | maxOrchestrationCycles | Safety limit for the overall workflow. Counts workflow-level turns, not individual agent executions within a turn. |
| Initial state | .withState({...}) | state | Seed data accessible to all agents via session state from the start. |
| Initial prompt | .withInitialPrompt('...') | initialUserInput | The first user message that starts the workflow. |
```typescript
graph
  .adk('claims_processor')
  .withWorkflow(builder =>
    builder
      .rootAgent('orchestrator')
      .agent(/* agents */)
      .httpTool(/* tools */)
      .withMaxCycles(30)
      .build(),
  )
  .withState({
    policy_id: 'POL-123456',
    customer_id: 'CUST-789',
  })
  .withInitialPrompt('Process insurance claim for policy POL-123456')
  .end()
```
When you provide initial state via .withState(), those values are available to the first agent from the start. The agent receives them in the state field of its first request:
First agent sees seed data in state
```json
{
  "messages": [...],
  "tools": [...],
  "state": {
    "policy_id": "POL-123456",
    "customer_id": "CUST-789"
  }
}
```