ADK Agents
ADK nodes let you run multi-agent AI workflows with tool calling, Human-in-the-Loop confirmation, and multi-turn conversations, all orchestrated by Temporal. You bring your own LLM service and tool endpoints. Graph Compose handles the orchestration.
How ADK agents work
An ADK node is a workflow node with type: "adk". It contains a complete agent workflow definition: a list of agents, a list of tools, and a root agent that serves as the entry point.
When you execute a workflow containing an ADK node, Graph Compose starts a Temporal child workflow that the ADK worker picks up. The ADK worker calls your LLM service over HTTP, executes any tools the LLM requests by calling your tool endpoints, feeds the results back, and repeats until the LLM signals completion. You can monitor the workflow's state and send signals while it runs.
The orchestration loop for a single agent turn:
- The ADK worker sends an HTTP request to your LLM service with the conversation history, available tools, and session state.
- Your LLM service returns a response: text content, tool calls, or a completion signal.
- If the response includes toolCalls, the ADK worker calls each tool's HTTP endpoint and adds the results to the conversation.
- The worker sends the updated conversation back to your LLM service.
- This repeats until your service returns exitFlow: true, signaling that the task is done. If the response includes a hitlRequest, the workflow pauses for human approval instead.
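The decision made at each iteration of the loop can be sketched as a small pure function. This is an illustration only, not the Graph Compose worker's actual code; the response fields match the LLM service contract below, but the function and step names are invented for this sketch.

```typescript
// Shape of a single LLM service response, per the contract described below.
interface LlmResponse {
  content?: string | null
  toolCalls?: { function_name: string; function_args: Record<string, unknown> }[] | null
  exitFlow?: boolean
  hitlRequest?: Record<string, unknown> | null
}

// Illustrative sketch of how the worker picks its next step after each
// LLM response. The real worker runs this loop inside a Temporal child workflow.
function nextStep(res: LlmResponse): 'pause-for-human' | 'finish' | 'run-tools' | 'ask-llm-again' {
  if (res.hitlRequest) return 'pause-for-human' // wait for a human approval signal
  if (res.exitFlow) return 'finish' // the agent signals its task is done
  if (res.toolCalls?.length) return 'run-tools' // call each tool endpoint, append results
  return 'ask-llm-again' // plain text turn; continue the conversation
}
```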
What you bring
Graph Compose provides the orchestration layer. You provide and host two types of HTTP endpoint, each with its own request/response contract:
Your LLM service. Configured via httpConfig on each LlmAgent. The ADK worker sends conversation history, available tools, and session state. Your service returns text responses, tool call requests, or control signals. Use any model (OpenAI, Anthropic, open-source) and any framework.
Your tool endpoints. Configured via httpConfig on each HTTP tool in the globalTools array. When your LLM service returns toolCalls, the ADK worker calls your tool endpoint with the tool name, the LLM's arguments, and a correlation ID. Your endpoint returns a standard JSON response, which is fed back to the agent as a function_response.
Both contracts are documented in detail on Your HTTP Endpoints, including request/response shapes, headers, and examples.
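As a sketch of the tool side of the round trip, a tool endpoint is an ordinary HTTP handler: it receives the tool name, the LLM's arguments, and a correlation ID, and returns JSON that the worker feeds back to the agent as a function_response. The field names and response shape below are assumptions for illustration; the authoritative contract is on Your HTTP Endpoints.

```typescript
// Hypothetical shape of a tool-call request from the ADK worker.
// Field names here are assumptions; see Your HTTP Endpoints for the
// real request/response contract.
interface ToolCallRequest {
  toolName: string
  args: Record<string, unknown>
  correlationId: string
}

// A minimal tool handler: dispatch on the tool name and return a
// JSON-serializable result for the worker to feed back to the agent.
function handleToolCall(req: ToolCallRequest): { correlationId: string; result: unknown } {
  switch (req.toolName) {
    case 'get_weather':
      // A real endpoint would call a weather API here; this stub echoes the city.
      return { correlationId: req.correlationId, result: { city: req.args.city, tempC: 18 } }
    default:
      return { correlationId: req.correlationId, result: { error: `unknown tool: ${req.toolName}` } }
  }
}
```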
The LLM service contract
Your LLM service receives messages (conversation history), tools (available functions), and state (session state). It returns a JSON object with some combination of content (text), toolCalls (tool requests), and control signals like exitFlow and hitlRequest.
| Field | Type | Description |
|---|---|---|
| content | string \| null | Text response. Use null for silent agents that only make tool calls. |
| toolCalls | array \| null | Tool calls to execute. Each has function_name and function_args. |
| exitFlow | boolean | Set to true to end the agent's task. |
| escalate | boolean | Set to true to stop the immediate parent container (loop, sequence). |
| hitlRequest | object \| null | Pause and request Human-in-the-Loop confirmation before proceeding. |
See Your HTTP Endpoints: LLM service contract for full request/response examples.
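For instance, three typical response bodies your LLM service might return, using the fields from the table above. The values are illustrative, and the inner shape of hitlRequest is an assumption here; see Your HTTP Endpoints for the full payloads.

```typescript
// 1. A tool-calling turn: no text, one tool call for the worker to execute.
const toolTurn = {
  content: null,
  toolCalls: [{ function_name: 'get_weather', function_args: { city: 'San Francisco' } }],
}

// 2. A final text answer: the agent's task is done.
const finalTurn = {
  content: 'It is 18°C and sunny in San Francisco.',
  exitFlow: true,
}

// 3. A Human-in-the-Loop pause before a high-stakes action.
// The shape of hitlRequest is hypothetical; see Your HTTP Endpoints.
const hitlTurn = {
  content: 'I am about to issue a refund. Please confirm.',
  hitlRequest: { action: 'issue_refund', amount: 42 },
}
```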
What you can build
ADK workflows support four agent types that you compose into multi-agent systems:
| Type | Purpose | Details |
|---|---|---|
| LlmAgent | Calls your LLM service. Can use tools. The leaf node of any agent tree. | Agent Types |
| SequentialAgent | Executes sub-agents one after another, passing data through session state. | Agent Types |
| ParallelAgent | Executes sub-agents concurrently, each with isolated conversation history. | Agent Types |
| LoopAgent | Repeats sub-agents until a condition is met or max iterations reached. | Agent Types |
These compose into patterns like:
- Sequential pipelines. Chain agents where each one builds on the previous agent's output. Multi-Agent Orchestration
- Parallel fan-out. Run multiple agents concurrently, then synthesize results. Multi-Agent Orchestration
- Iterative refinement. Loop a writer and reviewer agent until quality criteria are met. Multi-Agent Orchestration
- Dynamic routing. A router agent hands off to specialist agents based on the user's request. Multi-Agent Orchestration
- Human-in-the-Loop. Pause for human approval before executing high-stakes actions. Monitoring and Signals
Agents communicate through tools (HTTP endpoints or delegation to other agents) and session state (data passed between agents via outputKey). Once running, you can query workflow state and send signals to monitor progress or respond to confirmation requests.
Basic example
This example defines a single LLM agent with one HTTP tool. The agent answers weather questions by calling the get_weather tool endpoint.
```typescript
import { GraphCompose } from '@graph-compose/client'
import {
  createLlmAgent,
  createHttpTool,
} from '@graph-compose/client/adk-helpers'

const graph = new GraphCompose({ token: 'your-token' })

graph
  .adk('weather_assistant')
  .withWorkflow(builder =>
    builder
      .rootAgent('weather_agent')
      .agent(
        createLlmAgent({
          id: 'weather_agent',
          httpConfig: {
            url: 'https://llm.example.com/chat',
            method: 'POST',
          },
          instructions: 'You answer weather questions using the get_weather tool.',
          tools: ['get_weather'],
          outputKey: 'assistant_response',
        }),
      )
      .httpTool(
        createHttpTool({
          id: 'get_weather',
          httpConfig: {
            url: 'https://api.example.com/weather',
            method: 'GET',
          },
        }),
      )
      .build(),
  )
  .withInitialPrompt('What is the weather in San Francisco?')
  .end()

const result = await graph.execute()
```
In this example, https://llm.example.com/chat is your LLM service and https://api.example.com/weather is your tool endpoint. You own and host both. For workflows with multiple agents working together, see Multi-Agent Orchestration.