ADK Agents

ADK nodes let you run multi-agent AI workflows with tool calling, Human-in-the-Loop confirmation, and multi-turn conversations, all orchestrated by Temporal. You bring your own LLM service and tool endpoints. Graph Compose handles the orchestration.

How ADK agents work

An ADK node is a workflow node with type: "adk". It contains a complete agent workflow definition: a list of agents, a list of tools, and a root agent that serves as the entry point.

When you execute a workflow containing an ADK node, Graph Compose starts a Temporal child workflow that the ADK worker picks up. The ADK worker calls your LLM service over HTTP, executes any tools the LLM requests by calling your tool endpoints, feeds the results back, and repeats until the LLM signals completion. You can monitor the workflow's state and send signals while it runs.

```mermaid
flowchart LR
    A[Graph Compose API] --> B[Temporal]
    B --> C[ADK Worker]
    C -->|HTTP request| D[Your LLM Service]
    D -->|tool_calls| C
    C -->|executes| E[Your Tool Endpoints]
    E -->|result| C
    C -->|function_response| D
    D -->|exitFlow: true| C
    C --> F[Result]
    style A fill:#4F46E5,stroke:#4338CA,color:#fff,rx:10
    style B fill:#4F46E5,stroke:#4338CA,color:#fff,rx:10
    style C fill:#8B5CF6,stroke:#7C3AED,color:#fff,rx:10
    style D fill:#10B981,stroke:#059669,color:#fff,rx:10
    style E fill:#10B981,stroke:#059669,color:#fff,rx:10
    style F fill:#4F46E5,stroke:#4338CA,color:#fff,rx:10
```

The orchestration loop for a single agent turn:

  1. The ADK worker sends an HTTP request to your LLM service with the conversation history, available tools, and session state.
  2. Your LLM service returns a response: text content, tool calls, or a completion signal.
  3. If the response includes toolCalls, the ADK worker calls each tool's HTTP endpoint and adds the results to the conversation.
  4. The worker sends the updated conversation back to your LLM service.
  5. This repeats until your service returns exitFlow: true, signaling that the task is done. If the response includes a hitlRequest, the workflow pauses for human approval instead.
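The loop above can be sketched in TypeScript. This is a minimal illustration rather than the worker's actual implementation: `callLlm` and the `tools` map stand in for the HTTP calls to your LLM service and tool endpoints, and the message and response shapes are simplified from the real contracts.

```typescript
// Simplified stand-ins for the LLM service response and conversation messages.
type LlmResponse = {
  content?: string | null
  toolCalls?: { function_name: string; function_args: Record<string, unknown> }[] | null
  exitFlow?: boolean
}

type Message = { role: 'user' | 'assistant' | 'tool'; content: string }

// One agent turn: ask the LLM, run any requested tools, feed results back,
// and repeat until the service sets exitFlow (assumed to happen eventually).
async function runAgentTurn(
  callLlm: (messages: Message[]) => Promise<LlmResponse>,
  tools: Record<string, (args: Record<string, unknown>) => Promise<string>>,
  messages: Message[],
): Promise<Message[]> {
  for (;;) {
    const response = await callLlm(messages) // steps 1-2: ask your LLM service
    if (response.content) {
      messages.push({ role: 'assistant', content: response.content })
    }
    if (response.toolCalls?.length) {
      // step 3: execute each requested tool and append the result
      for (const call of response.toolCalls) {
        const result = await tools[call.function_name](call.function_args)
        messages.push({ role: 'tool', content: result })
      }
      continue // step 4: send the updated conversation back
    }
    if (response.exitFlow) return messages // step 5: task complete
  }
}
```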

What you bring

Graph Compose provides the orchestration layer. You provide and host two types of HTTP endpoint, each with its own request/response contract:

Your LLM service. Configured via httpConfig on each LlmAgent. The ADK worker sends conversation history, available tools, and session state. Your service returns text responses, tool call requests, or control signals. Use any model (OpenAI, Anthropic, open-source) and any framework.

Your tool endpoints. Configured via httpConfig on each HTTP tool in the globalTools array. When your LLM service returns toolCalls, the ADK worker calls your tool endpoint with the tool name, the LLM's arguments, and a correlation ID. Your endpoint returns a standard JSON response, which is fed back to the agent as a function_response.

Both contracts are documented in detail on Your HTTP Endpoints, including request/response shapes, headers, and examples.
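As an illustration, a tool endpoint is essentially an HTTP handler that matches on the tool name and returns JSON. The field names below (`toolName`, `args`, `correlationId`) are placeholders for this sketch; the real request and response shapes are the ones documented on Your HTTP Endpoints.

```typescript
// Hypothetical request/response shapes for a tool endpoint handler.
type ToolRequest = {
  toolName: string
  args: Record<string, unknown>
  correlationId: string
}

type ToolResponse = { correlationId: string; result: unknown }

function handleToolRequest(req: ToolRequest): ToolResponse {
  switch (req.toolName) {
    case 'get_weather':
      // A real handler would call a weather API with req.args here.
      return {
        correlationId: req.correlationId,
        result: { city: req.args.city, forecast: 'sunny' },
      }
    default:
      return {
        correlationId: req.correlationId,
        result: { error: `unknown tool: ${req.toolName}` },
      }
  }
}
```

Echoing the correlation ID back lets the worker match the response to the tool call it issued.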

The LLM service contract

Your LLM service receives messages (conversation history), tools (available functions), and state (session state). It returns a JSON object with some combination of content (text), toolCalls (tool requests), and control signals like exitFlow and hitlRequest.

| Field | Type | Description |
| --- | --- | --- |
| `content` | `string \| null` | Text response. Use `null` for silent agents that only make tool calls. |
| `toolCalls` | `array \| null` | Tool calls to execute. Each has `function_name` and `function_args`. |
| `exitFlow` | `boolean` | Set to `true` to end the agent's task. |
| `escalate` | `boolean` | Set to `true` to stop the immediate parent container (loop, sequence). |
| `hitlRequest` | `object \| null` | Pause and request Human-in-the-Loop confirmation before proceeding. |

See Your HTTP Endpoints: LLM service contract for full request/response examples.
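For instance, if your LLM service wraps an existing model API, a thin adapter can map the model's native output onto this contract. A sketch, where the `ModelOutput` shape is a hypothetical stand-in for whatever your model returns:

```typescript
// Hypothetical model output; substitute your provider's actual response type.
type ModelOutput = {
  text?: string
  toolUse?: { name: string; input: Record<string, unknown> }[]
  done: boolean
}

// Response contract from the table above.
type AdkLlmResponse = {
  content: string | null
  toolCalls: { function_name: string; function_args: Record<string, unknown> }[] | null
  exitFlow: boolean
  escalate: boolean
  hitlRequest: object | null
}

function toAdkResponse(out: ModelOutput): AdkLlmResponse {
  return {
    content: out.text ?? null,
    toolCalls:
      out.toolUse?.map((t) => ({ function_name: t.name, function_args: t.input })) ?? null,
    // Only exit when the model is done and has no pending tool calls.
    exitFlow: out.done && !out.toolUse?.length,
    escalate: false,
    hitlRequest: null,
  }
}
```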

What you can build

ADK workflows support four agent types that you compose into multi-agent systems:

| Type | Purpose | Details |
| --- | --- | --- |
| `LlmAgent` | Calls your LLM service. Can use tools. The leaf node of any agent tree. | Agent Types |
| `SequentialAgent` | Executes sub-agents one after another, passing data through session state. | Agent Types |
| `ParallelAgent` | Executes sub-agents concurrently, each with isolated conversation history. | Agent Types |
| `LoopAgent` | Repeats sub-agents until a condition is met or max iterations reached. | Agent Types |

These compose into patterns like sequential pipelines (research, then write, then review), parallel fan-out with a final summarizer, and generate-critique loops that repeat until the output passes.

Agents communicate through tools (HTTP endpoints or delegation to other agents) and session state (data passed between agents via outputKey). Once running, you can query workflow state and send signals to monitor progress or respond to confirmation requests.
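As a concrete illustration, a sequential research-then-write pipeline could be expressed as a plain data structure like the one below. The `Agent` type and container shape here are assumptions made for the sketch; in real code you would build the tree with the client's helpers, as in the basic example that follows.

```typescript
// Hypothetical agent-tree shape: a SequentialAgent running two LlmAgents,
// where the first writes its output to session state under outputKey and
// the second reads it back through its instructions.
type Agent =
  | { type: 'LlmAgent'; id: string; instructions: string; tools?: string[]; outputKey?: string }
  | { type: 'SequentialAgent'; id: string; subAgents: Agent[] }

const pipeline: Agent = {
  type: 'SequentialAgent',
  id: 'research_pipeline',
  subAgents: [
    {
      type: 'LlmAgent',
      id: 'researcher',
      instructions: 'Gather facts about the topic using web_search.',
      tools: ['web_search'],
      outputKey: 'findings',
    },
    {
      type: 'LlmAgent',
      id: 'writer',
      instructions: 'Write a short summary from {findings}.',
      outputKey: 'summary',
    },
  ],
}
```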

Basic example

This example defines a single LLM agent with one HTTP tool. The agent answers weather questions by calling the get_weather tool endpoint.

```typescript
import { GraphCompose } from '@graph-compose/client'
import {
  createLlmAgent,
  createHttpTool,
} from '@graph-compose/client/adk-helpers'

const graph = new GraphCompose({ token: 'your-token' })

graph
  .adk('weather_assistant')
  .withWorkflow(builder =>
    builder
      .rootAgent('weather_agent')
      .agent(
        createLlmAgent({
          id: 'weather_agent',
          httpConfig: {
            url: 'https://llm.example.com/chat',
            method: 'POST',
          },
          instructions: 'You answer weather questions using the get_weather tool.',
          tools: ['get_weather'],
          outputKey: 'assistant_response',
        }),
      )
      .httpTool(
        createHttpTool({
          id: 'get_weather',
          httpConfig: {
            url: 'https://api.example.com/weather',
            method: 'GET',
          },
        }),
      )
      .build(),
  )
  .withInitialPrompt('What is the weather in San Francisco?')
  .end()

const result = await graph.execute()
```

In this example, https://llm.example.com/chat is your LLM service and https://api.example.com/weather is your tool endpoint. You own and host both. For workflows with multiple agents working together, see Multi-Agent Orchestration.

Next steps