Your HTTP Endpoints
ADK workflows call two types of HTTP endpoint that you own and host: your LLM service and your tool endpoints. Each has a different request/response contract. This page documents both.
Two endpoint types
| | LLM service | Tool endpoint |
|---|---|---|
| Configured on | httpConfig on each LlmAgent | httpConfig on each HTTP tool in globalTools |
| Called when | Every agent turn (the core orchestration loop) | The agent returns toolCalls requesting a specific tool |
| Receives | Conversation history, available tools, session state | Tool name, LLM's function arguments, correlation ID |
| Returns | Text, tool call requests, control signals (exitFlow, escalate, hitlRequest) | Any JSON (or plain text) that makes sense for the tool |
The ADK worker sits between your endpoints. It calls your LLM service, interprets the response, executes any requested tools by calling your tool endpoints, feeds the tool results back into the conversation, and repeats until your LLM service signals completion.
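That loop can be sketched roughly as follows. This is an illustrative, simplified model only: the names (`runAgentLoop`, `callLlm`, `callTool`) are not part of the ADK API, and the sketch treats any response without tool calls as final, whereas the real worker also honors exitFlow, escalate, and hitlRequest.

```typescript
// Simplified sketch of the ADK worker's orchestration loop.
// All names here are hypothetical, not part of the ADK API.
type Message = { role: string; content: unknown }

interface LlmReply {
  content?: string | null
  toolCalls?: { function_name: string; function_args: Record<string, unknown> }[] | null
  exitFlow?: boolean
}

function runAgentLoop(
  messages: Message[],
  callLlm: (messages: Message[]) => LlmReply,
  callTool: (name: string, args: Record<string, unknown>) => unknown,
): string | null {
  for (;;) {
    const reply = callLlm(messages)
    if (reply.toolCalls?.length) {
      // Execute each requested tool, append its result as a tool message,
      // then ask the LLM again with the results in the conversation.
      for (const call of reply.toolCalls) {
        const output = callTool(call.function_name, call.function_args)
        messages.push({
          role: 'tool',
          content: [{ function_response: { name: call.function_name, response: { output } } }],
        })
      }
      continue
    }
    // No tool calls: treat the response as final for this sketch.
    return reply.content ?? null
  }
}
```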
LLM service contract
Your LLM service is the core of every LlmAgent. The ADK worker sends it a POST request on every agent turn. Use any model (OpenAI, Anthropic, open-source) and any framework. The worker expects a specific request/response shape.
The request body is defined by AdkRequestBodySchema and the response by HttpResponseOutputSchema, both from @graph-compose/core. See Package types for all available schemas.
What your LLM service receives
The request body contains three fields:
Request body sent to your LLM service
```json
{
  "messages": [
    { "role": "system", "content": "You are a travel booking assistant." },
    { "role": "user", "content": "Find me a flight from SFO to JFK next Friday." }
  ],
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "search_flights",
        "description": "Search for available flights",
        "parameters": {
          "type": "object",
          "properties": {
            "origin": { "type": "string" },
            "destination": { "type": "string" },
            "date": { "type": "string" }
          },
          "required": ["origin", "destination", "date"]
        }
      }
    }
  ],
  "state": {
    "user_id": "user-123"
  }
}
```
| Field | Type | Description |
|---|---|---|
| messages | array | Conversation history in OpenAI chat format. Includes system (from instructions), user, assistant, and tool messages. Grows with each turn. |
| tools | array | Tools available to this agent, in OpenAI function calling format. Each entry has type: "function" and a function object with name, description, and parameters. Only tools listed in the agent's tools field appear here. |
| state | object | Current session state. Contains seed data from .withState(), outputKey values from completed agents, and system-managed keys like _user_message_count. |
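If you are not using the package types, the shape described above corresponds to an interface like the following. This is a hand-written sketch for illustration; prefer the AdkRequestBody type exported by @graph-compose/core.

```typescript
// Hand-written sketch of the LLM service request body described above.
// Prefer the AdkRequestBody type exported by @graph-compose/core.
interface ChatMessage {
  role: 'system' | 'user' | 'assistant' | 'tool'
  content: unknown
}

interface ToolDefinition {
  type: 'function'
  function: { name: string; description: string; parameters: Record<string, unknown> }
}

interface LlmServiceRequest {
  messages: ChatMessage[]
  tools: ToolDefinition[]
  state: Record<string, unknown>
}
```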
What your LLM service returns
Your service returns a JSON object telling the worker what to do next. A typical response does one of three things: replies with text, requests tool calls, or signals completion. For example, a tool call request:
```json
{
  "toolCalls": [
    {
      "function_name": "search_flights",
      "function_args": {
        "origin": "SFO",
        "destination": "JFK",
        "date": "2026-03-20"
      }
    }
  ]
}
```
| Field | Type | Description |
|---|---|---|
| content | string \| null | Text response. Use null for silent agents that only make tool calls. If the agent has an outputKey, this value is saved to session state when the agent finishes. |
| toolCalls | array \| null | Tool calls for the worker to execute. Each has function_name (matching a tool ID) and function_args (passed to the tool endpoint). |
| exitFlow | boolean | Set to true when the agent's task is complete. The worker stops the agent's orchestration loop. |
| escalate | boolean | Set to true to stop the parent container (LoopAgent or SequentialAgent) as well. |
| hitlRequest | object \| null | Pause the workflow for Human-in-the-Loop approval. Contains prompt (display text) and actionDetails (structured metadata). |
Your service does not need to return all fields. Return only the ones relevant to each response. A tool call can omit content and exitFlow. A completion response can omit toolCalls.
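For instance, a completion response and a HITL pause could look like the objects below. The inline LlmServiceResponse type is a hand-written sketch following the field table above, not the exported HttpResponseOutput definition, and the prompt/actionDetails values are made up for illustration.

```typescript
// Example responses following the field table above. LlmServiceResponse is a
// hand-written sketch of the response shape, not the exported definition.
interface LlmServiceResponse {
  content?: string | null
  toolCalls?: { function_name: string; function_args: Record<string, unknown> }[] | null
  exitFlow?: boolean
  escalate?: boolean
  hitlRequest?: { prompt: string; actionDetails: Record<string, unknown> } | null
}

// Completion: final text, stop the agent's orchestration loop.
const completion: LlmServiceResponse = {
  content: 'Booked UA-201, departing SFO at 10:00 AM.',
  exitFlow: true,
}

// HITL pause: ask a human to approve before continuing.
const hitlPause: LlmServiceResponse = {
  hitlRequest: {
    prompt: 'Approve booking flight UA-201 for $300?',
    actionDetails: { flight: 'UA-201', price: 300 },
  },
}
```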
Tool endpoint contract
Tool endpoints handle the work your agents delegate via toolCalls. Each HTTP tool in your workflow's globalTools array points to one of your endpoints. When your LLM service returns a tool call, the ADK worker calls the corresponding tool endpoint and feeds the result back into the conversation.
The tool call structure is defined by PendingToolCallSchema from @graph-compose/core. Tool definitions use GlobalHttpToolDefinitionSchema and GlobalAgentToolDefinitionSchema. See Package types for all available schemas.
What your tool endpoint receives
The ADK worker sends a POST request with a JSON body containing the tool name, the arguments from the agent's function_args, and a correlation ID:
Request body your tool endpoint receives
```json
{
  "tool_name": "search_flights",
  "tool_args": {
    "origin": "SFO",
    "destination": "JFK",
    "date": "2026-03-20"
  },
  "tool_call_id": "call_abc123"
}
```
| Field | Type | Description |
|---|---|---|
| tool_name | string | The tool's ID, matching the id in your globalTools definition. |
| tool_args | object | The arguments your LLM returned in function_args. Passed through as-is. |
| tool_call_id | string \| null | A correlation ID from the LLM's function call, used for tracking through the call/response cycle. |
The worker also sends context headers with every tool request:
| Header | Description |
|---|---|
| X-Temporal-Workflow-ID | Parent workflow ID. |
| X-Temporal-Activity-ID | Temporal activity ID for this execution. |
| X-Temporal-Attempt | Retry attempt number (starts at 1). |
| X-Tool-Name | The tool name being called. |
| X-Tool-Call-ID | Same as tool_call_id in the body. |
Any custom headers you define in the tool's httpConfig.headers are also included. This is where you add authentication for your tool endpoint, such as Authorization: Bearer {{ $secret('tool_api_key') }}.
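A minimal handler for the request body shown above might look like the sketch below. The function name, the dispatch-by-switch structure, and the flight data are all illustrative; wire the function into whatever HTTP framework you use.

```typescript
// Minimal sketch of a tool endpoint handler. handleToolRequest and the
// flight data are illustrative; plug this into any HTTP framework.
interface ToolRequestBody {
  tool_name: string
  tool_args: Record<string, unknown>
  tool_call_id: string | null
}

function handleToolRequest(body: ToolRequestBody): unknown {
  switch (body.tool_name) {
    case 'search_flights': {
      const { origin, destination } = body.tool_args as { origin: string; destination: string }
      // Replace with a real flight search; any JSON shape works as a response.
      return {
        flights: [{ airline: 'UA', flight: 'UA-201', price: 300, route: `${origin}-${destination}` }],
      }
    }
    default:
      // Unknown tool: surface a structured error the agent can read.
      return { error: `unknown tool: ${body.tool_name}` }
  }
}
```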
What your tool endpoint returns
There is no enforced schema for tool responses. Your tool endpoint returns a standard HTTP response with any JSON body. The ADK worker parses the response body as JSON (falling back to plain text if JSON parsing fails), wraps it internally using HttpToolActivityOutputSchema, and feeds the body back to the agent as a function_response.
Example tool response
```json
{
  "flights": [
    { "airline": "UA", "flight": "UA-201", "price": 300, "departure": "10:00 AM" },
    { "airline": "AA", "flight": "AA-1234", "price": 350, "departure": "2:00 PM" }
  ]
}
```
Return any JSON structure that makes sense for your tool. The agent receives the response wrapped in a function_response message in the conversation:
How the agent sees the tool result
```json
{
  "role": "tool",
  "content": [
    {
      "function_response": {
        "name": "search_flights",
        "response": {
          "output": {
            "flights": [
              { "airline": "UA", "flight": "UA-201", "price": 300, "departure": "10:00 AM" },
              { "airline": "AA", "flight": "AA-1234", "price": 350, "departure": "2:00 PM" }
            ]
          }
        }
      }
    }
  ]
}
```
The output field contains whatever your endpoint returned. Only the response body is passed to the agent. HTTP status codes and headers are not included in the conversation.
HTTP status codes 400 and above are treated as errors. The ADK worker retries failed tool calls according to the tool's activityConfig.retryPolicy if one is configured.
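This gives your endpoint two ways to surface a failure: return a status of 400 or above so the worker retries per the retry policy, or return 200 with an error in the body so the agent can see the failure and react. The sketch below illustrates that choice; the function names and the string-based transience check are made up for the example, not part of any ADK API.

```typescript
// Sketch of the two failure modes for a tool endpoint. Status >= 400 makes
// the ADK worker retry per the tool's retryPolicy; a 200 with an error field
// lets the agent see the failure and decide what to do. Names are illustrative.
function searchFlightsResponse(run: () => unknown): { status: number; body: unknown } {
  try {
    return { status: 200, body: run() }
  } catch (err) {
    const message = err instanceof Error ? err.message : String(err)
    if (isTransient(message)) {
      // Transient failure (e.g. upstream timeout): let the worker retry.
      return { status: 503, body: { error: message } }
    }
    // Permanent failure: report it to the agent instead of burning retries.
    return { status: 200, body: { error: message } }
  }
}

// Illustrative transience check; real code would inspect error types.
function isTransient(message: string): boolean {
  return /timeout|unavailable/i.test(message)
}
```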
Package types
Both @graph-compose/core and @graph-compose/client export Zod schemas and inferred TypeScript types for every structure on this page. Use these for runtime validation or to type your endpoint handlers.
Import from @graph-compose/core
```typescript
import {
  // LLM service request body
  AdkRequestBodySchema, // Zod schema
  type AdkRequestBody, // TypeScript type
  // LLM service response
  HttpResponseOutputSchema, // Zod schema
  type HttpResponseOutput, // TypeScript type
  // Tool calls (in LLM response)
  PendingToolCallSchema, // Zod schema
  type PendingToolCall, // TypeScript type
  // HITL request (in LLM response)
  HitlRequestSchema, // Zod schema
  type HitlRequest, // TypeScript type
  // Message content (in messages array)
  type AdkMessageContent, // TypeScript type
  // Tool definitions (workflow config)
  GlobalHttpToolDefinitionSchema,
  type GlobalHttpToolDefinition,
  GlobalAgentToolDefinitionSchema,
  type GlobalAgentToolDefinition,
} from '@graph-compose/core'
```
Schema reference
| Schema | Type | Describes |
|---|---|---|
| AdkRequestBodySchema | AdkRequestBody | Request body sent to your LLM service. Fields: messages, tools, state. |
| HttpResponseOutputSchema | HttpResponseOutput | Response your LLM service returns. Fields: content, toolCalls, exitFlow, escalate, hitlRequest, statusCode, waitForUserInput. |
| PendingToolCallSchema | PendingToolCall | A single tool call in the toolCalls array. Fields: id, function_name, function_args. |
| HitlRequestSchema | HitlRequest | HITL confirmation request. Fields: prompt, actionDetails. |
| GlobalHttpToolDefinitionSchema | GlobalHttpToolDefinition | HTTP tool definition in globalTools. Fields: type, id, httpConfig, activityConfig, outputKey. |
| GlobalAgentToolDefinitionSchema | GlobalAgentToolDefinition | Agent tool definition in globalTools. Fields: type, id, targetAgentId, skipSummarization, activityConfig, outputKey. |
Typing your LLM service handler
Example: typed Express handler for your LLM service
```typescript
import type { AdkRequestBody, HttpResponseOutput } from '@graph-compose/core'
import type { Request, Response } from 'express'

app.post('/chat', (req: Request<{}, {}, AdkRequestBody>, res: Response<HttpResponseOutput>) => {
  const { messages, tools, state } = req.body

  // Your LLM logic here...

  res.json({
    content: 'The weather in San Francisco is 72F and sunny.',
    exitFlow: true,
  })
})
```
Validating at runtime
All schemas are Zod objects, so you can parse incoming data to validate it:
Runtime validation with Zod
```typescript
import { AdkRequestBodySchema, HttpResponseOutputSchema } from '@graph-compose/core'

// Validate an incoming LLM request
const parsed = AdkRequestBodySchema.parse(req.body)

// Validate your own response before sending
const response = HttpResponseOutputSchema.parse({
  content: 'Done.',
  exitFlow: true,
})
```