Your HTTP Endpoints

ADK workflows call two types of HTTP endpoint that you own and host: your LLM service and your tool endpoints. Each has a different request/response contract. This page documents both.

Two endpoint types

| | LLM service | Tool endpoint |
| --- | --- | --- |
| Configured on | httpConfig on each LlmAgent | httpConfig on each HTTP tool in globalTools |
| Called when | Every agent turn (the core orchestration loop) | The agent returns toolCalls requesting a specific tool |
| Receives | Conversation history, available tools, session state | Tool name, LLM's function arguments, correlation ID |
| Returns | Text, tool call requests, control signals (exitFlow, escalate, hitlRequest) | Any JSON (or plain text) that makes sense for the tool |

The ADK worker sits between your endpoints. It calls your LLM service, interprets the response, executes any requested tools by calling your tool endpoints, feeds the tool results back into the conversation, and repeats until your LLM service signals completion.
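That loop can be sketched roughly as follows. This is an illustration of the documented behavior, not the worker's actual internals (which add retries, timeouts, and state management); the function names and types here are hypothetical.

```typescript
// Rough sketch of the worker's orchestration loop (hypothetical names).
type SketchLlmResponse = {
  content?: string | null
  toolCalls?: { function_name: string; function_args: Record<string, unknown> }[] | null
  exitFlow?: boolean
}

async function runAgentLoop(
  callLlmService: (messages: unknown[]) => Promise<SketchLlmResponse>,
  callToolEndpoint: (name: string, args: Record<string, unknown>) => Promise<unknown>,
  messages: unknown[],
): Promise<string | null> {
  for (;;) {
    // 1. Call your LLM service with the current conversation.
    const response = await callLlmService(messages)
    if (response.content != null) {
      messages.push({ role: 'assistant', content: response.content })
    }
    if (response.toolCalls?.length) {
      // 2. Execute each requested tool by calling your tool endpoint.
      for (const call of response.toolCalls) {
        const result = await callToolEndpoint(call.function_name, call.function_args)
        // 3. Feed the tool result back into the conversation.
        messages.push({ role: 'tool', content: result })
      }
      continue // 4. Repeat with the updated history.
    }
    if (response.exitFlow) {
      return response.content ?? null // 5. Your service signaled completion.
    }
  }
}
```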

LLM service contract

Your LLM service is the core of every LlmAgent. The ADK worker sends it a POST request on every agent turn. Use any model (OpenAI, Anthropic, open-source) and any framework. The worker expects a specific request/response shape.

The request body is defined by AdkRequestBodySchema and the response by HttpResponseOutputSchema, both from @graph-compose/core. See Package types for all available schemas.

What your LLM service receives

The request body contains three fields:

Request body sent to your LLM service

```json
{
  "messages": [
    { "role": "system", "content": "You are a travel booking assistant." },
    { "role": "user", "content": "Find me a flight from SFO to JFK next Friday." }
  ],
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "search_flights",
        "description": "Search for available flights",
        "parameters": {
          "type": "object",
          "properties": {
            "origin": { "type": "string" },
            "destination": { "type": "string" },
            "date": { "type": "string" }
          },
          "required": ["origin", "destination", "date"]
        }
      }
    }
  ],
  "state": {
    "user_id": "user-123"
  }
}
```
| Field | Type | Description |
| --- | --- | --- |
| messages | array | Conversation history in OpenAI chat format. Includes system (from instructions), user, assistant, and tool messages. Grows with each turn. |
| tools | array | Tools available to this agent, in OpenAI function calling format. Each entry has type: "function" and a function object with name, description, and parameters. Only tools listed in the agent's tools field appear here. |
| state | object | Current session state. Contains seed data from .withState(), outputKey values from completed agents, and system-managed keys like _user_message_count. |

What your LLM service returns

Your service returns a JSON object telling the worker what to do next. For example, a response requesting a tool call:

```json
{
  "toolCalls": [
    {
      "function_name": "search_flights",
      "function_args": {
        "origin": "SFO",
        "destination": "JFK",
        "date": "2026-03-20"
      }
    }
  ]
}
```
| Field | Type | Description |
| --- | --- | --- |
| content | string \| null | Text response. Use null for silent agents that only make tool calls. If the agent has an outputKey, this value is saved to session state when the agent finishes. |
| toolCalls | array \| null | Tool calls for the worker to execute. Each has function_name (matching a tool ID) and function_args (passed to the tool endpoint). |
| exitFlow | boolean | Set to true when the agent's task is complete. The worker stops the agent's orchestration loop. |
| escalate | boolean | Set to true to stop the parent container (LoopAgent or SequentialAgent) as well. |
| hitlRequest | object \| null | Pause the workflow for Human-in-the-Loop approval. Contains prompt (display text) and actionDetails (structured metadata). |
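Beyond requesting tools, the other common response shapes are a plain text turn and the control signals above. As a sketch, using the field names from the contract (the values here are illustrative):

```typescript
// Illustrative response bodies your LLM service might return.
// Field names follow the contract above; the values are made up.

// A plain text reply: the worker records it and the conversation continues.
const textTurn = {
  content: 'I found two flights matching your search.',
}

// A completion signal: the worker stops this agent's loop. If the agent
// has an outputKey, content is saved to session state.
const completion = {
  content: 'Booking confirmed for UA-201.',
  exitFlow: true,
}

// A Human-in-the-Loop pause before a sensitive action.
const hitlPause = {
  content: null,
  hitlRequest: {
    prompt: 'Approve booking flight UA-201 for $300?',
    actionDetails: { flight: 'UA-201', price: 300 },
  },
}
```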

Tool endpoint contract

Tool endpoints handle the work your agents delegate via toolCalls. Each HTTP tool in your workflow's globalTools array points to one of your endpoints. When your LLM service returns a tool call, the ADK worker calls the corresponding tool endpoint and feeds the result back into the conversation.

The tool call structure is defined by PendingToolCallSchema from @graph-compose/core. Tool definitions use GlobalHttpToolDefinitionSchema and GlobalAgentToolDefinitionSchema. See Package types for all available schemas.

What your tool endpoint receives

The ADK worker sends a POST request with a JSON body containing the tool name, the arguments from the agent's function_args, and a correlation ID:

Request body your tool endpoint receives

```json
{
  "tool_name": "search_flights",
  "tool_args": {
    "origin": "SFO",
    "destination": "JFK",
    "date": "2026-03-20"
  },
  "tool_call_id": "call_abc123"
}
```
| Field | Type | Description |
| --- | --- | --- |
| tool_name | string | The tool's ID, matching the id in your globalTools definition. |
| tool_args | object | The arguments your LLM returned in function_args. Passed through as-is. |
| tool_call_id | string \| null | A correlation ID from the LLM's function call, used for tracking through the call/response cycle. |
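A minimal handler for this contract might look like the following sketch. It is a plain function rather than a full server, so you can wire it into whatever HTTP framework you use; the searchFlights lookup is a hypothetical stand-in for your real data source.

```typescript
// Shape of the tool request body, per the table above.
interface ToolRequestBody {
  tool_name: string
  tool_args: Record<string, unknown>
  tool_call_id: string | null
}

// Hypothetical flight lookup; replace with your real data source.
function searchFlights(args: Record<string, unknown>) {
  return {
    flights: [{ airline: 'UA', flight: 'UA-201', price: 300 }],
    query: args,
  }
}

// Dispatch on tool_name and return any JSON body that suits the tool.
function handleToolRequest(body: ToolRequestBody): unknown {
  switch (body.tool_name) {
    case 'search_flights':
      return searchFlights(body.tool_args)
    default:
      // The worker feeds whatever you return back to the agent, so an
      // error payload is a reasonable way to report failures.
      return { error: `Unknown tool: ${body.tool_name}` }
  }
}
```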

The worker also sends context headers with every tool request:

| Header | Description |
| --- | --- |
| X-Temporal-Workflow-ID | Parent workflow ID. |
| X-Temporal-Activity-ID | Temporal activity ID for this execution. |
| X-Temporal-Attempt | Retry attempt number (starts at 1). |
| X-Tool-Name | The tool name being called. |
| X-Tool-Call-ID | Same as tool_call_id in the body. |

Any custom headers you define in the tool's httpConfig.headers are also included. This is where you add authentication for your tool endpoint, such as Authorization: Bearer {{ $secret('tool_api_key') }}.
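As a sketch, a globalTools entry carrying an auth header might look like this. The url and secret name are placeholders, and the type value is an assumption; consult GlobalHttpToolDefinitionSchema for the authoritative field list.

```typescript
// Sketch of an HTTP tool definition with a custom auth header.
// 'http' as the type discriminator is an assumption; the url and
// secret name are placeholders for your own values.
const searchFlightsTool = {
  type: 'http',
  id: 'search_flights',
  httpConfig: {
    url: 'https://tools.example.com/search-flights',
    headers: {
      Authorization: "Bearer {{ $secret('tool_api_key') }}",
    },
  },
}
```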

What your tool endpoint returns

There is no enforced schema for tool responses. Your tool endpoint returns a standard HTTP response with any JSON body. The ADK worker parses the response body as JSON (falling back to plain text if JSON parsing fails), wraps it internally using HttpToolActivityOutputSchema, and feeds the body back to the agent as a function_response.

Example tool response

```json
{
  "flights": [
    { "airline": "UA", "flight": "UA-201", "price": 300, "departure": "10:00 AM" },
    { "airline": "AA", "flight": "AA-1234", "price": 350, "departure": "2:00 PM" }
  ]
}
```

Return any JSON structure that makes sense for your tool. The agent receives the response wrapped in a function_response message in the conversation:

How the agent sees the tool result

```json
{
  "role": "tool",
  "content": [
    {
      "function_response": {
        "name": "search_flights",
        "response": {
          "output": {
            "flights": [
              { "airline": "UA", "flight": "UA-201", "price": 300, "departure": "10:00 AM" },
              { "airline": "AA", "flight": "AA-1234", "price": 350, "departure": "2:00 PM" }
            ]
          }
        }
      }
    }
  ]
}
```

The output field contains whatever your endpoint returned. Only the response body is passed to the agent. HTTP status codes and headers are not included in the conversation.

Package types

Both @graph-compose/core and @graph-compose/client export Zod schemas and inferred TypeScript types for every structure on this page. Use these for runtime validation or to type your endpoint handlers.

Import from @graph-compose/core

```typescript
import {
  // LLM service request body
  AdkRequestBodySchema,        // Zod schema
  type AdkRequestBody,         // TypeScript type

  // LLM service response
  HttpResponseOutputSchema,    // Zod schema
  type HttpResponseOutput,     // TypeScript type

  // Tool calls (in LLM response)
  PendingToolCallSchema,       // Zod schema
  type PendingToolCall,        // TypeScript type

  // HITL request (in LLM response)
  HitlRequestSchema,           // Zod schema
  type HitlRequest,            // TypeScript type

  // Message content (in messages array)
  type AdkMessageContent,      // TypeScript type

  // Tool definitions (workflow config)
  GlobalHttpToolDefinitionSchema,
  type GlobalHttpToolDefinition,
  GlobalAgentToolDefinitionSchema,
  type GlobalAgentToolDefinition,
} from '@graph-compose/core'
```

Schema reference

| Schema | Type | Describes |
| --- | --- | --- |
| AdkRequestBodySchema | AdkRequestBody | Request body sent to your LLM service. Fields: messages, tools, state. |
| HttpResponseOutputSchema | HttpResponseOutput | Response your LLM service returns. Fields: content, toolCalls, exitFlow, escalate, hitlRequest, statusCode, waitForUserInput. |
| PendingToolCallSchema | PendingToolCall | A single tool call in the toolCalls array. Fields: id, function_name, function_args. |
| HitlRequestSchema | HitlRequest | HITL confirmation request. Fields: prompt, actionDetails. |
| GlobalHttpToolDefinitionSchema | GlobalHttpToolDefinition | HTTP tool definition in globalTools. Fields: type, id, httpConfig, activityConfig, outputKey. |
| GlobalAgentToolDefinitionSchema | GlobalAgentToolDefinition | Agent tool definition in globalTools. Fields: type, id, targetAgentId, skipSummarization, activityConfig, outputKey. |

Typing your LLM service handler

Example: typed Express handler for your LLM service

```typescript
import express, { type Request, type Response } from 'express'
import type { AdkRequestBody, HttpResponseOutput } from '@graph-compose/core'

const app = express()
app.use(express.json())

app.post('/chat', (req: Request<{}, {}, AdkRequestBody>, res: Response<HttpResponseOutput>) => {
  const { messages, tools, state } = req.body

  // Your LLM logic here...

  res.json({
    content: 'The weather in San Francisco is 72F and sunny.',
    exitFlow: true,
  })
})
```

Validating at runtime

All schemas are Zod objects, so you can parse incoming data to validate it:

Runtime validation with Zod

```typescript
import { AdkRequestBodySchema, HttpResponseOutputSchema } from '@graph-compose/core'

// Validate an incoming LLM request
const parsed = AdkRequestBodySchema.parse(req.body)

// Validate your own response before sending
const response = HttpResponseOutputSchema.parse({
  content: 'Done.',
  exitFlow: true,
})
```

Next steps