Tools

Tools extend what ADK agents can do. An agent calls a tool by returning toolCalls in its response, and Graph Compose executes the tool and feeds the result back into the conversation. Two tool types are available: HTTP tools for calling external APIs, and agent tools for delegating to another agent.

How tools work

Tools are defined in the globalTools array at the workflow level. Each LlmAgent references the tools it can use by ID in its tools field.

flowchart LR
  A[LLM Agent] -->|returns toolCalls| B[ADK Worker]
  B -->|executes| C[Tool Endpoint]
  C -->|HTTP response| B
  B -->|adds function_response| A
  style A fill:#4F46E5,stroke:#4338CA,color:#fff,rx:10
  style B fill:#8B5CF6,stroke:#7C3AED,color:#fff,rx:10
  style C fill:#10B981,stroke:#059669,color:#fff,rx:10

The execution flow:

  1. Your LLM service returns a response with toolCalls containing function_name and function_args.
  2. The ADK worker looks up the tool by function_name in the workflow's globalTools.
  3. For HTTP tools, the worker makes an HTTP request to the tool's endpoint. For agent tools, it runs the target agent.
  4. The tool result is added to the conversation history as a function_response message.
  5. Your LLM service is called again with the updated conversation, including the tool result.

An agent can call multiple tools in a single response. The ADK worker executes each tool and adds all results before calling your LLM service again.
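The round trip above can be sketched in TypeScript. The toolCalls and function_response shapes follow this page's contract; the executor map is a hypothetical stand-in for the ADK worker's HTTP/agent dispatch, and the flight payload is illustrative:

```typescript
type ToolCall = { function_name: string; function_args: Record<string, unknown> };
type Message =
  | { role: 'assistant'; toolCalls: ToolCall[] }
  | { role: 'tool'; content: { function_response: { name: string; response: unknown } }[] };

// Hypothetical executors keyed by tool id (the id must match function_name).
const executors: Record<string, (args: Record<string, unknown>) => unknown> = {
  search_flights: () => ({ flights: [{ airline: 'UA', price: 300 }] }),
};

function executeToolCalls(
  history: Message[],
  response: { toolCalls: ToolCall[] },
): Message[] {
  // Every tool call in the response is executed before the LLM is called again.
  const results = response.toolCalls.map((call) => ({
    function_response: {
      name: call.function_name,
      response: { output: executors[call.function_name](call.function_args) },
    },
  }));
  return [
    ...history,
    { role: 'assistant', toolCalls: response.toolCalls },
    { role: 'tool', content: results },
  ];
}
```

The key point the sketch captures: all tool results are appended to the history in one batch, and only then is the LLM service called again.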

HTTP tools

An HTTP tool makes an HTTP request to an external endpoint when invoked. Use HTTP tools for API calls, data lookups, and any external service integration.

import { createHttpTool } from '@graph-compose/client/adk-helpers'

createHttpTool({
  id: 'search_flights',
  httpConfig: {
    url: 'https://api.flights.example.com/search',
    method: 'POST',
    headers: {
      Authorization: 'Bearer {{ $secret("flights_api_key") }}',
    },
  },
  outputKey: 'flight_results',
  activityConfig: {
    startToCloseTimeout: '30s',
    retryPolicy: { maximumAttempts: 3 },
  },
})
| Field | Type | Required | Description |
| --- | --- | --- | --- |
| id | string | Yes | Unique tool identifier. Must match the function_name the agent uses in toolCalls. |
| httpConfig | object | Yes | HTTP configuration: url, method, optional headers and body. |
| outputKey | string | No | If set, the tool's response is saved to session_state[outputKey]. |
| activityConfig | object | No | Temporal activity settings: timeouts and retry policy. |

Your tool endpoint contract

When the ADK worker calls your HTTP tool, it sends a POST request with the tool name, the LLM's function_args, and a correlation ID. Your endpoint returns any JSON response, which the worker wraps in a function_response and feeds back to the agent.

The full request body, response format, context headers, and error handling are documented in Your HTTP Endpoints: Tool endpoint contract.
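A tool endpoint can be sketched as a plain request handler. The field names below (function_name, function_args) are assumptions based on this page's description; check the Tool endpoint contract for the exact wire format, and plug the handler into any HTTP server's POST route:

```typescript
interface ToolRequest {
  function_name: string;
  function_args: Record<string, unknown>;
}

// Framework-agnostic handler: takes the parsed POST body, returns the
// value to send back as the JSON response body. The worker wraps whatever
// you return in a function_response for the agent.
function handleToolRequest(body: ToolRequest): unknown {
  switch (body.function_name) {
    case 'search_flights':
      // Illustrative payload; any JSON value is acceptable.
      return { flights: [{ airline: 'UA', price: 300, to: body.function_args.to }] };
    default:
      return { error: `unknown tool: ${body.function_name}` };
  }
}
```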

Agent tools

An agent tool delegates work to another agent in the workflow. When the calling agent invokes an agent tool, the target agent runs its full lifecycle (LLM calls, tool usage, etc.) and its final text response is returned to the calling agent.

import { createAgentTool } from '@graph-compose/client/adk-helpers'

createAgentTool({
  id: 'flight_specialist_tool',
  targetAgentId: 'FlightSpecialist',
  outputKey: 'specialist_result',
})
| Field | Type | Required | Description |
| --- | --- | --- | --- |
| id | string | Yes | Unique tool identifier. |
| targetAgentId | string | Yes | ID of the agent (from the workflow's agents list) to delegate to. |
| outputKey | string | No | If set, the target agent's final text response is saved to session state. |
| skipSummarization | boolean | No | If true, the target agent's raw response is returned without summarization. Defaults to false. |
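Putting the pieces together: the tool lives in the workflow's globalTools and the calling agent references it by ID in its tools field. The sketch below uses plain object literals to show that shape; the exact workflow-builder API may differ from this assumption:

```typescript
// Hypothetical workflow shape: a coordinator that can delegate to a
// specialist agent via an agent tool, while the specialist itself uses
// an HTTP tool.
const workflow = {
  agents: [
    {
      id: 'Coordinator',
      type: 'LlmAgent',
      // The coordinator decides at runtime whether to delegate.
      tools: ['flight_specialist_tool'],
    },
    { id: 'FlightSpecialist', type: 'LlmAgent', tools: ['search_flights'] },
  ],
  globalTools: [
    { id: 'flight_specialist_tool', targetAgentId: 'FlightSpecialist' },
    {
      id: 'search_flights',
      httpConfig: { url: 'https://api.flights.example.com/search', method: 'POST' },
    },
  ],
};
```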

How agent tools differ from orchestrators

Agent tools and orchestrator agents (Sequential, Parallel, Loop) both compose multiple agents, but they serve different purposes:

  • Agent tools enable dynamic delegation. The calling agent decides at runtime whether to invoke the specialist, based on the conversation. This is the coordinator-dispatcher pattern.
  • Orchestrator agents enforce structured flows. Sub-agents always execute in the defined order (sequential), concurrently (parallel), or repeatedly (loop).

See Multi-Agent Orchestration for detailed patterns.

Tool response format

When a tool executes, its result is added to the conversation as a function_response message. Your LLM service receives it in the messages array on the next call.

The two tool types produce different response shapes:

| Tool type | response field contains |
| --- | --- |
| HTTP tool | { "output": &lt;your endpoint's response body&gt; } |
| Agent tool | { "result": "&lt;target agent's final text&gt;" } |

For HTTP tools, the output field contains whatever your endpoint returned (see What your tool endpoint returns). For agent tools, the result field contains the target agent's final text content:

Agent tool result in conversation

{
  "role": "tool",
  "content": [
    {
      "function_response": {
        "name": "flight_specialist_tool",
        "response": {
          "result": "Found 2 flights to SFO. UA at $300 (10:00 AM) and AA at $350 (2:00 PM)."
        }
      }
    }
  ]
}
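For comparison, an HTTP tool's result follows the same message shape but nests the endpoint's response body under output instead of result. A sketch, with an illustrative payload:

```typescript
// HTTP tool result in conversation: the endpoint's JSON body appears
// under "output" rather than "result".
const httpToolMessage = {
  role: 'tool',
  content: [
    {
      function_response: {
        name: 'search_flights',
        response: {
          output: { flights: [{ airline: 'UA', price: 300 }] },
        },
      },
    },
  ],
};
```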

Saving tool results with outputKey

When a tool has outputKey configured, its response is saved to session state. This is useful when multiple agents need access to the same tool result without re-executing the tool.

For HTTP tools, the complete response body is stored:

HTTP tool output in session state

{
  "state": {
    "flight_results": {
      "flights": [
        { "airline": "UA", "price": 300 },
        { "airline": "AA", "price": 350 }
      ]
    }
  }
}

For agent tools, the target agent's final text response is stored:

Agent tool output in session state

{
  "state": {
    "specialist_result": "Found 2 flights to SFO..."
  }
}

Best practices

  • Use descriptive tool IDs that match the action: search_flights, validate_address, not tool_1.
  • Set outputKey on tools whose results are needed by agents that do not directly call the tool.
  • Configure activityConfig with timeouts for tools that call slow external APIs.
  • Keep tool endpoints focused on a single action. If a tool needs to do multiple things, consider using an agent tool with a dedicated agent.
  • Use agent tools for complex delegations that require LLM reasoning. Use HTTP tools for deterministic API calls.

Next steps