Tool Use

Definition: Tool use is the capability that allows a large language model to invoke external functions, APIs, databases, or services instead of only generating text. It is the single feature that separates a chatbot from an agent. A model with tool use can search the web, query a database, send an email, trigger an automation, write to a file, or call another agent — and then incorporate the result back into its reasoning.

Without tool use, an LLM is a closed box: it can only talk about the world using facts frozen at its training cutoff. With tool use, the model becomes an operator. It can read live data, change state in external systems, and pursue multi-step goals that no single prompt could satisfy.

The Tool Use Loop at a Glance

User Goal → LLM Reasons → Tool Call (JSON) → Runtime Dispatch (HTTP / DB / SDK) → External System → Observation → back to LLM Reasons, repeating until the goal is reached → Final Answer

The loop — reason, call, observe, repeat — is the entire pattern. Every framework built on top of it (LangChain, CrewAI, Anthropic's SDK, OpenAI Agents API, Taskade's AI Agents v2) is a variation on these four moves.

EVE calling email and Slack tools during a Genesis app build

Why Tool Use Defines the Agentic Era

Every meaningful AI agent in 2026 is defined by the tools it has access to. A coding agent's power comes from its ability to run a shell, edit files, and execute tests. A research agent's power comes from its ability to fetch web pages, query arXiv, and read PDFs. A Taskade Genesis app comes alive because its built-in agent can read your project, update a task, draft an email, send a Slack message, and post a Stripe payment link — all from one conversation.

The shift is foundational. For the first decade of deep learning, progress came from scaling model parameters. Starting around 2023, progress increasingly came from scaling what the model could do. On many real-world tasks, a GPT-class model with ten well-chosen tools will outperform a larger model with none.

Anthropic's Model Context Protocol and OpenAI's function-calling API both exist for one reason: to make tool use a first-class primitive that works across vendors, frameworks, and agent runtimes.

How Tool Use Works

Tool use follows a predictable loop that every major framework — LangChain, CrewAI, Anthropic's SDK, OpenAI's Agents API — implements the same way.

Step 1 โ€” Tool declaration. The developer registers a list of available tools with the model. Each tool has a name, a natural-language description, and a JSON schema describing its parameters. Example: search_web(query: string, num_results: int = 5) with description "Search the public web and return titles, URLs, and snippets."
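In code, a Step 1 declaration for the search_web example might look like this. The shape shown is the OpenAI function-calling style; exact field names vary slightly by vendor, so treat this as a sketch rather than a canonical format:

```python
# A tool declaration: name, natural-language description, and a JSON Schema
# for parameters. This is everything the model ever sees about the tool.
search_web_tool = {
    "type": "function",
    "function": {
        "name": "search_web",
        "description": (
            "Search the public web and return titles, URLs, and snippets."
        ),
        "parameters": {
            "type": "object",
            "properties": {
                "query": {"type": "string", "description": "Search terms"},
                "num_results": {
                    "type": "integer",
                    "description": "How many results to return",
                    "default": 5,
                },
            },
            "required": ["query"],
        },
    },
}
```

The description and schema are the model's only interface to the tool, which is why the later section on tool anatomy treats them as prompt text.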

Step 2 — Model decision. When the user sends a message, the model reads both the prompt and the tool catalog. If the model decides a tool can help, it emits a structured tool call — a JSON object naming the tool and supplying arguments — instead of (or in addition to) plain text.

Step 3 โ€” Tool execution. The agent runtime intercepts the tool call, validates the arguments against the schema, and executes the underlying function. The raw return value (search results, database rows, HTTP response) is serialized as a tool result.

Step 4 โ€” Observation and continuation. The tool result is appended to the conversation and sent back to the model. The model reads the result and decides what to do next: call another tool, refine its approach, or produce a final answer for the user.
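The four steps can be sketched as a single loop. The model here is a stub that issues one tool call and then answers, standing in for a real LLM API; search_web is likewise a stand-in for a real search backend:

```python
# Minimal sketch of the reason → call → observe → repeat loop.
import json

def search_web(query: str, num_results: int = 5) -> list[dict]:
    # Stand-in for a real search backend.
    return [{"title": f"Result for {query}", "url": "https://example.com"}]

TOOLS = {"search_web": search_web}  # Step 1: the tool catalog

def fake_model(messages: list[dict]) -> dict:
    # Step 2: the model decides. This stub calls a tool once, then answers.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_call": {"name": "search_web",
                              "arguments": {"query": "tool use"}}}
    return {"content": "Here is what I found."}

def run_agent(user_goal: str) -> str:
    messages = [{"role": "user", "content": user_goal}]
    while True:
        reply = fake_model(messages)
        call = reply.get("tool_call")
        if call is None:                                   # final answer
            return reply["content"]
        result = TOOLS[call["name"]](**call["arguments"])  # Step 3: execute
        messages.append({"role": "tool",                   # Step 4: observe
                         "content": json.dumps(result)})

print(run_agent("What is tool use?"))  # → Here is what I found.
```

Swapping fake_model for a real chat-completions call turns this sketch into the skeleton of every framework named above.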

This loop — think, call, observe, repeat — is the same pattern formalized as the ReAct agent architecture in 2022, and it underlies every production agent system shipping today.

The Anatomy of a Tool

A well-designed tool has five parts:

| Part | Purpose | Example |
| --- | --- | --- |
| Name | Short identifier the model references | send_slack_message |
| Description | Natural-language hint the model uses to pick the right tool | "Send a message to a Slack channel. Use when the user asks to notify a team." |
| Parameters | Typed, validated inputs (JSON Schema) | {channel: string, text: string, thread_ts?: string} |
| Return shape | What the tool hands back | {ok: boolean, message_url: string} |
| Error contract | How failures surface | {error: "channel_not_found", retryable: false} |

Good tool descriptions are prompts in disguise. The model never sees your function body — only the description and the schema. If the description is vague, the model will pick the wrong tool or pass the wrong arguments. If the schema is loose, the model will hallucinate parameters that do not exist.
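One defense against hallucinated parameters is to validate every call against the declared schema before executing it. This hand-rolled check is a sketch of the idea; production runtimes typically use a full JSON Schema validator:

```python
# Reject unknown or missing arguments before a tool ever runs.
def validate_args(schema: dict, args: dict) -> list[str]:
    errors = []
    allowed = set(schema["properties"])
    for name in args:
        if name not in allowed:
            errors.append(f"unknown parameter: {name}")
    for name in schema.get("required", []):
        if name not in args:
            errors.append(f"missing required parameter: {name}")
    return errors

schema = {
    "properties": {"channel": {}, "text": {}, "thread_ts": {}},
    "required": ["channel", "text"],
}
print(validate_args(schema, {"channel": "C08X", "text": "hi"}))   # → []
print(validate_args(schema, {"channel": "C08X", "urgency": "high"}))
# → ['unknown parameter: urgency', 'missing required parameter: text']
```

Returning the errors to the model as a tool result, rather than raising, gives it a chance to correct the call on the next turn.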

Types of Tools

| Category | Examples | Agent Use |
| --- | --- | --- |
| Retrieval | Web search, vector database query, document reader | Ground answers in live or private data |
| Computation | Calculator, code interpreter, SQL runner | Deterministic math, data transformation |
| Action | Send email, create ticket, post payment | Change state in external systems |
| Communication | Call another agent, delegate subtask, ask human | Multi-agent orchestration and human-in-the-loop |
| File I/O | Read file, write file, edit code | Software engineering and document workflows |
| Observation | Screenshot, DOM snapshot, log tail | Browser agents and monitoring |

Most production agents expose 10 to 30 tools. Taskade Genesis agents ship with 22+ built-in tools covering all six categories, plus support for custom tools that developers define with a JSON schema — the same pattern as function calling in the OpenAI and Anthropic SDKs.

The Anatomy of a Tool Call

┌─────────────────────────────────────────────────┐
│ tool_call                                       │
├─────────────────────────────────────────────────┤
│ id:        call_abc123                          │
│ name:      send_slack_message                   │
│ arguments: {                                    │
│   "channel": "C08X...",                         │
│   "text":    "Invoice paid by Acme Corp"        │
│ }                                               │
└────────────────────────┬────────────────────────┘
                         │
                         ▼
┌─────────────────────────────────────────────────┐
│ tool_result                                     │
├─────────────────────────────────────────────────┤
│ tool_call_id: call_abc123                       │
│ content: {                                      │
│   "ok": true,                                   │
│   "message_url": "https://slack.com/..."        │
│ }                                               │
└─────────────────────────────────────────────────┘

The JSON shape is broadly consistent across function calling APIs (OpenAI, Anthropic, Google) and formalized in the Model Context Protocol, though field names vary slightly by vendor. The model never runs code. It produces an instruction. Your runtime runs it.
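The runtime's side of this exchange (intercept the call, execute it, pair the result with its call id) can be sketched as follows; send_slack_message here is a stand-in for a real Slack API client:

```python
# Turn a tool_call message into a tool_result message, matched by id.
import json

def send_slack_message(channel: str, text: str) -> dict:
    # Stand-in for a real Slack API call.
    return {"ok": True, "message_url": f"https://slack.com/archives/{channel}"}

REGISTRY = {"send_slack_message": send_slack_message}

def dispatch(tool_call: dict) -> dict:
    fn = REGISTRY[tool_call["name"]]
    result = fn(**tool_call["arguments"])
    return {"tool_call_id": tool_call["id"],  # pair result with its call
            "content": json.dumps(result)}

call = {"id": "call_abc123", "name": "send_slack_message",
        "arguments": {"channel": "C08X", "text": "Invoice paid by Acme Corp"}}
result = dispatch(call)
print(result["tool_call_id"])  # → call_abc123
```

The tool_call_id field matters in production: with parallel tool calls in flight, it is the only thing that ties each observation back to the request that produced it.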

Tool Use in Taskade Genesis

Inside Taskade Genesis, every agent you build — and the EVE meta-agent that builds apps for you — operates on this tool-use loop. EVE's tool belt includes project read/write, task management, database queries, automation triggers, web fetch, and the new Ask Questions tool that pauses a build to ask you a clarifying question mid-flight.

Because Taskade implements the Model Context Protocol on both sides — Taskade as MCP Server and Taskade as MCP Client — your agents can call external MCP tools (Notion, Linear, GitHub), and external clients (Claude Desktop, Cursor, VS Code) can call Taskade tools. One protocol, two directions.

Tool Use vs Function Calling vs Plugins

These three terms describe the same underlying capability with different framings:

  • Function calling is the OpenAI name for the specific API feature that emits structured tool calls. It is the mechanism.
  • Tool use is the broader behavior of an agent choosing and invoking external functions during reasoning. It is the behavior.
  • Plugins (ChatGPT Plugins, 2023) were an early user-facing framing that has since been replaced by MCP and direct tool APIs. They are the legacy term.

In 2026, most agent frameworks have converged on "tools" as the noun and "tool use" as the verb, with function calling as the wire format.

Common Failure Modes

Tool use looks simple in a demo and breaks in production for predictable reasons:

  1. Too many tools — Models lose accuracy when given more than roughly 30 tools. Group related functions into a single tool with an action parameter, or use tiered tool menus.
  2. Ambiguous descriptions — "Get data" is a bad tool name. query_customer_orders(customer_id) is a good one. Descriptions should name the use case, not the implementation.
  3. No error contract — When a tool fails silently, the model assumes success and continues. Return structured errors with retryable hints so the agent can react.
  4. Forgetting idempotency — If the model retries a send_payment tool, you may charge a customer twice. Expose an idempotency_key parameter or enforce it at the tool boundary.
  5. Schema drift — When the tool schema changes, existing prompts may break. Version your tools and migrate agents deliberately.

Taskade's durable execution layer solves most of these at the infrastructure level — retries are deterministic, idempotency is enforced, and every tool call lands in a replayable event log you can inspect on the automation Runs tab.

Frequently Asked Questions About Tool Use

What is tool use in LLMs?

Tool use is the ability of a large language model to call external functions — search the web, query a database, send an email, trigger an automation — instead of only producing text. It turns a chatbot into an agent that can act on the world.

How is tool use different from function calling?

Function calling is the specific API feature (pioneered by OpenAI in 2023) where the model emits a structured JSON tool call instead of free-form text. Tool use is the broader agent behavior of choosing and invoking tools during reasoning. Function calling is the mechanism; tool use is the behavior.

What tools do Taskade Genesis agents use?

Taskade agents ship with 22+ built-in tools covering project management, task creation, database queries, web fetch, automation triggers, Slack messaging, and more. Developers can register custom tools via JSON schema, and any MCP-compatible tool can be plugged in directly through Taskade's MCP client.

How many tools should an agent have?

Most production agents work best with 10–30 well-described tools. Beyond roughly 30, model accuracy drops because the model struggles to pick the right one. If you need more, group related tools into a single parameterized tool or use tiered menus.

Can agents call other agents as tools?

Yes — this is the foundation of multi-agent systems. When an agent treats another agent as a callable tool, you get hierarchical reasoning: a manager agent delegates subtasks, and a specialist agent handles each one. Taskade supports this pattern through AI Agents v2.

Further Reading