How to pass tool outputs to chat models

Prerequisites

This guide assumes familiarity with chat models, LangChain tools, and tool calling.

If we're using the model-generated tool invocations to actually call tools and want to pass the tool results back to the model, we can do so using ToolMessages and ToolCalls. First, let's define some tools and a chat model instance.

import { z } from "zod";
import { tool } from "@langchain/core/tools";

const addTool = tool(
  async ({ a, b }) => {
    return a + b;
  },
  {
    name: "add",
    schema: z.object({
      a: z.number(),
      b: z.number(),
    }),
    description: "Adds a and b.",
  }
);

const multiplyTool = tool(
  async ({ a, b }) => {
    return a * b;
  },
  {
    name: "multiply",
    schema: z.object({
      a: z.number(),
      b: z.number(),
    }),
    description: "Multiplies a and b.",
  }
);

const tools = [addTool, multiplyTool];
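
As a quick sanity check, we can invoke a tool directly with plain arguments. Called this way, the tool returns its raw output rather than a ToolMessage (a minimal sketch, not part of the main flow):

// Invoking a tool with plain arguments returns the raw result of the
// tool function (here, the number 36), not a ToolMessage.
console.log(await multiplyTool.invoke({ a: 3, b: 12 })); // 36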

Pick your chat model:

Install dependencies

yarn add @langchain/openai 

Add environment variables

OPENAI_API_KEY=your-api-key

Instantiate the model

import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

// Bind the tools to the model so it can generate tool calls
const llmWithTools = llm.bindTools(tools);

If we invoke a tool with a ToolCall, we'll automatically get back a ToolMessage that can be fed back to the model:

Compatibility

This functionality requires @langchain/core>=0.2.16. Please see here for a guide on upgrading.
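
To make the shapes concrete, here is a minimal sketch of a ToolCall as it appears in a model response, and of the ToolMessage produced when a tool is invoked with it (the id is a made-up placeholder; real ids come from the model):

// A ToolCall, as found in aiMessage.tool_calls, has roughly this shape.
// The id below is a made-up placeholder.
const exampleToolCall = {
  name: "multiply",
  args: { a: 3, b: 12 },
  id: "call_example123",
  type: "tool_call" as const,
};

// Invoking the tool with the whole ToolCall yields a ToolMessage whose
// tool_call_id echoes the call's id.
const exampleToolMessage = await multiplyTool.invoke(exampleToolCall);
console.log(exampleToolMessage.tool_call_id); // call_example123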

import { HumanMessage } from "@langchain/core/messages";

const messages = [new HumanMessage("What is 3 * 12? Also, what is 11 + 49?")];

const aiMessage = await llmWithTools.invoke(messages);

messages.push(aiMessage);

const toolsByName = {
  add: addTool,
  multiply: multiplyTool,
};

for (const toolCall of aiMessage.tool_calls) {
  const selectedTool = toolsByName[toolCall.name];
  const toolMessage = await selectedTool.invoke(toolCall);
  messages.push(toolMessage);
}

console.log(messages);
[
  HumanMessage {
    lc_serializable: true,
    lc_kwargs: {
      content: 'What is 3 * 12? Also, what is 11 + 49?',
      additional_kwargs: {},
      response_metadata: {}
    },
    lc_namespace: [ 'langchain_core', 'messages' ],
    content: 'What is 3 * 12? Also, what is 11 + 49?',
    name: undefined,
    additional_kwargs: {},
    response_metadata: {},
    id: undefined
  },
  AIMessage {
    lc_serializable: true,
    lc_kwargs: {
      content: '',
      tool_calls: [Array],
      invalid_tool_calls: [],
      additional_kwargs: [Object],
      id: 'chatcmpl-9llAzVKdHCJkcUCnwGx62bqesSJPB',
      response_metadata: {}
    },
    lc_namespace: [ 'langchain_core', 'messages' ],
    content: '',
    name: undefined,
    additional_kwargs: { function_call: undefined, tool_calls: [Array] },
    response_metadata: { tokenUsage: [Object], finish_reason: 'tool_calls' },
    id: 'chatcmpl-9llAzVKdHCJkcUCnwGx62bqesSJPB',
    tool_calls: [ [Object], [Object] ],
    invalid_tool_calls: [],
    usage_metadata: { input_tokens: 87, output_tokens: 50, total_tokens: 137 }
  },
  ToolMessage {
    lc_serializable: true,
    lc_kwargs: {
      content: '36',
      artifact: undefined,
      tool_call_id: 'call_7P5ZjvqWc7jrXjWDkhZ6MU4b',
      name: 'multiply',
      additional_kwargs: {},
      response_metadata: {}
    },
    lc_namespace: [ 'langchain_core', 'messages' ],
    content: '36',
    name: 'multiply',
    additional_kwargs: {},
    response_metadata: {},
    id: undefined,
    tool_call_id: 'call_7P5ZjvqWc7jrXjWDkhZ6MU4b',
    artifact: undefined
  },
  ToolMessage {
    lc_serializable: true,
    lc_kwargs: {
      content: '60',
      artifact: undefined,
      tool_call_id: 'call_jbyowegkI0coHbnnHs7HLELC',
      name: 'add',
      additional_kwargs: {},
      response_metadata: {}
    },
    lc_namespace: [ 'langchain_core', 'messages' ],
    content: '60',
    name: 'add',
    additional_kwargs: {},
    response_metadata: {},
    id: undefined,
    tool_call_id: 'call_jbyowegkI0coHbnnHs7HLELC',
    artifact: undefined
  }
]
await llmWithTools.invoke(messages);
AIMessage {
  lc_serializable: true,
  lc_kwargs: {
    content: '3 * 12 is 36, and 11 + 49 is 60.',
    tool_calls: [],
    invalid_tool_calls: [],
    additional_kwargs: { function_call: undefined, tool_calls: undefined },
    id: 'chatcmpl-9llB0VVQNdufqhJHHtY9yCPeQeKLZ',
    response_metadata: {}
  },
  lc_namespace: [ 'langchain_core', 'messages' ],
  content: '3 * 12 is 36, and 11 + 49 is 60.',
  name: undefined,
  additional_kwargs: { function_call: undefined, tool_calls: undefined },
  response_metadata: {
    tokenUsage: { completionTokens: 19, promptTokens: 153, totalTokens: 172 },
    finish_reason: 'stop'
  },
  id: 'chatcmpl-9llB0VVQNdufqhJHHtY9yCPeQeKLZ',
  tool_calls: [],
  invalid_tool_calls: [],
  usage_metadata: { input_tokens: 153, output_tokens: 19, total_tokens: 172 }
}

Note that we pass back the same tool_call_id in each ToolMessage that we received from the model; this helps the model match tool responses with the corresponding tool calls.
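
If you execute the tool logic yourself rather than invoking the tool with the ToolCall, you can construct the ToolMessage manually. A minimal sketch, assuming you echo back the id of the matching tool call:

import { ToolMessage } from "@langchain/core/messages";

// Manual construction: tool_call_id must match the id of the tool call
// this result answers, so the model can pair them up.
const firstToolCall = aiMessage.tool_calls[0];
const manualToolMessage = new ToolMessage({
  content: "36",
  tool_call_id: firstToolCall.id ?? "",
  name: firstToolCall.name,
});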

You've now seen how to pass tool outputs back to a model.


