Learn how to get LLMs to respond to queries with named functions and structured arguments.
To use function calling, define the functions you want the LLM to have access to and pass them into your query using the `tools` key. If the LLM decides that one or more of the available functions should be used to answer a query, it will respond with an array of the function names and arguments to call in the `tool_calls` key of its response.

You can then use the data from `tool_calls` to invoke the named functions and get their results, which you can either provide directly to the user or pass back into subsequent LLM queries for further processing.
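That invoke-and-feed-back step can be sketched as a small dispatcher that maps each entry in `tool_calls` to a local Python function and packages the results as `tool` messages for a follow-up query. The stub weather function here is an illustrative placeholder, not a real implementation:

```python
import json

# Local implementation of a function we expose to the LLM.
# Stub: a real version would call an actual weather API.
def get_current_weather(location, unit="fahrenheit"):
    return {"location": location, "temperature": 72, "unit": unit}

AVAILABLE_FUNCTIONS = {"get_current_weather": get_current_weather}

def execute_tool_calls(tool_calls):
    """Invoke each named function and build 'tool' messages to send back."""
    messages = []
    for call in tool_calls:
        name = call["function"]["name"]
        # Arguments arrive as a JSON-encoded string, so decode them first.
        args = json.loads(call["function"]["arguments"])
        result = AVAILABLE_FUNCTIONS[name](**args)
        messages.append({
            "role": "tool",
            "tool_call_id": call["id"],
            "content": json.dumps(result),
        })
    return messages
```

Appending these messages to the conversation and re-querying the LLM lets it compose a final natural-language answer from the function results.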
Function calling is supported by the following models:

- `moonshotai/Kimi-K2-Instruct`
- `meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8`
- `meta-llama/Llama-4-Scout-17B-16E-Instruct`
- `meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo`
- `meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo`
- `meta-llama/Meta-Llama-3.1-405B-Instruct-Turbo`
- `meta-llama/Llama-3.3-70B-Instruct-Turbo`
- `meta-llama/Llama-3.2-3B-Instruct-Turbo`
- `Qwen/Qwen2.5-7B-Instruct-Turbo`
- `Qwen/Qwen2.5-72B-Instruct-Turbo`
- `Qwen/Qwen3-235B-A22B-fp8-tput`
- `deepseek-ai/DeepSeek-V3`
- `mistralai/Mistral-Small-24B-Instruct-2501`
- `arcee-ai/virtuoso-large`
- `arcee-ai/virtuoso-medium-v2`
- `arcee-ai/caller`
As an example, let's define a `get_current_weather` function, which takes in two named arguments, `location` and `unit`:
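In the JSON Schema format used by OpenAI-compatible APIs, that definition might look like the following sketch (the description strings and enum values are illustrative):

```python
# Tool definition for get_current_weather: the "parameters" field is a
# JSON Schema object describing the function's named arguments.
get_current_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city and state, e.g. San Francisco, CA",
                },
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["location"],
        },
    },
}
```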
We pass this function definition into our query using the `tools` key alongside the user's query. Let's suppose the user asks, "What is the current temperature of New York, San Francisco and Chicago?"
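A request for that query could be assembled roughly as follows; the model name is one of those listed above, and the tool definition is abbreviated for space:

```python
import json

# Request payload for an OpenAI-compatible chat completions endpoint.
payload = {
    "model": "meta-llama/Llama-3.3-70B-Instruct-Turbo",
    "messages": [
        {
            "role": "user",
            "content": "What is the current temperature of New York, "
                       "San Francisco and Chicago?",
        },
    ],
    # The function definitions the LLM may call.
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {"type": "string"},
                    "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                },
                "required": ["location"],
            },
        },
    }],
}

body = json.dumps(payload)  # send this as the POST body
```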
The `tool_calls` key of the LLM's response will look like this:
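A plausible response contains one call per city; the `id` values will differ in practice, and note that each `arguments` field is a JSON-encoded string rather than a nested object:

```python
# Illustrative contents of the response's tool_calls array.
tool_calls = [
    {
        "id": "call_1",
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "arguments": '{"location": "New York, NY", "unit": "fahrenheit"}',
        },
    },
    {
        "id": "call_2",
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "arguments": '{"location": "San Francisco, CA", "unit": "fahrenheit"}',
        },
    },
    {
        "id": "call_3",
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "arguments": '{"location": "Chicago, IL", "unit": "fahrenheit"}',
        },
    },
]
```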
If you provide multiple functions in `tools`, the LLM will automatically attempt to use the most appropriate one when generating responses.
If you'd like to manually select a specific tool to use for a completion, pass the tool's name to the `tool_choice` parameter:
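A minimal sketch of such a request is below; the nested-object form of `tool_choice` follows the common OpenAI-compatible convention, though some APIs may also accept the bare function name:

```python
# Force the model to call get_current_weather rather than choosing freely.
payload = {
    "model": "meta-llama/Llama-3.3-70B-Instruct-Turbo",
    "messages": [
        {"role": "user", "content": "What is the current temperature in Chicago?"},
    ],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "parameters": {
                "type": "object",
                "properties": {"location": {"type": "string"}},
                "required": ["location"],
            },
        },
    }],
    # Name the specific tool the model must use for this completion.
    "tool_choice": {
        "type": "function",
        "function": {"name": "get_current_weather"},
    },
}
```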