Introduction
Function calling fine-tuning allows you to adapt models to reliably invoke tools and structured functions in response to user queries, which is useful for building agents that call external APIs. This guide covers the specific steps for function calling fine-tuning. For general fine-tuning concepts, environment setup, and hyperparameter details, refer to the Fine-tuning Guide.

Quick Links
- Dataset Requirements
- Supported Models
- Check and Upload Dataset
- Start a Fine-tuning Job
- Monitor Progress
- Deploy Your Model
Function Calling Dataset
Dataset Requirements:
- Format: `.jsonl` file
- Supported types: Conversational, Preferential (see the Fine-tuning Guide for more details on their purpose)
- Each line may contain a `tools` field listing the tools the model can use
- Assistant messages can include `tool_calls` (structured invocation requests) instead of `content`
- Tool call results are provided via messages with the `tool` role
Conversation Tool Calling Format
This is what one row/example from the function calling dataset looks like in conversation format:
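A minimal sketch of one conversational row, using a hypothetical `get_weather` tool and OpenAI-style tool-calling fields (in the actual `.jsonl` file, each row sits on a single line; it is pretty-printed here for readability):

```json
{
  "messages": [
    {"role": "user", "content": "What's the weather in Paris?"},
    {"role": "assistant", "content": "", "tool_calls": [
      {"id": "call_1", "type": "function",
       "function": {"name": "get_weather", "arguments": "{\"location\": \"Paris\"}"}}
    ]},
    {"role": "tool", "tool_call_id": "call_1", "content": "{\"temp_c\": 18}"},
    {"role": "assistant", "content": "It is currently 18 °C in Paris."}
  ],
  "tools": [
    {"type": "function",
     "function": {
       "name": "get_weather",
       "description": "Get current weather for a location",
       "parameters": {
         "type": "object",
         "properties": {"location": {"type": "string"}},
         "required": ["location"]
       }}}
  ]
}
```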
Preference Tool Calling Format
For preference fine-tuning, the `tools` field should be defined inside `input`:
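A sketch of one preferential row under the same hypothetical `get_weather` tool; the `input`/`preferred_output`/`non_preferred_output` field names follow the common preference (DPO-style) layout and should be checked against the Fine-tuning Guide:

```json
{
  "input": {
    "messages": [
      {"role": "user", "content": "What's the weather in Paris?"}
    ],
    "tools": [
      {"type": "function",
       "function": {
         "name": "get_weather",
         "description": "Get current weather for a location",
         "parameters": {
           "type": "object",
           "properties": {"location": {"type": "string"}},
           "required": ["location"]
         }}}
    ]
  },
  "preferred_output": [
    {"role": "assistant", "content": "", "tool_calls": [
      {"id": "call_1", "type": "function",
       "function": {"name": "get_weather", "arguments": "{\"location\": \"Paris\"}"}}
    ]}
  ],
  "non_preferred_output": [
    {"role": "assistant", "content": "I'm unable to check the weather."}
  ]
}
```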
Supported Models
The following models support function calling fine-tuning:

| Organization | Model Name | Model String for API |
|---|---|---|
| Qwen | Qwen 2.5 1.5B | Qwen/Qwen2.5-1.5B |
| Qwen | Qwen 2.5 1.5B Instruct | Qwen/Qwen2.5-1.5B-Instruct |
| Qwen | Qwen 2.5 3B | Qwen/Qwen2.5-3B |
| Qwen | Qwen 2.5 3B Instruct | Qwen/Qwen2.5-3B-Instruct |
| Qwen | Qwen 2.5 7B | Qwen/Qwen2.5-7B |
| Qwen | Qwen 2.5 7B Instruct | Qwen/Qwen2.5-7B-Instruct |
| Qwen | Qwen 2.5 14B | Qwen/Qwen2.5-14B |
| Qwen | Qwen 2.5 14B Instruct | Qwen/Qwen2.5-14B-Instruct |
| Qwen | Qwen 2.5 32B | Qwen/Qwen2.5-32B |
| Qwen | Qwen 2.5 32B Instruct | Qwen/Qwen2.5-32B-Instruct |
| Qwen | Qwen 2.5 72B | Qwen/Qwen2.5-72B |
| Qwen | Qwen 2.5 72B Instruct | Qwen/Qwen2.5-72B-Instruct |
| Qwen | Qwen 3 0.6B | Qwen/Qwen3-0.6B |
| Qwen | Qwen 3 1.7B | Qwen/Qwen3-1.7B |
| Qwen | Qwen 3 4B | Qwen/Qwen3-4B |
| Qwen | Qwen 3 8B | Qwen/Qwen3-8B |
| Qwen | Qwen 3 14B | Qwen/Qwen3-14B |
| Qwen | Qwen 3 32B | Qwen/Qwen3-32B |
| Qwen | Qwen 3 32B 16k | Qwen/Qwen3-32B-16k |
| Qwen | Qwen 3 30B A3B | Qwen/Qwen3-30B-A3B |
| Qwen | Qwen 3 30B A3B Instruct 2507 | Qwen/Qwen3-30B-A3B-Instruct-2507 |
| Qwen | Qwen 3 235B A22B | Qwen/Qwen3-235B-A22B |
| Qwen | Qwen 3 235B A22B Instruct 2507 | Qwen/Qwen3-235B-A22B-Instruct-2507 |
| Qwen | Qwen 3 VL 8B Instruct | Qwen/Qwen3-VL-8B-Instruct |
| Qwen | Qwen 3 VL 32B Instruct | Qwen/Qwen3-VL-32B-Instruct |
| Qwen | Qwen 3 VL 30B A3B Instruct | Qwen/Qwen3-VL-30B-A3B-Instruct |
| Qwen | Qwen 3 VL 235B A22B Instruct | Qwen/Qwen3-VL-235B-A22B-Instruct |
| Qwen | Qwen 3 Coder 30B A3B Instruct | Qwen/Qwen3-Coder-30B-A3B-Instruct |
| Qwen | Qwen 3 Coder 480B A35B Instruct | Qwen/Qwen3-Coder-480B-A35B-Instruct |
| Qwen | Qwen 3 Next 80B A3B Instruct | Qwen/Qwen3-Next-80B-A3B-Instruct |
| Qwen | Qwen 3 Next 80B A3B Thinking | Qwen/Qwen3-Next-80B-A3B-Thinking |
| Moonshot AI | Kimi K2 Instruct | moonshotai/Kimi-K2-Instruct |
| Moonshot AI | Kimi K2 Thinking | moonshotai/Kimi-K2-Thinking |
| Moonshot AI | Kimi K2 Base | moonshotai/Kimi-K2-Base |
| Moonshot AI | Kimi K2 Instruct 0905 | moonshotai/Kimi-K2-Instruct-0905 |
| Moonshot AI | Kimi K2.5 | moonshotai/Kimi-K2.5 |
| Z.ai | GLM 4.6 | zai-org/GLM-4.6 |
| Z.ai | GLM 4.7 | zai-org/GLM-4.7 |
Check and Upload Dataset
To upload your data, use the CLI or our Python library. You'll need the file ID from the response (it starts with `file-`) to start your fine-tuning job, so store it somewhere before moving on.
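A sketch of checking and uploading a dataset with the Together Python SDK (`pip install together`); the file name is a placeholder, and the script only contacts the API when `TOGETHER_API_KEY` is set:

```python
import json
import os

DATASET = "function_calling_data.jsonl"  # hypothetical file name

# Write a tiny one-row dataset so the example is self-contained.
row = {
    "messages": [
        {"role": "user", "content": "What's the weather in Paris?"},
        {"role": "assistant", "content": "", "tool_calls": [
            {"id": "call_1", "type": "function",
             "function": {"name": "get_weather",
                          "arguments": json.dumps({"location": "Paris"})}}]},
    ],
    "tools": [{"type": "function",
               "function": {"name": "get_weather",
                            "description": "Get current weather",
                            "parameters": {"type": "object",
                                           "properties": {"location": {"type": "string"}},
                                           "required": ["location"]}}}],
}
with open(DATASET, "w") as f:
    f.write(json.dumps(row) + "\n")

# Quick local sanity check: every line must be valid JSON with a messages list.
with open(DATASET) as f:
    rows = [json.loads(line) for line in f]
valid = all(isinstance(r.get("messages"), list) for r in rows)
print("local check passed:", valid)

if os.environ.get("TOGETHER_API_KEY"):
    from together import Together
    client = Together()
    resp = client.files.upload(file=DATASET, check=True)  # server-side format check
    print("file id:", resp.id)  # starts with "file-"; save it for the next step
```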
Starting a Fine-tuning Job
We support both LoRA and full fine-tuning for function calling models. For an exhaustive list of all the available fine-tuning parameters, refer to the Together AI Fine-tuning API Reference.

LoRA Fine-tuning (Recommended)
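A sketch of starting a LoRA job with the Python SDK; the file ID, suffix, and hyperparameter values are placeholders (see the Fine-tuning API Reference for the full parameter list), and the API call only runs when `TOGETHER_API_KEY` is set:

```python
import os

params = dict(
    training_file="file-xxxxxxxx",      # ID returned by the upload step
    model="Qwen/Qwen2.5-7B-Instruct",   # any model string from the table above
    lora=True,                          # train LoRA adapters
    n_epochs=3,
    learning_rate=1e-5,
    suffix="my-fc-model",               # appended to the output model name
)

if os.environ.get("TOGETHER_API_KEY"):
    from together import Together
    client = Together()
    job = client.fine_tuning.create(**params)
    print(job.id)  # save the job ID to monitor progress
```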
Full Fine-tuning
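The same call can request full-parameter training instead of LoRA adapters; again a sketch with placeholder values:

```python
import os

params = dict(
    training_file="file-xxxxxxxx",      # ID returned by the upload step
    model="Qwen/Qwen2.5-7B-Instruct",
    lora=False,                         # full fine-tuning updates all weights
    n_epochs=1,
    learning_rate=1e-5,
)

if os.environ.get("TOGETHER_API_KEY"):
    from together import Together
    client = Together()
    job = client.fine_tuning.create(**params)
    print(job.id)
```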
Monitoring Your Fine-tuning Job
Fine-tuning can take time depending on the model size, dataset size, and hyperparameters. Your job will progress through several states: Pending, Queued, Running, Uploading, and Completed.

Dashboard Monitoring
You can monitor your job on the Together AI jobs dashboard.

Check Status via API
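A sketch of polling a job's status with the Python SDK; the job ID is a placeholder, and the call only runs when `TOGETHER_API_KEY` is set:

```python
import os

# The states a job moves through, as listed above.
STATES = ("Pending", "Queued", "Running", "Uploading", "Completed")

if os.environ.get("TOGETHER_API_KEY"):
    from together import Together
    client = Together()
    job = client.fine_tuning.retrieve("ft-xxxxxxxx")  # job ID from the create step
    print(job.status)  # one of the states above, or an error state
```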
Using Your Fine-tuned Model
Once your fine-tuning job completes, your model will be available for use. You can view your fine-tuned models in your models dashboard.

Dedicated Endpoint Deployment
You can deploy your fine-tuned model on a dedicated endpoint for production use:
- Visit your models dashboard
- Find your fine-tuned model and click "+ CREATE DEDICATED ENDPOINT"
- Select your hardware configuration and scaling options
- Click "DEPLOY"
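Once deployed, the fine-tuned model can be queried like any other model, passing the same tools schema used in training. A sketch with a hypothetical model string (use the exact name shown in your models dashboard) and the `get_weather` tool from earlier; the request only runs when `TOGETHER_API_KEY` is set:

```python
import os

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current weather for a location",
        "parameters": {"type": "object",
                       "properties": {"location": {"type": "string"}},
                       "required": ["location"]},
    },
}]

if os.environ.get("TOGETHER_API_KEY"):
    from together import Together
    client = Together()
    resp = client.chat.completions.create(
        model="your-account/Qwen2.5-7B-Instruct-my-fc-model",  # placeholder name
        messages=[{"role": "user", "content": "What's the weather in Paris?"}],
        tools=tools,
    )
    print(resp.choices[0].message.tool_calls)
```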