Fine-tuning Models

A list of all the models available for fine-tuning.

The following models are available to use with our fine-tuning API. Get started with fine-tuning a model! A minimal job-creation sketch follows the notes below.

  • Training Precision Type indicates the numeric precision used to train each model.
    • AMP (Automatic Mixed Precision): AMP speeds up training and reduces memory usage while preserving the convergence behavior of float32 training. Learn more about AMP in this PyTorch blog.
    • bf16 (bfloat16): All weights are kept in bf16. Some large models on our platform use full bf16 training for better memory usage and training speed.
  • Long-context fine-tuning of Llama 3.1 (8B) Reference, Llama 3.1 Instruct (8B) Reference, Llama 3.1 (70B) Reference, and Llama 3.1 Instruct (70B) Reference at context lengths of 32K-131K is supported only with the LoRA method.
  • For Llama 3.1 (405B) fine-tuning, please contact us.
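
As a minimal sketch of the workflow, assuming the `together` Python SDK (`pip install together`) with `TOGETHER_API_KEY` set; the file name, model, and hyperparameter values below are illustrative, and exact parameter names should be checked against the current API reference:

```python
from together import Together

client = Together()  # reads TOGETHER_API_KEY from the environment

# Upload a JSONL training file (path is illustrative)
training_file = client.files.upload(file="my_training_data.jsonl", purpose="fine-tune")

# Launch a fine-tuning job; the model string comes from the tables below
job = client.fine_tuning.create(
    model="meta-llama/Meta-Llama-3.1-8B-Instruct-Reference",
    training_file=training_file.id,
    n_epochs=3,
    batch_size=16,       # must fall within the model's min/max batch size
    learning_rate=1e-5,
)
print(job.id, job.status)
```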

LoRA Fine-tuning

| Organization | Model Name | Model String for API | Context Length | Max Batch Size | Min Batch Size | Training Precision Type* |
|---|---|---|---|---|---|---|
| Google | google/gemma-3-27b-it | google/gemma-3-27b-it | 8192 | 24 | 8 | AMP |
| Google | google/gemma-3-27b-pt | google/gemma-3-27b-pt | 8192 | 24 | 8 | AMP |
| Google | google/gemma-3-12b-it | google/gemma-3-12b-it | 8192 | 24 | 8 | AMP |
| Google | google/gemma-3-12b-pt | google/gemma-3-12b-pt | 8192 | 24 | 8 | AMP |
| Google | google/gemma-3-4b-it | google/gemma-3-4b-it | 8192 | 40 | 8 | AMP |
| Google | google/gemma-3-4b-pt | google/gemma-3-4b-pt | 8192 | 40 | 8 | AMP |
| Google | google/gemma-3-1b-it | google/gemma-3-1b-it | 8192 | 48 | 8 | AMP |
| Google | google/gemma-3-1b-pt | google/gemma-3-1b-pt | 8192 | 48 | 8 | AMP |
| Deepseek | DeepSeek-R1-Distill-Llama-70B | deepseek-ai/DeepSeek-R1-Distill-Llama-70B | 8192 | 8 | 8 | AMP |
| Deepseek | DeepSeek-R1-Distill-Qwen-14B | deepseek-ai/DeepSeek-R1-Distill-Qwen-14B | 8192 | 40 | 8 | AMP |
| Deepseek | DeepSeek-R1-Distill-Qwen-1.5B | deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B | 8192 | 48 | 8 | AMP |
| Meta | Llama 3.3 Instruct (70B) Reference | meta-llama/Llama-3.3-70B-Instruct-Reference | 8192 | 8 | 8 | AMP |
| Meta | Llama 3.2 Instruct (3B) | meta-llama/Llama-3.2-3B-Instruct | 8192 | 40 | 8 | AMP |
| Meta | Llama 3.2 Instruct (1B) | meta-llama/Llama-3.2-1B-Instruct | 8192 | 40 | 8 | AMP |
| Meta | Llama 3.1 (8B) Reference | meta-llama/Meta-Llama-3.1-8B-Reference | 8192 | 32 | 8 | AMP |
| Meta | Llama 3.1 Instruct (8B) Reference | meta-llama/Meta-Llama-3.1-8B-Instruct-Reference | 8192 | 32 | 8 | AMP |
| Meta | Llama 3.1 (70B) Reference | meta-llama/Meta-Llama-3.1-70B-Reference | 8192 | 8 | 8 | AMP |
| Meta | Llama 3.1 Instruct (70B) Reference | meta-llama/Meta-Llama-3.1-70B-Instruct-Reference | 8192 | 8 | 8 | AMP |
| Meta | Llama 3 (8B) | meta-llama/Meta-Llama-3-8B | 8192 | 32 | 8 | AMP |
| Meta | Llama 3 Instruct (8B) | meta-llama/Meta-Llama-3-8B-Instruct | 8192 | 32 | 8 | AMP |
| Meta | Llama 3 Instruct (70B) | meta-llama/Meta-Llama-3-70B-Instruct | 8192 | 8 | 8 | AMP |
| Meta | Llama-2 Chat (7B) | togethercomputer/llama-2-7b-chat | 4096 | 128 | 8 | AMP |
| Meta | CodeLlama (7B) | codellama/CodeLlama-7b-hf | 16384 | 32 | 8 | AMP |
| Mistral AI | Mixtral-8x7B (46.7B) | mistralai/Mixtral-8x7B-v0.1 | 32768 | 16 | 8 | AMP |
| Mistral AI | Mixtral-8x7B Instruct (46.7B) | mistralai/Mixtral-8x7B-Instruct-v0.1 | 32768 | 16 | 8 | AMP |
| Mistral AI | Mistral 7B Instruct v0.2 | mistralai/Mistral-7B-Instruct-v0.2 | 32768 | 16 | 8 | AMP |
| Mistral AI | Mistral 7B v0.1 | mistralai/Mistral-7B-v0.1 | 8192 | 64 | 8 | AMP |
| Qwen | Qwen2.5-72B | Qwen/Qwen2.5-72B-Instruct | 8192 | 16 | 8 | AMP |
| Qwen | Qwen2.5-14B | Qwen/Qwen2.5-14B-Instruct | 8192 | 40 | 8 | AMP |
| Qwen | Qwen2-1.5B | Qwen/Qwen2-1.5B | 8192 | 48 | 8 | AMP |
| Qwen | Qwen2-1.5B-Instruct | Qwen/Qwen2-1.5B-Instruct | 8192 | 48 | 8 | AMP |
| Qwen | Qwen2-7B | Qwen/Qwen2-7B | 8192 | 32 | 8 | AMP |
| Qwen | Qwen2-7B-Instruct | Qwen/Qwen2-7B-Instruct | 8192 | 32 | 8 | AMP |
| Qwen | Qwen2-72B | Qwen/Qwen2-72B | 8192 | 8 | 8 | AMP |
| Qwen | Qwen2-72B-Instruct | Qwen/Qwen2-72B-Instruct | 8192 | 8 | 8 | AMP |
| Teknium | OpenHermes 2.5 Mistral 7B | teknium/OpenHermes-2p5-Mistral-7B | 8192 | 64 | 8 | AMP |
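
To use the LoRA method with one of the models above, pass `lora=True` when creating the job. A hedged sketch: the `lora_r`, `lora_alpha`, and `lora_dropout` parameter names are assumptions based on the SDK, and the values shown are illustrative rather than recommended settings:

```python
from together import Together

client = Together()

# LoRA fine-tuning of Llama 3.1 (8B) Reference; per the table above,
# batch size must be between 8 and 32 at the 8192 context length.
job = client.fine_tuning.create(
    model="meta-llama/Meta-Llama-3.1-8B-Reference",
    training_file="file-abc123",  # illustrative ID returned by client.files.upload
    lora=True,
    lora_r=8,         # adapter rank (illustrative)
    lora_alpha=16,    # scaling factor (illustrative)
    lora_dropout=0.0,
    batch_size=32,    # max batch size for this model
    n_epochs=1,
)
print(job.id)
```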

LoRA Long-context Fine-tuning

| Organization | Model Name | Model String for API | Context Length | Max Batch Size | Min Batch Size | Training Precision Type* |
|---|---|---|---|---|---|---|
| Deepseek | DeepSeek-R1-Distill-Llama-70B | deepseek-ai/DeepSeek-R1-Distill-Llama-70B-32k | 32768 | 1* | 1* | AMP |
| Meta | Llama 3.3 Instruct (70B) Reference | meta-llama/Llama-3.3-70B-32k-Instruct-Reference | 32768 | 1* | 1* | AMP |
| Meta | Llama 3.1 (8B) Reference | meta-llama/Meta-Llama-3.1-8B-32k-Reference | 32768 | 8 | 8 | AMP |
| Meta | Llama 3.1 Instruct (8B) Reference | meta-llama/Meta-Llama-3.1-8B-32k-Instruct-Reference | 32768 | 8 | 8 | AMP |
| Meta | Llama 3.1 (70B) Reference | meta-llama/Meta-Llama-3.1-70B-32k-Reference | 32768 | 1* | 1* | AMP |
| Meta | Llama 3.1 Instruct (70B) Reference | meta-llama/Meta-Llama-3.1-70B-32k-Instruct-Reference | 32768 | 1* | 1* | AMP |

1* -- Gradient accumulation of 8 steps is applied, so you effectively get a batch size of 8 (iteration time is correspondingly slower).
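
For example, a long-context job uses the `-32k` model string, and the 70B variants accept only batch size 1 (gradient accumulation then yields the effective batch size of 8). A sketch under the same SDK assumptions as above:

```python
from together import Together

client = Together()

# Long-context LoRA job; the 70B 32k variants accept only batch size 1,
# and gradient accumulation of 8 gives an effective batch size of 8.
job = client.fine_tuning.create(
    model="meta-llama/Meta-Llama-3.1-70B-32k-Instruct-Reference",
    training_file="file-abc123",  # illustrative file ID
    lora=True,
    batch_size=1,
)
```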

Full Fine-tuning

| Organization | Model Name | Model String for API | Context Length | Max Batch Size | Min Batch Size | Training Precision Type* |
|---|---|---|---|---|---|---|
| Google | google/gemma-3-27b-it | google/gemma-3-27b-it | 8192 | 16 | 8 | AMP |
| Google | google/gemma-3-27b-pt | google/gemma-3-27b-pt | 8192 | 16 | 8 | AMP |
| Google | google/gemma-3-12b-it | google/gemma-3-12b-it | 8192 | 24 | 8 | AMP |
| Google | google/gemma-3-12b-pt | google/gemma-3-12b-pt | 8192 | 24 | 8 | AMP |
| Google | google/gemma-3-4b-it | google/gemma-3-4b-it | 8192 | 40 | 8 | AMP |
| Google | google/gemma-3-4b-pt | google/gemma-3-4b-pt | 8192 | 40 | 8 | AMP |
| Google | google/gemma-3-1b-it | google/gemma-3-1b-it | 8192 | 48 | 8 | AMP |
| Google | google/gemma-3-1b-pt | google/gemma-3-1b-pt | 8192 | 48 | 8 | AMP |
| Deepseek | DeepSeek-R1-Distill-Llama-70B | deepseek-ai/DeepSeek-R1-Distill-Llama-70B | 8192 | 16 | 16 | bf16 |
| Deepseek | DeepSeek-R1-Distill-Qwen-14B | deepseek-ai/DeepSeek-R1-Distill-Qwen-14B | 8192 | 32 | 8 | AMP |
| Deepseek | DeepSeek-R1-Distill-Qwen-1.5B | deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B | 8192 | 48 | 8 | AMP |
| Meta | Llama 3.3 Instruct (70B) Reference | meta-llama/Llama-3.3-70B-Instruct-Reference | 8192 | 16 | 16 | bf16 |
| Meta | Llama 3.1 (8B) Reference | meta-llama/Meta-Llama-3.1-8B-Reference | 8192 | 24 | 8 | AMP |
| Meta | Llama 3.1 Instruct (8B) Reference | meta-llama/Meta-Llama-3.1-8B-Instruct-Reference | 8192 | 24 | 8 | AMP |
| Meta | Llama 3.1 (70B) Reference | meta-llama/Meta-Llama-3.1-70B-Reference | 8192 | 16 | 16 | bf16 |
| Meta | Llama 3.1 Instruct (70B) Reference | meta-llama/Meta-Llama-3.1-70B-Instruct-Reference | 8192 | 16 | 16 | bf16 |
| Meta | Llama 3 (8B) | meta-llama/Meta-Llama-3-8B | 8192 | 24 | 8 | AMP |
| Meta | Llama 3 Instruct (8B) | meta-llama/Meta-Llama-3-8B-Instruct | 8192 | 24 | 8 | AMP |
| Meta | Llama 3 Instruct (70B) | meta-llama/Meta-Llama-3-70B-Instruct | 8192 | 16 | 16 | bf16 |
| Meta | Llama-2 Chat (7B) | togethercomputer/llama-2-7b-chat | 4096 | 96 | 8 | AMP |
| Meta | CodeLlama (7B) | codellama/CodeLlama-7b-hf | 16384 | 32 | 8 | AMP |
| Mistral AI | Mixtral-8x7B (46.7B) | mistralai/Mixtral-8x7B-v0.1 | 32768 | 16 | 16 | bf16 |
| Mistral AI | Mixtral-8x7B Instruct (46.7B) | mistralai/Mixtral-8x7B-Instruct-v0.1 | 32768 | 16 | 16 | bf16 |
| Mistral AI | Mistral 7B Instruct v0.2 | mistralai/Mistral-7B-Instruct-v0.2 | 32768 | 16 | 8 | AMP |
| Mistral AI | Mistral 7B v0.1 | mistralai/Mistral-7B-v0.1 | 8192 | 64 | 8 | AMP |
| Qwen | Qwen2-1.5B | Qwen/Qwen2-1.5B | 8192 | 48 | 8 | AMP |
| Qwen | Qwen2-1.5B-Instruct | Qwen/Qwen2-1.5B-Instruct | 8192 | 48 | 8 | AMP |
| Qwen | Qwen2-7B | Qwen/Qwen2-7B | 8192 | 24 | 8 | AMP |
| Qwen | Qwen2-7B-Instruct | Qwen/Qwen2-7B-Instruct | 8192 | 24 | 8 | AMP |
| Teknium | OpenHermes 2.5 Mistral 7B | teknium/OpenHermes-2p5-Mistral-7B | 8192 | 64 | 8 | AMP |
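
Full fine-tuning uses the same call without the LoRA flag. A sketch under the same SDK assumptions; that full fine-tuning is the default when `lora` is not set is itself an assumption worth verifying against the API reference:

```python
from together import Together

client = Together()

# Full fine-tuning of Llama 3 (8B); per the table, batch sizes 8-24 at 8192 context
job = client.fine_tuning.create(
    model="meta-llama/Meta-Llama-3-8B",
    training_file="file-abc123",  # illustrative file ID
    batch_size=24,
    n_epochs=1,
    learning_rate=1e-5,
)

# Check on the job later
print(client.fine_tuning.retrieve(job.id).status)
```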


Request a model