Fine-tuning Models

A list of all the models available for fine-tuning.

The following models are available for use with our fine-tuning API. Get started with fine-tuning a model!

  • Training Precision Type indicates the precision used during training for each model.
    • AMP (Automatic Mixed Precision): AMP speeds up training and reduces memory usage compared to float32 while preserving convergence behavior. Learn more about AMP in this PyTorch blog.
    • bf16 (bfloat16): All weights use bf16. Some large models on our platform use full bf16 training for better memory usage and training speed.
  • Long-context fine-tuning of Llama 3.1 (8B) Reference, Llama 3.1 Instruct (8B) Reference, Llama 3.1 (70B) Reference, and Llama 3.1 Instruct (70B) Reference at context sizes of 32K-131K is supported only with the LoRA method.
  • For Llama 3.1 (405B) fine-tuning, please contact us.

LoRA Fine-tuning

| Organization | Model Name | Model String for API | Context Length | Max Batch Size | Min Batch Size | Training Precision Type* |
|---|---|---|---|---|---|---|
| Meta | Llama 3.3 Instruct (70B) Reference | meta-llama/Llama-3.3-70B-Instruct-Reference | 8192 | 8 | 8 | AMP |
| Meta | Llama 3.1 (8B) Reference | meta-llama/Meta-Llama-3.1-8B-Reference | 8192 | 32 | 8 | AMP |
| Meta | Llama 3.1 Instruct (8B) Reference | meta-llama/Meta-Llama-3.1-8B-Instruct-Reference | 8192 | 32 | 8 | AMP |
| Meta | Llama 3.1 (70B) Reference | meta-llama/Meta-Llama-3.1-70B-Reference | 8192 | 8 | 8 | AMP |
| Meta | Llama 3.1 Instruct (70B) Reference | meta-llama/Meta-Llama-3.1-70B-Instruct-Reference | 8192 | 8 | 8 | AMP |
| Meta | Llama 3 (8B) | meta-llama/Meta-Llama-3-8B | 8192 | 32 | 8 | AMP |
| Meta | Llama 3 Instruct (8B) | meta-llama/Meta-Llama-3-8B-Instruct | 8192 | 32 | 8 | AMP |
| Meta | Llama 3 (70B) | meta-llama/Meta-Llama-3-70B | 8192 | 8 | 8 | AMP |
| Meta | Llama 3 Instruct (70B) | meta-llama/Meta-Llama-3-70B-Instruct | 8192 | 8 | 8 | AMP |
| Meta | Llama-2 (7B) | togethercomputer/llama-2-7b | 4096 | 128 | 8 | AMP |
| Meta | Llama-2 Chat (7B) | togethercomputer/llama-2-7b-chat | 4096 | 128 | 8 | AMP |
| Meta | Llama-2 (13B) | togethercomputer/llama-2-13b | 4096 | 96 | 8 | AMP |
| Meta | Llama-2 Chat (13B) | togethercomputer/llama-2-13b-chat | 4096 | 96 | 8 | AMP |
| Meta | Llama-2 (70B) | togethercomputer/llama-2-70b | 4096 | 48 | 8 | AMP |
| Meta | Llama-2 Chat (70B) | togethercomputer/llama-2-70b-chat | 4096 | 48 | 8 | AMP |
| Meta | CodeLlama (7B) | codellama/CodeLlama-7b-hf | 16384 | 32 | 8 | AMP |
| Meta | CodeLlama Python (7B) | codellama/CodeLlama-7b-Python-hf | 16384 | 32 | 8 | AMP |
| Meta | CodeLlama Instruct (7B) | codellama/CodeLlama-7b-Instruct-hf | 16384 | 32 | 8 | AMP |
| Meta | CodeLlama (13B) | codellama/CodeLlama-13b-hf | 16384 | 32 | 8 | AMP |
| Meta | CodeLlama Python (13B) | codellama/CodeLlama-13b-Python-hf | 16384 | 32 | 8 | AMP |
| Meta | CodeLlama Instruct (13B) | codellama/CodeLlama-13b-Instruct-hf | 16384 | 32 | 8 | AMP |
| Mistral AI | Mixtral-8x7B (46.7B) | mistralai/Mixtral-8x7B-v0.1 | 32768 | 16 | 8 | AMP |
| Mistral AI | Mixtral-8x7B Instruct (46.7B) | mistralai/Mixtral-8x7B-Instruct-v0.1 | 32768 | 16 | 8 | AMP |
| NousResearch | Nous Hermes 2 - Mixtral 8x7B-DPO (46.7B) | NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO | 32768 | 16 | 8 | AMP |
| NousResearch | Nous Hermes 2 - Mixtral 8x7B-SFT (46.7B) | NousResearch/Nous-Hermes-2-Mixtral-8x7B-SFT | 32768 | 16 | 8 | AMP |
| Mistral AI | Mistral 7B Instruct v0.2 | mistralai/Mistral-7B-Instruct-v0.2 | 32768 | 16 | 8 | AMP |
| Mistral AI | Mistral 7B Instruct v0.1 | mistralai/Mistral-7B-Instruct-v0.1 | 8192 | 64 | 8 | AMP |
| Mistral AI | Mistral 7B v0.1 | mistralai/Mistral-7B-v0.1 | 8192 | 64 | 8 | AMP |
| Qwen | Qwen2-1.5B | Qwen/Qwen2-1.5B | 8192 | 32 | 8 | AMP |
| Qwen | Qwen2-1.5B-Instruct | Qwen/Qwen2-1.5B-Instruct | 8192 | 32 | 8 | AMP |
| Qwen | Qwen2-7B | Qwen/Qwen2-7B | 8192 | 32 | 8 | AMP |
| Qwen | Qwen2-7B-Instruct | Qwen/Qwen2-7B-Instruct | 8192 | 32 | 8 | AMP |
| Qwen | Qwen2-72B | Qwen/Qwen2-72B | 8192 | 8 | 8 | AMP |
| Qwen | Qwen2-72B-Instruct | Qwen/Qwen2-72B-Instruct | 8192 | 8 | 8 | AMP |
| Teknium | OpenHermes 2.5 Mistral 7B | teknium/OpenHermes-2p5-Mistral-7B | 8192 | 64 | 8 | AMP |
| Hugging Face H4 | Zephyr 7B ß | HuggingFaceH4/zephyr-7b-beta | 8192 | 96 | 8 | AMP |
| Upstage | SOLAR Instruct v1 (11B) | upstage/SOLAR-10.7B-Instruct-v1.0 | 4096 | 32 | 8 | AMP |
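The Model String for API column is what you pass as the `model` parameter when creating a fine-tuning job. As a minimal sketch, the snippet below assembles illustrative request parameters for a LoRA job on Llama 3.1 Instruct (8B); the training-file ID and hyperparameter values are placeholders, and the submission call (shown commented out) assumes the Together Python SDK's `client.fine_tuning.create` method:

```python
import os


def build_lora_job_params(training_file: str) -> dict:
    """Assemble illustrative request parameters for a LoRA fine-tuning job."""
    return {
        "training_file": training_file,  # ID returned when you upload a file
        # Model string copied from the LoRA Fine-tuning table above:
        "model": "meta-llama/Meta-Llama-3.1-8B-Instruct-Reference",
        "lora": True,        # select the LoRA method
        "batch_size": 32,    # within this model's min/max of 8-32 (see table)
        "n_epochs": 3,       # placeholder hyperparameter
    }


params = build_lora_job_params("file-xxxxxxxx")  # placeholder file ID

# With the SDK installed and TOGETHER_API_KEY set, submission looks roughly like:
# from together import Together
# client = Together(api_key=os.environ["TOGETHER_API_KEY"])
# job = client.fine_tuning.create(**params)
```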

LoRA Long-context Fine-tuning

| Organization | Model Name | Model String for API | Context Length | Max Batch Size | Min Batch Size | Training Precision Type* |
|---|---|---|---|---|---|---|
| Meta | Llama 3.3 Instruct (70B) Reference | meta-llama/Llama-3.3-70B-32k-Instruct-Reference | 32768 | 1* | 1* | AMP |
| Meta | Llama 3.1 (8B) Reference | meta-llama/Meta-Llama-3.1-8B-32k-Reference | 32768 | 8 | 8 | AMP |
| Meta | Llama 3.1 Instruct (8B) Reference | meta-llama/Meta-Llama-3.1-8B-32k-Instruct-Reference | 32768 | 8 | 8 | AMP |
| Meta | Llama 3.1 (70B) Reference | meta-llama/Meta-Llama-3.1-70B-32k-Reference | 32768 | 1* | 1* | AMP |
| Meta | Llama 3.1 Instruct (70B) Reference | meta-llama/Meta-Llama-3.1-70B-32k-Instruct-Reference | 32768 | 1* | 1* | AMP |
| Meta | Llama 3 (8B) | togethercomputer/Llama-3-8b-32k | 32768 | 16 | 8 | AMP |
| Meta | Llama-2-7B-32K (7B) | togethercomputer/LLaMA-2-7B-32K | 32768 | 16 | 8 | AMP |
| Meta | Llama-2-7B-32K-Instruct (7B) | togethercomputer/LLaMA-2-7B-32K-Instruct | 32768 | 16 | 8 | AMP |

1* -- Gradient accumulation of 8 is used, so you effectively get a batch size of 8 (at the cost of slower iterations).
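The arithmetic behind that footnote: each optimizer step accumulates gradients over 8 micro-batches of size 1, so every weight update covers 8 examples even though only 1 fits per forward pass:

```python
# Effective batch size under gradient accumulation:
# gradients from `accumulation_steps` micro-batches are summed
# before a single optimizer step is taken.
micro_batch_size = 1    # per-iteration batch size for the 70B long-context models
accumulation_steps = 8  # fixed by the platform, per the footnote above

effective_batch_size = micro_batch_size * accumulation_steps
print(effective_batch_size)  # 8
```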

Full Fine-tuning

| Organization | Model Name | Model String for API | Context Length | Max Batch Size | Min Batch Size | Training Precision Type* |
|---|---|---|---|---|---|---|
| Meta | Llama 3.3 Instruct (70B) Reference | meta-llama/Llama-3.3-70B-Instruct-Reference | 8192 | 16 | 16 | bf16 |
| Meta | Llama 3.1 (8B) Reference | meta-llama/Meta-Llama-3.1-8B-Reference | 8192 | 24 | 8 | AMP |
| Meta | Llama 3.1 Instruct (8B) Reference | meta-llama/Meta-Llama-3.1-8B-Instruct-Reference | 8192 | 24 | 8 | AMP |
| Meta | Llama 3.1 (70B) Reference | meta-llama/Meta-Llama-3.1-70B-Reference | 8192 | 16 | 16 | bf16 |
| Meta | Llama 3.1 Instruct (70B) Reference | meta-llama/Meta-Llama-3.1-70B-Instruct-Reference | 8192 | 16 | 16 | bf16 |
| Meta | Llama 3 (8B) | meta-llama/Meta-Llama-3-8B | 8192 | 24 | 8 | AMP |
| Meta | Llama 3 Instruct (8B) | meta-llama/Meta-Llama-3-8B-Instruct | 8192 | 24 | 8 | AMP |
| Meta | Llama 3 (70B) | meta-llama/Meta-Llama-3-70B | 8192 | 16 | 16 | bf16 |
| Meta | Llama 3 Instruct (70B) | meta-llama/Meta-Llama-3-70B-Instruct | 8192 | 16 | 16 | bf16 |
| Together | Llama-2-7B-32K (7B) | togethercomputer/LLaMA-2-7B-32K | 32768 | 16 | 8 | AMP |
| Together | Llama-2-7B-32K-Instruct (7B) | togethercomputer/Llama-2-7B-32K-Instruct | 32768 | 16 | 8 | AMP |
| Meta | Llama-2 (7B) | togethercomputer/llama-2-7b | 4096 | 96 | 8 | AMP |
| Meta | Llama-2 Chat (7B) | togethercomputer/llama-2-7b-chat | 4096 | 96 | 8 | AMP |
| Meta | Llama-2 (13B) | togethercomputer/llama-2-13b | 4096 | 40 | 8 | AMP |
| Meta | Llama-2 Chat (13B) | togethercomputer/llama-2-13b-chat | 4096 | 40 | 8 | AMP |
| Meta | Llama-2 (70B) | togethercomputer/llama-2-70b | 4096 | 64 | 16 | bf16 |
| Meta | Llama-2 Chat (70B) | togethercomputer/llama-2-70b-chat | 4096 | 64 | 16 | bf16 |
| Meta | CodeLlama (7B) | codellama/CodeLlama-7b-hf | 16384 | 32 | 8 | AMP |
| Meta | CodeLlama Python (7B) | codellama/CodeLlama-7b-Python-hf | 16384 | 32 | 8 | AMP |
| Meta | CodeLlama Instruct (7B) | codellama/CodeLlama-7b-Instruct-hf | 16384 | 32 | 8 | AMP |
| Meta | CodeLlama (13B) | codellama/CodeLlama-13b-hf | 16384 | 16 | 8 | AMP |
| Meta | CodeLlama Python (13B) | codellama/CodeLlama-13b-Python-hf | 16384 | 16 | 8 | AMP |
| Meta | CodeLlama Instruct (13B) | codellama/CodeLlama-13b-Instruct-hf | 16384 | 16 | 8 | AMP |
| Mistral AI | Mixtral-8x7B (46.7B) | mistralai/Mixtral-8x7B-v0.1 | 32768 | 16 | 16 | bf16 |
| Mistral AI | Mixtral-8x7B Instruct (46.7B) | mistralai/Mixtral-8x7B-Instruct-v0.1 | 32768 | 16 | 16 | bf16 |
| NousResearch | Nous Hermes 2 - Mixtral 8x7B-DPO (46.7B) | NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO | 32768 | 16 | 16 | bf16 |
| NousResearch | Nous Hermes 2 - Mixtral 8x7B-SFT (46.7B) | NousResearch/Nous-Hermes-2-Mixtral-8x7B-SFT | 32768 | 16 | 16 | bf16 |
| Mistral AI | Mistral 7B Instruct v0.2 | mistralai/Mistral-7B-Instruct-v0.2 | 32768 | 16 | 8 | AMP |
| Mistral AI | Mistral 7B Instruct v0.1 | mistralai/Mistral-7B-Instruct-v0.1 | 8192 | 64 | 8 | AMP |
| Mistral AI | Mistral 7B v0.1 | mistralai/Mistral-7B-v0.1 | 8192 | 64 | 8 | AMP |
| Qwen | Qwen2-1.5B | Qwen/Qwen2-1.5B | 8192 | 32 | 8 | AMP |
| Qwen | Qwen2-1.5B-Instruct | Qwen/Qwen2-1.5B-Instruct | 8192 | 32 | 8 | AMP |
| Qwen | Qwen2-7B | Qwen/Qwen2-7B | 8192 | 24 | 8 | AMP |
| Qwen | Qwen2-7B-Instruct | Qwen/Qwen2-7B-Instruct | 8192 | 24 | 8 | AMP |
| Teknium | OpenHermes 2.5 Mistral 7B | teknium/OpenHermes-2p5-Mistral-7B | 8192 | 64 | 8 | AMP |
| Hugging Face H4 | Zephyr 7B ß | HuggingFaceH4/zephyr-7b-beta | 8192 | 64 | 8 | AMP |
| Upstage | SOLAR Instruct v1 (11B) | upstage/SOLAR-10.7B-Instruct-v1.0 | 4096 | 32 | 8 | AMP |
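A requested batch size must fall between the Min Batch Size and Max Batch Size columns for the chosen model. A minimal sketch of such a pre-submission check, with a few illustrative entries copied from the Full Fine-tuning table (extend the mapping with the rows you need):

```python
# (min_batch_size, max_batch_size) per model string, for full fine-tuning.
# Entries below are copied from the Full Fine-tuning table above.
FULL_FT_BATCH_LIMITS = {
    "meta-llama/Meta-Llama-3.1-8B-Reference": (8, 24),
    "meta-llama/Meta-Llama-3.1-70B-Reference": (16, 16),
    "mistralai/Mixtral-8x7B-v0.1": (16, 16),
}


def check_batch_size(model: str, batch_size: int) -> bool:
    """Return True if batch_size is within the allowed range for the model."""
    low, high = FULL_FT_BATCH_LIMITS[model]
    return low <= batch_size <= high
```

Note that several large models (e.g. the 70B variants) have identical min and max values, so only that exact batch size is accepted.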


Request a model