# Deprecations

## Overview
We regularly update our platform with the latest and most powerful open-source models. This document outlines our deprecation policy and provides information on migrating from deprecated models to newer or alternate versions.
## Deprecation Policy

Model Type | Deprecation Notice | Notes |
---|---|---|
Preview Model | 3 days of notice, after 30 days | Clearly marked with a "Preview" tag in the docs |
Serverless Endpoint | 2 or 3 weeks* | |
On-Demand Dedicated Endpoint | 2 or 3 weeks* | |

*Depends on usage and whether there's an available newer version of the model.
- Users of models scheduled for deprecation will be notified by email.
- All changes will be reflected on this page.
- Each deprecated model will have a specified removal date.
- After the removal date, the model will no longer be queryable via its serverless endpoint but options to migrate will be available as described below.
## Migration Options

When a model is deprecated on our serverless platform, users have three options:

1. On-Demand Dedicated Endpoint (if supported):
   - Reserved solely for the user; users choose the underlying hardware.
   - Charged on a price-per-minute basis.
   - Endpoints can be dynamically spun up and down.
2. Monthly Reserved Dedicated Endpoint:
   - Reserved solely for the user.
   - Charged on a month-by-month basis.
   - Can be requested via this form.
3. Migrate to a newer serverless model:
   - Switch to an updated model on the serverless platform.
## Migration Steps

1. Review the deprecation table below to find your current model.
2. Check whether on-demand dedicated endpoints are supported for your model.
3. Decide on your preferred migration option.
4. If choosing a new serverless model, test your application thoroughly with the new model before fully migrating.
5. Update your API calls to use the new model or dedicated endpoint.
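The last step often amounts to changing the `model` field in your request bodies. The sketch below shows one way to do this centrally, via a small helper that rewrites deprecated model IDs before a request is sent. The deprecated-to-replacement mapping is illustrative only, not an official recommendation; check the current model list for the alternatives actually available to you.

```python
# Hypothetical migration helper: rewrites deprecated model IDs in a
# chat-completion request payload before it is sent to the API.
# The mapping below is an illustrative assumption, not an official
# recommendation -- consult the docs for suggested replacements.

DEPRECATED_REPLACEMENTS = {
    # deprecated model ID -> assumed newer serverless alternative
    "Qwen/Qwen1.5-72B-Chat": "Qwen/Qwen2-72B-Instruct",
    "meta-llama/Llama-2-70b-chat-hf": "meta-llama/Meta-Llama-3-70B-Instruct",
}

def migrate_payload(payload: dict) -> dict:
    """Return a copy of the request payload with any deprecated
    model ID replaced by its suggested alternative."""
    model = payload.get("model")
    if model in DEPRECATED_REPLACEMENTS:
        payload = {**payload, "model": DEPRECATED_REPLACEMENTS[model]}
    return payload

# Example: an old request body that still names a deprecated model.
old_request = {
    "model": "meta-llama/Llama-2-70b-chat-hf",
    "messages": [{"role": "user", "content": "Hello"}],
}
new_request = migrate_payload(old_request)
print(new_request["model"])  # prints the replacement model ID
```

Routing all outgoing requests through a helper like this keeps the migration in one place, so future deprecations only require updating the mapping rather than every call site.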
## Deprecation History
All deprecations are listed below, with the most recent deprecations at the top.
Removal Date | Model | Supported by on-demand dedicated endpoints |
---|---|---|
2024-10-29 | Qwen/Qwen1.5-72B-Chat | No |
2024-10-29 | Qwen/Qwen1.5-110B-Chat | No |
2024-10-07 | NousResearch/Nous-Hermes-2-Yi-34B | No |
2024-10-07 | NousResearch/Hermes-3-Llama-3.1-405B-Turbo | No |
2024-08-22 | NousResearch/Nous-Hermes-2-Mistral-7B-DPO | Yes |
2024-08-22 | SG161222/Realistic_Vision_V3.0_VAE | No |
2024-08-22 | meta-llama/Llama-2-70b-chat-hf | No |
2024-08-22 | mistralai/Mixtral-8x22B | No |
2024-08-22 | Phind/Phind-CodeLlama-34B-v2 | No |
2024-08-22 | meta-llama/Meta-Llama-3-70B | Yes |
2024-08-22 | teknium/OpenHermes-2p5-Mistral-7B | Yes |
2024-08-22 | openchat/openchat-3.5-1210 | Yes |
2024-08-22 | WizardLM/WizardCoder-Python-34B-V1.0 | No |
2024-08-22 | NousResearch/Nous-Hermes-2-Mixtral-8x7B-SFT | Yes |
2024-08-22 | NousResearch/Nous-Hermes-Llama2-13b | Yes |
2024-08-22 | zero-one-ai/Yi-34B-Chat | No |
2024-08-22 | codellama/CodeLlama-34b-Instruct-hf | No |
2024-08-22 | codellama/CodeLlama-34b-Python-hf | No |
2024-08-22 | teknium/OpenHermes-2-Mistral-7B | Yes |
2024-08-22 | Qwen/Qwen1.5-14B-Chat | Yes |
2024-08-22 | stabilityai/stable-diffusion-2-1 | No |
2024-08-22 | meta-llama/Llama-3-8b-hf | Yes |
2024-08-22 | prompthero/openjourney | No |
2024-08-22 | runwayml/stable-diffusion-v1-5 | No |
2024-08-22 | wavymulder/Analog-Diffusion | No |
2024-08-22 | Snowflake/snowflake-arctic-instruct | No |
2024-08-22 | deepseek-ai/deepseek-coder-33b-instruct | No |
2024-08-22 | Qwen/Qwen1.5-7B-Chat | Yes |
2024-08-22 | Qwen/Qwen1.5-32B-Chat | No |
2024-08-22 | cognitivecomputations/dolphin-2.5-mixtral-8x7b | No |
2024-08-22 | garage-bAInd/Platypus2-70B-instruct | No |
2024-08-22 | google/gemma-7b-it | Yes |
2024-08-22 | meta-llama/Llama-2-7b-chat-hf | Yes |
2024-08-22 | Qwen/Qwen1.5-32B | No |
2024-08-22 | Open-Orca/Mistral-7B-OpenOrca | Yes |
2024-08-22 | codellama/CodeLlama-13b-Instruct-hf | Yes |
2024-08-22 | NousResearch/Nous-Capybara-7B-V1p9 | Yes |
2024-08-22 | lmsys/vicuna-13b-v1.5 | Yes |
2024-08-22 | Undi95/ReMM-SLERP-L2-13B | Yes |
2024-08-22 | Undi95/Toppy-M-7B | Yes |
2024-08-22 | meta-llama/Llama-2-13b-hf | No |
2024-08-22 | codellama/CodeLlama-70b-Instruct-hf | No |
2024-08-22 | snorkelai/Snorkel-Mistral-PairRM-DPO | Yes |
2024-08-22 | togethercomputer/LLaMA-2-7B-32K-Instruct | Yes |
2024-08-22 | Austism/chronos-hermes-13b | Yes |
2024-08-22 | Qwen/Qwen1.5-72B | No |
2024-08-22 | zero-one-ai/Yi-34B | No |
2024-08-22 | codellama/CodeLlama-7b-Instruct-hf | Yes |
2024-08-22 | togethercomputer/evo-1-131k-base | No |
2024-08-22 | codellama/CodeLlama-70b-hf | No |
2024-08-22 | WizardLM/WizardLM-13B-V1.2 | Yes |
2024-08-22 | meta-llama/Llama-2-7b-hf | No |
2024-08-22 | google/gemma-7b | Yes |
2024-08-22 | Qwen/Qwen1.5-1.8B-Chat | Yes |
2024-08-22 | Qwen/Qwen1.5-4B-Chat | Yes |
2024-08-22 | lmsys/vicuna-7b-v1.5 | Yes |
2024-08-22 | zero-one-ai/Yi-6B | Yes |
2024-08-22 | Nexusflow/NexusRaven-V2-13B | Yes |
2024-08-22 | google/gemma-2b | Yes |
2024-08-22 | Qwen/Qwen1.5-7B | Yes |
2024-08-22 | NousResearch/Nous-Hermes-llama-2-7b | Yes |
2024-08-22 | togethercomputer/alpaca-7b | Yes |
2024-08-22 | Qwen/Qwen1.5-14B | Yes |
2024-08-22 | codellama/CodeLlama-70b-Python-hf | No |
2024-08-22 | Qwen/Qwen1.5-4B | Yes |
2024-08-22 | togethercomputer/StripedHyena-Hessian-7B | No |
2024-08-22 | allenai/OLMo-7B-Instruct | No |
2024-08-22 | togethercomputer/RedPajama-INCITE-7B-Instruct | No |
2024-08-22 | togethercomputer/LLaMA-2-7B-32K | Yes |
2024-08-22 | togethercomputer/RedPajama-INCITE-7B-Base | No |
2024-08-22 | Qwen/Qwen1.5-0.5B-Chat | Yes |
2024-08-22 | microsoft/phi-2 | Yes |
2024-08-22 | Qwen/Qwen1.5-0.5B | Yes |
2024-08-22 | togethercomputer/RedPajama-INCITE-7B-Chat | No |
2024-08-22 | togethercomputer/RedPajama-INCITE-Chat-3B-v1 | No |
2024-08-22 | togethercomputer/GPT-JT-Moderation-6B | No |
2024-08-22 | Qwen/Qwen1.5-1.8B | Yes |
2024-08-22 | togethercomputer/RedPajama-INCITE-Instruct-3B-v1 | No |
2024-08-22 | togethercomputer/RedPajama-INCITE-Base-3B-v1 | No |
2024-08-22 | WhereIsAI/UAE-Large-V1 | No |
2024-08-22 | allenai/OLMo-7B | No |
2024-08-22 | togethercomputer/evo-1-8k-base | No |
2024-08-22 | WizardLM/WizardCoder-15B-V1.0 | No |
2024-08-22 | codellama/CodeLlama-13b-Python-hf | Yes |
2024-08-22 | allenai-olmo-7b-twin-2t | No |
2024-08-22 | sentence-transformers/msmarco-bert-base-dot-v5 | No |
2024-08-22 | codellama/CodeLlama-7b-Python-hf | Yes |
2024-08-22 | hazyresearch/M2-BERT-2k-Retrieval-Encoder-V1 | No |
2024-08-22 | bert-base-uncased | No |
2024-08-22 | mistralai/Mistral-7B-Instruct-v0.1-json | No |
2024-08-22 | mistralai/Mistral-7B-Instruct-v0.1-tools | No |
2024-08-22 | togethercomputer-codellama-34b-instruct-json | No |
2024-08-22 | togethercomputer-codellama-34b-instruct-tools | No |
Notes on model support:
- Models marked "Yes" in the on-demand dedicated endpoint support column can be spun up as dedicated endpoints with customizable hardware.
- Models marked "No" are not available as on-demand dedicated endpoints and require migration to a different model or a monthly reserved dedicated endpoint.
## Recommended Actions
- Regularly check this page for updates on model deprecations.
- Plan your migration well in advance of the removal date to ensure a smooth transition.
- If you have any questions or need assistance with migration, please contact our support team.
For the most up-to-date information on model availability, support, and recommended alternatives, please check our API documentation or contact our support team.