Quickstart: Using Hugging Face Inference with Together
This guide walks you through using Together AI models with Hugging Face Inference, providing a concise path for developers to integrate Together AI's inference capabilities via the Hugging Face ecosystem.
Authentication and Billing
When using Together AI through Hugging Face, you have two options for authentication:
- Direct Requests: Add your Together AI API key in your Hugging Face user account settings. In this mode, inference requests are sent directly to Together AI, and billing is handled by your Together AI account.
- Routed Requests: If you don't configure a Together AI API key, your requests are routed through Hugging Face. In this case, you can authenticate with a Hugging Face token, and billing for routed requests is applied to your Hugging Face account at standard provider API rates. You don't need a Together AI account for this; your Hugging Face account is enough (see the sketch below).
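To make the two modes concrete, here is a minimal sketch using the huggingface_hub Python client (installation is covered below); the placeholder keys are illustrative:
from huggingface_hub import InferenceClient
# Routed request: pass your Hugging Face token (hf_...);
# usage is billed to your Hugging Face account at provider rates.
client = InferenceClient(provider="together", api_key="hf_xxxxxxxxxxxxxxxxxxxxxxxx")
# Direct request: pass your own Together AI API key instead;
# the call is billed to your Together AI account.
client = InferenceClient(provider="together", api_key="xxxxxxxxxxxxxxxxxxxxxxxx")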
To add a Together AI API key to your Hugging Face settings, follow these steps:
- Go to your Hugging Face user account settings.
- Locate the "Inference Providers" section.
- Add your API keys for different providers, including Together AI.
- You can also set your preferred provider order, which will influence the display order in model widgets and code snippets.
You can also search for all Together AI models on the Hub and try them out directly via the model page widget.
Usage Examples
The examples below demonstrate how to interact with various models using Python and JavaScript.
First, ensure you have the required client library installed: the huggingface_hub Python library (version 0.29.0 or later) or the @huggingface/inference JavaScript library.
pip install "huggingface_hub>=0.29.0"
npm install @huggingface/inference
1. Text Generation - LLMs
a. Chat Completion with Hugging Face Hub library
from huggingface_hub import InferenceClient
# Initialize the InferenceClient with together as the provider
client = InferenceClient(
provider="together",
api_key="xxxxxxxxxxxxxxxxxxxxxxxx" # Replace with your API key (HF or custom)
)
# Define the chat messages
messages = [
{
"role": "user",
"content": "What is the capital of France?"
}
]
# Generate a chat completion
completion = client.chat.completions.create(
model="deepseek-ai/DeepSeek-R1",
messages=messages,
max_tokens=500
)
# Print the response
print(completion.choices[0].message)
import { HfInference } from "@huggingface/inference";
// Initialize the HfInference client with your API key
const client = new HfInference("xxxxxxxxxxxxxxxxxxxxxxxx");
// Generate a chat completion
const chatCompletion = await client.chatCompletion({
model: "deepseek-ai/DeepSeek-R1", // Replace with your desired model
messages: [
{
role: "user",
content: "What is the capital of France?"
}
],
provider: "together", // Replace with together's provider name
max_tokens: 500
});
// Log the response
console.log(chatCompletion.choices[0].message);
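The huggingface_hub client also supports streaming, so you can print tokens as they arrive instead of waiting for the full response. A minimal Python sketch, reusing the client and messages defined above:
# Stream the response token by token
stream = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1",
    messages=messages,
    max_tokens=500,
    stream=True
)
for chunk in stream:
    # Each chunk carries an incremental delta; guard against empty content
    print(chunk.choices[0].delta.content or "", end="")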
You can swap in any compatible LLM from Together AI; here's a handy URL to find the full list.
b. OpenAI client library
You can also call inference providers via the OpenAI Python client. You will need to specify the base_url parameter when constructing the client and the model parameter in each call.
The easiest way is to go to a model's page on the Hub and copy the snippet.
from openai import OpenAI
client = OpenAI(
base_url="https://router.huggingface.co/together",
api_key="hf_xxxxxxxxxxxxxxxxxxxxxxxx" #together or Hugging Face api key
)
messages = [
{
"role": "user",
"content": "What is the capital of France?"
}
]
completion = client.chat.completions.create(
model="deepseek-ai/DeepSeek-R1",
messages=messages,
max_tokens=500,
)
print(completion.choices[0].message)
2. Text-to-Image Generation
from huggingface_hub import InferenceClient
# Initialize the InferenceClient with together as the provider
client = InferenceClient(
provider="together", # Replace with together's provider name
api_key="xxxxxxxxxxxxxxxxxxxxxxxx" # Replace with your API key
)
# Generate an image from text
image = client.text_to_image(
"Bob Marley in the style of a painting by Johannes Vermeer",
model="black-forest-labs/FLUX.1-schnell" # Replace with your desired model
)
# `image` is a PIL.Image object
image.show()
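Because the result is a standard PIL.Image object, you can also persist it to disk; the filename here is just an example:
# Save the generated image; the format is inferred from the file extension
image.save("generated_image.png")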
import { HfInference } from "@huggingface/inference";
// Initialize the HfInference client with your API key
const client = new HfInference("xxxxxxxxxxxxxxxxxxxxxxxx");
// Generate an image from text
const generatedImage = await client.textToImage({
model: "black-forest-labs/FLUX.1-schnell", // Replace with your desired model
inputs: "Bob Marley in the style of a painting by Johannes Vermeer",
provider: "together", // Replace with together's provider name
max_tokens: 500
});
As with LLMs, you can use any compatible text-to-image model from Together AI; the list is available here.
We'll continue to add more models and more ways to try them out!