Examples

Use these examples to learn best practices for using the inference API.

Image generation

You can use runwayml/stable-diffusion-v1-5, or any other image model listed in Inference, to generate images.

  1. Make a request using our API
curl https://api.together.xyz/inference \
-H "Content-Type: application/json" \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "User-Agent: YOUR_APP_NAME" \
-d '{"model": "runwayml/stable-diffusion-v1-5", "prompt": "Space robots", "n": 4, "steps": 20 }' \
-o output.json

This request queries the Stable Diffusion model to generate images from the prompt "Space robots". The n parameter specifies how many images the API returns. You should get back a response resembling the following:

{
  "prompt": ["Space robots"],
  "model": "runwayml/stable-diffusion-v1-5",
  "model_owner": "",
  "tags": {},
  "num_returns": 4,
  "args": {
    "model": "runwayml/stable-diffusion-v1-5",
    "prompt": "Space robots",
    "n": 4,
    "steps": 20
  },
  "output": {
    "choices": [
      {
        "image_base64": "/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAgGBgcGBQgHBwc..."
      },
      {
        "image_base64": "/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAgGBgcGBQgHBwc..."
      },
      {
        "image_base64": "/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAgGBgcGBQgHBwc..."
      },
      {
        "image_base64": "/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAgGBgcGBQgHBwc..."
      }
    ],
    "result_type": "image-model-inference"
  },
  "status": "finished",
  "subjobs": []
}

In this response, the generated images are available as base64-encoded strings at output['choices'][i]['image_base64'].

  2. Now you've generated your first images. You can extract them to JPEG files by running:
cat output.json | jq -r ".output.choices[0].image_base64" | base64 -d > img1.jpg
cat output.json | jq -r ".output.choices[1].image_base64" | base64 -d > img2.jpg
  3. You can also query the API with any HTTP client you like; for example, with Python:
import requests

endpoint = 'https://api.together.xyz/inference'

res = requests.post(endpoint, json={
    "model": "runwayml/stable-diffusion-v1-5",
    "prompt": "Space robots",
    "n": 4,
    "steps": 20
}, headers={
    "Authorization": "Bearer <YOUR_API_KEY>",
    "User-Agent": "<YOUR_APP_NAME>"
})

print(res.json())
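The response printed above has the same shape as the curl example, so the images can be extracted in Python instead of with jq. A minimal sketch, assuming the response structure shown earlier (the save_images helper name is illustrative, not part of the API):

```python
import base64

def save_images(response_json, prefix="img"):
    # Decode each base64-encoded image in the response and write it to
    # prefix1.jpg, prefix2.jpg, ... Returns the list of written paths.
    paths = []
    for i, choice in enumerate(response_json["output"]["choices"], start=1):
        path = f"{prefix}{i}.jpg"
        with open(path, "wb") as f:
            f.write(base64.b64decode(choice["image_base64"]))
        paths.append(path)
    return paths
```

You would call this as save_images(res.json()) after the request above completes.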

To query with an input image, use the image_base64 parameter. For best results, send 768x768 JPEG images, base64-encoded.

import requests
endpoint = 'https://api.together.xyz/inference'
res = requests.post(endpoint, json={
    "model": "runwayml/stable-diffusion-v1-5",
    "prompt": "Space robots",
    "request_type": "image-model-inference",
    "width": 512,
    "height": 512,
    "steps": 20,
    "n": 4,
    "image_base64": "<IMAGE_BASE64>",
}, headers={ 
    "Authorization": "Bearer <YOUR_API_KEY>",
})
print(res.json())
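The <IMAGE_BASE64> placeholder above is the base64 encoding of your input image. A small helper, using only the standard library, produces that string from a file on disk (the encode_image name is our own, not part of the API):

```python
import base64

def encode_image(path):
    # Read an image file and return its base64-encoded contents as a string,
    # suitable for the image_base64 request parameter.
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")
```

For example, you could pass "image_base64": encode_image("input.jpg") in the request payload above.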

Image generation with a negative prompt

You can use the Together Inference API to generate images from a text prompt using any supported model. This tutorial shows how to query the Stable Diffusion XL model with both a text prompt and a negative prompt, which tells the model what not to include in the generated image. By the end, you will have a Python script that queries the API to generate images.

  1. Set an environment variable called TOGETHER_API_KEY to the API key from your user settings.
  2. Create a text file named "generate_image.py".
  3. Copy the following code into the file:
import base64
import os

import requests

url = "https://api.together.xyz/inference"
model = "stabilityai/stable-diffusion-xl-base-1.0"
prompt = "Tokyo crossing"
negative_prompt = "people"

print(f"Model: {model}")
print(f"Prompt: {repr(prompt)}")
print(f"Negative Prompt: {repr(negative_prompt)}")

payload = {
    "model": model,
    "prompt": prompt,
    "results": 2,
    "width": 1024,
    "height": 1024,
    "steps": 20,
    "seed": 42,
    "negative_prompt": negative_prompt,
}
headers = {
    "accept": "application/json",
    "content-type": "application/json",
    "Authorization": f"Bearer {os.environ['TOGETHER_API_KEY']}"
}

response = requests.post(url, json=payload, headers=headers)
response.raise_for_status()

response_json = response.json()

# Save the first image
image = response_json["output"]["choices"][0]
with open("tokyocrossing.png", "wb") as f:
    f.write(base64.b64decode(image["image_base64"]))

  4. Run the script:
python3 generate_image.py
  • Replace python3 with the appropriate Python interpreter for your platform.

The output will be written to a file called tokyocrossing.png in the current working directory.

With Stable Diffusion XL 1.0, you should have just generated an image of a crosswalk in Tokyo without any people in it.
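The payload above requests two results, but the script saves only the first one. A small extension, sketched under the assumption that the response has the same shape as shown earlier (the save_all_choices name is our own), writes every returned image to its own file:

```python
import base64

def save_all_choices(choices, stem="tokyocrossing"):
    # Write each base64-encoded choice to <stem>_<i>.png and return the paths.
    paths = []
    for i, choice in enumerate(choices, start=1):
        path = f"{stem}_{i}.png"
        with open(path, "wb") as f:
            f.write(base64.b64decode(choice["image_base64"]))
        paths.append(path)
    return paths
```

In the script above, you could replace the final block with save_all_choices(response_json["output"]["choices"]).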

Further reading

Prompt engineering is a relatively new discipline for developing and optimizing prompts to efficiently use language models (LMs) for a wide variety of applications and research topics. The Prompt Engineering Guide is a great introduction to the subject.