POST /videos

Together AI SDK (Python)
from together import Together
import os

client = Together(
    api_key=os.environ.get("TOGETHER_API_KEY"),
)

# Create a video generation job.
# Note: the model below is a text/instruct model; substitute a video
# generation model from the Together models list for this request.
response = client.videos.create(
    model="meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo",
    prompt="A cartoon of an astronaut riding a horse on the moon",
)

print(response.id)

Example response:

{
  "id": "<string>"
}

Authorizations

Authorization
string
header
default: default
required

Bearer authentication header of the form Bearer <token>, where <token> is your auth token.
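
For reference, a minimal sketch of supplying this header without the SDK, using requests. The base URL below is an assumption (only the /videos path is documented on this page), and the model name is a placeholder:

import os
import requests

# Sketch only: the base URL is assumed; the /videos path comes from this page.
url = "https://api.together.xyz/v1/videos"
headers = {
    "Authorization": f"Bearer {os.environ['TOGETHER_API_KEY']}",
    "Content-Type": "application/json",
}
payload = {
    "model": "<video-model-name>",  # placeholder for a video generation model
    "prompt": "A cartoon of an astronaut riding a horse on the moon",
}
resp = requests.post(url, headers=headers, json=payload)
resp.raise_for_status()
print(resp.json()["id"])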

Body

application/json

Parameters for creating a new video generation job. A combined example request using these parameters appears after the parameter list below.

model
string
required

The model to be used for the video creation request.

prompt
string

Text prompt that describes the video to generate.

Required string length: 1 - 32000

height
integer

Height of the output video in pixels.

width
integer

Width of the output video in pixels.

seconds
string

Clip duration in seconds.

fps
integer

Frames per second. Defaults to 24.

steps
integer

The number of denoising steps the model performs during video generation. More steps typically result in higher quality output but require longer processing time.

Required range: 10 <= x <= 50

seed
integer

Seed used to initialize video generation. Using the same seed allows deterministic video generation. If not provided, a random seed is generated for each request.

guidance_scale
integer

Controls how closely the video generation follows your prompt. Higher values make the model adhere more strictly to your text description, while lower values allow more creative freedom. guidance_scale affects both visual content and temporal consistency. The recommended range is 6.0-10.0 for most video models; values above 12 may cause over-guidance artifacts or unnatural motion patterns.

output_format
enum<string>

Specifies the format of the output video. Defaults to MP4.

Available options: MP4, WEBM

output_quality
integer

Compression quality. Defaults to 20.

negative_prompt
string

Similar to prompt, but specifies what to avoid instead of what to include.

frame_images
object[]

Array of images to guide video generation, similar to keyframes.

Example:

[
  {
    "input_image": "aac49721-1964-481a-ae78-8a4e29b91402",
    "frame": 0
  },
  {
    "input_image": "c00abf5f-6cdb-4642-a01d-1bfff7bc3cf7",
    "frame": 48
  },
  {
    "input_image": "3ad204c3-a9de-4963-8a1a-c3911e3afafe",
    "frame": "last"
  }
]

reference_images
string[]

Unlike frame_images, which constrains specific timeline positions, reference images guide the general appearance that should appear consistently across the video.
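
As referenced above, here is a sketch of a request that combines several of these parameters via the Python SDK. It assumes videos.create passes these body fields through as keyword arguments; the model name is a placeholder and the frame image IDs are illustrative.

from together import Together

client = Together()  # reads TOGETHER_API_KEY from the environment

# Sketch: combines the documented body parameters in one request.
# The model name is a placeholder; frame image IDs are illustrative.
response = client.videos.create(
    model="<video-model-name>",
    prompt="A cartoon of an astronaut riding a horse on the moon",
    negative_prompt="blurry, low quality",
    height=720,
    width=1280,
    seconds="5",
    fps=24,
    steps=30,
    seed=42,
    guidance_scale=8,
    output_format="MP4",
    frame_images=[
        {"input_image": "aac49721-1964-481a-ae78-8a4e29b91402", "frame": 0},
        {"input_image": "3ad204c3-a9de-4963-8a1a-c3911e3afafe", "frame": "last"},
    ],
)
print(response.id)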

Response

200 - application/json

Success

id
string
required

Unique identifier for the video job.
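
The id is typically used to poll for job completion. The retrieval call is not documented on this page, so the sketch below uses a hypothetical client.videos.retrieve(id) method and status field; substitute the actual job-status endpoint from these docs.

import time

# Hypothetical polling loop: the retrieve call and status field are
# assumptions, not documented on this page.
job_id = response.id
while True:
    job = client.videos.retrieve(job_id)  # assumed method name
    if getattr(job, "status", None) in ("completed", "failed"):
        break
    time.sleep(5)
print(job)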
