POST /videos
from together import Together
import os

client = Together(
    api_key=os.environ.get("TOGETHER_API_KEY"),
)

response = client.videos.create(
    model="together/video-model",
    prompt="A cartoon of an astronaut riding a horse on the moon"
)

print(response.id)
{
  "id": "<string>",
  "model": "<string>",
  "status": "in_progress",
  "created_at": 123,
  "size": "<string>",
  "seconds": "<string>",
  "object": "video",
  "completed_at": 123,
  "error": {
    "message": "<string>",
    "code": "<string>"
  },
  "outputs": {
    "cost": 123,
    "video_url": "<string>"
  }
}

Authorizations

Authorization
string
header
default:default
required

Bearer authentication header of the form Bearer <token>, where <token> is your auth token.

Body

application/json

Parameters for creating a new video generation job.

model
string
required

The model to be used for the video creation request.

prompt
string

Text prompt that describes the video to generate.

Required string length: 1 - 32000

height
integer

width
integer
seconds
string

Clip duration in seconds.

fps
integer

Frames per second. Defaults to 24.

steps
integer

The number of denoising steps the model performs during video generation. More steps typically result in higher quality output but require longer processing time.

Required range: 10 <= x <= 50

seed
integer

Seed used to initialize video generation. Using the same seed allows deterministic video generation. If not provided, a random seed is generated for each request.

guidance_scale
integer

Controls how closely the video generation follows your prompt. Higher values make the model adhere more strictly to your text description, while lower values allow more creative freedom. guidance_scale affects both visual content and temporal consistency. The recommended range is 6.0-10.0 for most video models. Values above 12 may cause over-guidance artifacts or unnatural motion patterns.
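
A small client-side check against the recommended range can catch problematic values before a request is charged; this validator is illustrative only, not part of the API:

```python
def check_guidance_scale(value: float) -> float:
    """Validate guidance_scale against the documented guidance.

    The recommended range for most video models is 6.0-10.0; values
    above 12 may cause over-guidance artifacts, so reject them here.
    """
    if value > 12:
        raise ValueError("guidance_scale above 12 risks over-guidance artifacts")
    if not 6.0 <= value <= 10.0:
        # Outside the recommended band but still accepted by the API.
        print(f"warning: guidance_scale {value} is outside the recommended 6.0-10.0")
    return value
```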

output_format
enum<string>

Specifies the format of the output video. Defaults to MP4.

Available options:
MP4,
WEBM

output_quality
integer

Compression quality. Defaults to 20.

negative_prompt
string

Similar to prompt, but specifies what to avoid instead of what to include.

frame_images
object[]

Array of images to guide video generation, similar to keyframes.

Example:
[
  [
    {
      "input_image": "aac49721-1964-481a-ae78-8a4e29b91402",
      "frame": 0
    },
    {
      "input_image": "c00abf5f-6cdb-4642-a01d-1bfff7bc3cf7",
      "frame": 48
    },
    {
      "input_image": "3ad204c3-a9de-4963-8a1a-c3911e3afafe",
      "frame": "last"
    }
  ]
]

reference_images
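
Validating the structure shown above before sending can catch mistakes early; a sketch (the validator is ours, not part of the SDK):

```python
def validate_frame_images(frames: list) -> None:
    # Each entry pairs an uploaded image ID with a timeline position,
    # which is either a non-negative frame index or the string "last",
    # as in the example above.
    for entry in frames:
        if "input_image" not in entry:
            raise ValueError("frame_images entry missing input_image")
        frame = entry.get("frame")
        if frame != "last" and not (isinstance(frame, int) and frame >= 0):
            raise ValueError(f"invalid frame position: {frame!r}")
```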
string[]

Unlike frame_images, which constrain specific timeline positions, reference images guide the general appearance that should appear consistently across the video.

Response

200 - application/json

Success

Structured information describing a generated video job.

id
string
required

Unique identifier for the video job.

model
string
required

The video generation model that produced the job.

status
enum<string>
required

Current lifecycle status of the video job.

Available options:
in_progress,
completed,
failed

created_at
number
required

Unix timestamp (seconds) for when the job was created.

size
string
required

The resolution of the generated video.

seconds
string
required

Duration of the generated clip in seconds.

object
enum<string>

The object type, which is always video.

Available options:
video

completed_at
number

Unix timestamp (seconds) for when the job completed, if finished.

error
object

Error payload that explains why generation failed, if applicable.

outputs
object

Available upon completion; the outputs object provides the cost charged and the hosted URL for accessing the video.
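
Because a job starts in in_progress, callers typically poll until it reaches a terminal status and then read outputs. A minimal sketch with an injected fetch callable, since the actual retrieval call depends on your client; `wait_for_video` is our helper name:

```python
import time

def wait_for_video(fetch, poll_seconds: float = 2.0, timeout: float = 600.0) -> dict:
    """Poll a video job until it completes or fails.

    `fetch` is any callable returning the job as a dict with the fields
    documented above (status, error, outputs).
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        job = fetch()
        if job["status"] == "completed":
            # outputs carries the cost charged and the hosted video URL.
            return job["outputs"]
        if job["status"] == "failed":
            err = job.get("error") or {}
            raise RuntimeError(f"video job failed: {err.get('message')}")
        time.sleep(poll_seconds)
    raise TimeoutError("video job did not finish in time")
```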