POST /fine-tunes/{id}/cancel
from together import Together
import os

# Read the API key from the environment rather than hard-coding it.
client = Together(
    api_key=os.environ.get("TOGETHER_API_KEY"),
)

# Request cancellation of a fine-tune job by its ID (a string starting with "ft-").
response = client.fine_tuning.cancel(id="ft-id")

print(response)

Example response:
{
  "id": "ft-01234567890123456789",
  "status": "completed",
  "created_at": "2023-05-17T17:35:45.123Z",
  "updated_at": "2023-05-17T18:46:23.456Z",
  "user_id": "user_01234567890123456789",
  "owner_address": "[email protected]",
  "total_price": 1500,
  "token_count": 850000,
  "events": [],
  "model": "meta-llama/Llama-2-7b-hf",
  "model_output_name": "mynamespace/meta-llama/Llama-2-7b-hf-32162631",
  "n_epochs": 3,
  "training_file": "file-01234567890123456789",
  "wandb_project_name": "my-finetune-project"
}


Authorizations

Authorization
string
header
default: default
required

Bearer authentication header of the form Bearer <token>, where <token> is your auth token.
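For reference, here is a minimal sketch of the raw HTTP call behind the SDK, showing the Bearer header format. The base URL https://api.together.xyz/v1 is an assumption (verify it for your environment); the request is built but not sent.

```python
import os
import urllib.request

# Assumed REST base URL; confirm against your account's documentation.
BASE_URL = "https://api.together.xyz/v1"

def build_cancel_request(job_id: str, token: str) -> urllib.request.Request:
    """Build (but do not send) POST /fine-tunes/{id}/cancel with a Bearer header."""
    return urllib.request.Request(
        f"{BASE_URL}/fine-tunes/{job_id}/cancel",
        method="POST",
        headers={"Authorization": f"Bearer {token}"},
    )

req = build_cancel_request("ft-id", os.environ.get("TOGETHER_API_KEY", "<token>"))
print(req.full_url)
# urllib.request.urlopen(req)  # uncomment to actually send, with a real key
```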

Path Parameters

id
string
required

Fine-tune ID to cancel. A string that starts with ft-.
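The ft- prefix allows a cheap client-side sanity check before calling the endpoint. A sketch (not an SDK feature):

```python
def looks_like_fine_tune_id(job_id: str) -> bool:
    # Per the docs, fine-tune IDs are strings starting with "ft-".
    return isinstance(job_id, str) and job_id.startswith("ft-")

print(looks_like_fine_tune_id("ft-01234567890123456789"))    # True
print(looks_like_fine_tune_id("file-01234567890123456789"))  # False: a file ID, not a job ID
```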

Response

Successfully cancelled the fine-tuning job.

A truncated version of the fine-tune response, used by the POST /fine-tunes, GET /fine-tunes, and POST /fine-tunes/{id}/cancel endpoints

id
string
required

Unique identifier for the fine-tune job

status
enum<string>
required

Current status of the fine-tune job

Available options:
pending,
queued,
running,
compressing,
uploading,
cancel_requested,
cancelled,
error,
completed

created_at
string<date-time>
required

Creation timestamp of the fine-tune job

updated_at
string<date-time>
required

Last update timestamp of the fine-tune job

started_at
string<date-time>

Start timestamp of the current stage of the fine-tune job

user_id
string

Identifier for the user who created the job

owner_address
string

Owner address information

total_price
integer

Total price for the fine-tuning job

token_count
integer

Count of tokens processed

events
object[]

Events related to this fine-tune job

training_file
string

File-ID of the training file

validation_file
string

File-ID of the validation file

packing
boolean

Whether sequence packing is being used for training.

max_seq_length
integer

Maximum sequence length to use for training. If not specified, the maximum allowed for the model and training method will be used.

model
string

Base model used for fine-tuning

model_output_name
string

Name of the resulting fine-tuned model

suffix
string

Suffix added to the fine-tuned model name

n_epochs
integer

Number of training epochs

n_evals
integer

Number of evaluations during training

n_checkpoints
integer

Number of checkpoints saved during training

batch_size
integer

Batch size used for training

training_type
object

Type of training used (full or LoRA)

training_method
object

Method of training used

learning_rate
number<float>

Learning rate used for training

lr_scheduler
object

Learning rate scheduler configuration

warmup_ratio
number<float>

Ratio of warmup steps

max_grad_norm
number<float>

Maximum gradient norm for clipping

weight_decay
number<float>

Weight decay value used

random_seed
integer | null

Random seed used for training. An integer when set; null if no explicit seed was recorded or the value was not stored (e.g. for legacy jobs).

wandb_project_name
string

Weights & Biases project name

wandb_name
string

Weights & Biases run name

from_checkpoint
string

Checkpoint used to continue training

from_hf_model
string

Hugging Face Hub repo to start training from

hf_model_revision
string

The revision of the Hugging Face Hub model to continue training from

progress
object

Progress information for the fine-tuning job
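Cancellation is asynchronous: after a successful request, a job typically reports cancel_requested before settling into a terminal status. The sketch below classifies the documented status values; the polling loop in the trailing comment assumes the SDK exposes a retrieve call named client.fine_tuning.retrieve, which you should verify against the SDK you have installed.

```python
# Statuses from which a fine-tune job can no longer change, per the enum above.
TERMINAL_STATUSES = {"cancelled", "error", "completed"}

def is_terminal(status: str) -> bool:
    """True once a fine-tune job has reached a final status."""
    return status in TERMINAL_STATUSES

# Polling loop sketch (requires a real client and job ID; SDK method name assumed):
# import time
# while not is_terminal(client.fine_tuning.retrieve(id="ft-id").status):
#     time.sleep(10)

print(is_terminal("cancel_requested"))  # False: cancel acknowledged, not yet final
print(is_terminal("cancelled"))         # True
```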