POST /fine-tunes/{id}/cancel
from together import Together
import os

# The client reads the API key from the TOGETHER_API_KEY environment variable.
client = Together(
    api_key=os.environ.get("TOGETHER_API_KEY"),
)

# Request cancellation of a fine-tuning job; replace "ft-id" with your job's ID.
response = client.fine_tuning.cancel(id="ft-id")

print(response)
{
  "id": "ft-01234567890123456789",
  "status": "cancelled",
  "created_at": "2023-05-17T17:35:45.123Z",
  "updated_at": "2023-05-17T18:46:23.456Z",
  "user_id": "user_01234567890123456789",
  "owner_address": "[email protected]",
  "total_price": 1500,
  "token_count": 850000,
  "events": [],
  "model": "meta-llama/Llama-2-7b-hf",
  "model_output_name": "mynamespace/meta-llama/Llama-2-7b-hf-32162631",
  "n_epochs": 3,
  "training_file": "file-01234567890123456789",
  "wandb_project_name": "my-finetune-project"
}
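The response decodes to a plain JSON object, so its fields can be read as a dict. A minimal sketch, using values adapted from the illustrative sample above:

```python
import json

# Trimmed response body with values adapted from the sample above (illustrative only).
raw = '''{
  "id": "ft-01234567890123456789",
  "status": "cancelled",
  "model": "meta-llama/Llama-2-7b-hf",
  "token_count": 850000,
  "total_price": 1500,
  "n_epochs": 3
}'''

job = json.loads(raw)

# Pull out a few fields; token_count and total_price are integers per the schema.
print(f"{job['id']} ({job['model']}): {job['status']}")
print(f"trained on {job['token_count']:,} tokens over {job['n_epochs']} epochs")
```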

Authorizations

Authorization
string
header
default: default
required

Bearer authentication header of the form Bearer <token>, where <token> is your auth token.
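If you call the endpoint without the SDK, you build this header yourself. A sketch that constructs (but does not send) the request; the base URL `https://api.together.xyz/v1` is an assumption here — confirm it against the official docs:

```python
import urllib.request

# Assumed base URL for the API; verify against the provider's documentation.
BASE_URL = "https://api.together.xyz/v1"

def build_cancel_request(job_id: str, token: str) -> urllib.request.Request:
    """Build (but do not send) the POST request with a Bearer auth header."""
    return urllib.request.Request(
        url=f"{BASE_URL}/fine-tunes/{job_id}/cancel",
        method="POST",
        headers={"Authorization": f"Bearer {token}"},
    )

req = build_cancel_request("ft-01234567890123456789", "my-secret-token")
print(req.full_url)
print(req.get_header("Authorization"))
```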

Path Parameters

id
string
required

Fine-tune ID to cancel. A string that starts with ft-.
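A hypothetical client-side check mirroring the documented constraint (the ID must start with `ft-`) can catch obviously malformed IDs before making a request:

```python
def is_valid_finetune_id(job_id: str) -> bool:
    """Hypothetical helper: true if job_id looks like a fine-tune ID ("ft-...")."""
    return isinstance(job_id, str) and job_id.startswith("ft-") and len(job_id) > len("ft-")

print(is_valid_finetune_id("ft-01234567890123456789"))  # True
print(is_valid_finetune_id("job-123"))                  # False
```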

Response

Successfully cancelled the fine-tuning job.

A truncated version of the fine-tune response, used for the POST /fine-tunes, GET /fine-tunes, and POST /fine-tunes/{id}/cancel endpoints.

id
string
required

Unique identifier for the fine-tune job

status
enum<string>
required

Current status of the fine-tune job

Available options: pending, queued, running, compressing, uploading, cancel_requested, cancelled, error, completed

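The status values above form a simple lifecycle: after a cancel request the job moves through cancel_requested before reaching a terminal state. A hedged polling sketch, using a stubbed status fetcher in place of the real API call:

```python
import time

# Terminal statuses per the enum above; all others mean the job is still moving.
TERMINAL_STATUSES = {"cancelled", "error", "completed"}

def wait_until_done(fetch_status, job_id, poll_seconds=0.0, max_polls=100):
    """Poll fetch_status(job_id) until a terminal status is seen, or give up."""
    for _ in range(max_polls):
        status = fetch_status(job_id)
        if status in TERMINAL_STATUSES:
            return status
        time.sleep(poll_seconds)
    raise TimeoutError(f"job {job_id} not terminal after {max_polls} polls")

# Stubbed status sequence standing in for successive real API responses.
_sequence = iter(["pending", "running", "cancel_requested", "cancelled"])
final = wait_until_done(lambda job_id: next(_sequence), "ft-example")
print(final)  # cancelled
```

In real use, `fetch_status` would wrap an API retrieve call and `poll_seconds` would be a few seconds rather than zero.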
created_at
string<date-time>
required

Creation timestamp of the fine-tune job

updated_at
string<date-time>
required

Last update timestamp of the fine-tune job

user_id
string

Identifier for the user who created the job

owner_address
string

Owner address information

total_price
integer

Total price for the fine-tuning job

token_count
integer

Count of tokens processed

events
object[]

Events related to this fine-tune job

training_file
string

File-ID of the training file

validation_file
string

File-ID of the validation file

model
string

Base model used for fine-tuning

model_output_name
string

Name of the output fine-tuned model

suffix
string

Suffix added to the fine-tuned model name

n_epochs
integer

Number of training epochs

n_evals
integer

Number of evaluations during training

n_checkpoints
integer

Number of checkpoints saved during training

batch_size
integer

Batch size used for training

training_type
object

Type of training used (full or LoRA)

training_method
object

Method of training used

learning_rate
number

Learning rate used for training

lr_scheduler
object

Learning rate scheduler configuration

warmup_ratio
number

Ratio of warmup steps

max_grad_norm
number

Maximum gradient norm for clipping

weight_decay
number

Weight decay value used

wandb_project_name
string

Weights & Biases project name

wandb_name
string

Weights & Biases run name

from_checkpoint
string

Checkpoint used to continue training

from_hf_model
string

Hugging Face Hub repo to start training from

hf_model_revision
string

The revision of the Hugging Face Hub model to continue training from
