Cancel a currently running fine-tuning job. Returns a FinetuneResponseTruncated object.

Example request (Python SDK):

from together import Together
import os

client = Together(
    api_key=os.environ.get("TOGETHER_API_KEY"),
)

response = client.fine_tuning.cancel(id="ft-id")
print(response)

Example response:

{
"id": "ft-01234567890123456789",
"status": "completed",
"created_at": "2023-05-17T17:35:45.123Z",
"updated_at": "2023-05-17T18:46:23.456Z",
"user_id": "user_01234567890123456789",
"owner_address": "[email protected]",
"total_price": 1500,
"token_count": 850000,
"events": [],
"model": "meta-llama/Llama-2-7b-hf",
"model_output_name": "mynamespace/meta-llama/Llama-2-7b-hf-32162631",
"n_epochs": 3,
"training_file": "file-01234567890123456789",
"wandb_project_name": "my-finetune-project"
}
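Cancellation is not instantaneous: per the status list later in this reference, a job typically passes through cancel_requested before reaching cancelled. The sketch below requests cancellation and then polls the job; it assumes the SDK exposes client.fine_tuning.retrieve(id=...) returning an object whose status attribute compares equal to the status strings listed here, so treat it as a sketch rather than the canonical flow.

import os
import time

from together import Together

client = Together(api_key=os.environ.get("TOGETHER_API_KEY"))

job_id = "ft-id"  # placeholder fine-tune ID
client.fine_tuning.cancel(id=job_id)

# Poll until the job reaches a terminal status; the terminal values are
# taken from the status list later in this reference.
while True:
    job = client.fine_tuning.retrieve(id=job_id)
    print("current status:", job.status)
    if job.status in ("cancelled", "completed", "error"):
        break
    time.sleep(5)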
Authorization: Bearer authentication header of the form Bearer <token>, where <token> is your auth token (see the raw-HTTP sketch below).
id: Fine-tune ID to cancel. A string that starts with ft-.
Success response: Successfully cancelled the fine-tuning job.
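For clients not using the Python SDK, the same cancellation can be issued over raw HTTP with the Authorization header and fine-tune ID described above. The sketch below assumes the public v1 base URL https://api.together.xyz/v1 combined with the POST /fine-tunes/{id}/cancel path named in this reference; verify the base URL for your environment.

import os

import requests

ft_id = "ft-id"  # placeholder fine-tune ID (a string starting with ft-)

resp = requests.post(
    f"https://api.together.xyz/v1/fine-tunes/{ft_id}/cancel",
    headers={"Authorization": f"Bearer {os.environ['TOGETHER_API_KEY']}"},
)
resp.raise_for_status()
print(resp.json())  # the FinetuneResponseTruncated fields documented below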
A truncated version of the fine-tune response, used for the POST /fine-tunes, GET /fine-tunes, and POST /fine-tunes/{id}/cancel endpoints. It contains the following fields:
Unique identifier for the fine-tune job
Status of the fine-tune job: one of pending, queued, running, compressing, uploading, cancel_requested, cancelled, error, or completed
Creation timestamp of the fine-tune job
Last update timestamp of the fine-tune job
Identifier for the user who created the job
Owner address information
Total price for the fine-tuning job
Count of tokens processed
Events related to this fine-tune job
Each event has a type of fine-tune-event; an event name, one of job_pending, job_start, job_stopped, model_downloading, model_download_complete, training_data_downloading, training_data_download_complete, validation_data_downloading, validation_data_download_complete, wandb_init, training_start, checkpoint_save, billing_limit, epoch_complete, training_complete, model_compressing, model_compression_complete, model_uploading, model_upload_complete, job_complete, job_error, cancel_requested, job_restarted, refund, or warning; and a level, one of info, warning, error, legacy_info, legacy_iwarning, or legacy_ierror (see the events sketch after this field list)
File-ID of the training file
File-ID of the validation file
Base model used for fine-tuning
Suffix added to the fine-tuned model name
Number of training epochs
Number of evaluations during training
Number of checkpoints saved during training
Batch size used for training
Learning rate used for training
Learning rate scheduler configuration
Ratio of warmup steps
Maximum gradient norm for clipping
Weight decay value used
Weights & Biases project name
Weights & Biases run name
Checkpoint used to continue training
Hugging Face Hub repo to start training from
The revision of the Hugging Face Hub model to continue training from
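The events field above records the lifecycle of the job, including a cancel_requested entry once cancellation has been acknowledged. A minimal sketch for inspecting those events; the list_events method and its .data attribute are assumptions about the SDK surface, not taken from this page.

import os

from together import Together

client = Together(api_key=os.environ.get("TOGETHER_API_KEY"))

events = client.fine_tuning.list_events(id="ft-id")  # placeholder fine-tune ID
for event in events.data:
    # Each event carries a type (fine-tune-event), an event name such as
    # cancel_requested, and a level such as info or warning.
    print(event)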