Reference this guide to start fine-tuning a model using the Together Python library.
First, install the together Python library:
Next, set your TOGETHER_API_KEY environment variable:
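For example, in a POSIX shell (the value shown is a placeholder, not a real key):

```shell
export TOGETHER_API_KEY="xxxxxxxxxxxxxxxx"
```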
Save the id of the file you just uploaded; if you forget it, you can get the ids of all the files you have uploaded using files.list(). You'll need these ids, which look like file-960be810-4d..., in order to start a fine-tuning job.
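As a sketch, uploading a file and listing your uploaded ids might look like the following; the filename my_data.jsonl is a placeholder, and the calls assume the current together Python SDK:

```python
import os

def uploaded_file_ids(client):
    # Return the ids of all uploaded files; each id starts with "file-".
    return [f.id for f in client.files.list().data]

# Only talk to the API when a key is actually configured.
if os.environ.get("TOGETHER_API_KEY"):
    from together import Together  # assumes the SDK is installed

    client = Together()
    uploaded = client.files.upload(file="my_data.jsonl")  # placeholder filename
    print(uploaded.id)  # save this id; you need it to start a fine-tuning job
    print(uploaded_file_ids(client))
```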
Once your file is uploaded, start a fine-tuning job with fine_tuning.create:
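A minimal sketch, assuming the together SDK; the model name, epoch count, and file id below are illustrative placeholders to adapt:

```python
import os

def start_finetune(client, file_id):
    # Launch a fine-tuning job on the uploaded file.
    # The model and hyperparameters here are examples, not prescriptions.
    return client.fine_tuning.create(
        training_file=file_id,
        model="meta-llama/Meta-Llama-3-8B",
        n_epochs=1,
    )

if os.environ.get("TOGETHER_API_KEY"):
    from together import Together  # assumes the SDK is installed

    client = Together()
    resp = start_finetune(client, "file-XXXXXXXX")  # placeholder file id
    print(resp.id)  # the ft-... job id you will use below
```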
Here's the resp response, highlighting some of the useful information about your fine-tuning job.
You can check on your job's status with the fine_tuning.retrieve() method, using the job ID provided above. For example, from the sample output above, ft-3b883474-f39c-40d9-9d5a-7f97ba9eeb9f is your job ID.
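For example, a sketch assuming the together SDK; the job id here is a placeholder for your own ft-... id:

```python
import os

def job_status(client, job_id):
    # Fetch the current state of a fine-tuning job by its ft-... id.
    return client.fine_tuning.retrieve(job_id).status

if os.environ.get("TOGETHER_API_KEY"):
    from together import Together  # assumes the SDK is installed

    client = Together()
    print(job_status(client, "ft-XXXXXXXX"))  # placeholder job id
```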
You can also list all the events for a specific fine-tuning job to check the progress or cancel your job with the commands below.
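Sketches of both operations, again assuming the together SDK and using a placeholder job id:

```python
import os

def print_events(client, job_id):
    # Print each recorded event for the job (queueing, training progress, ...).
    for event in client.fine_tuning.list_events(id=job_id).data:
        print(event.message)

def cancel_job(client, job_id):
    # Stop a running job; you pay only for tokens used before cancellation.
    return client.fine_tuning.cancel(id=job_id)

if os.environ.get("TOGETHER_API_KEY"):
    from together import Together  # assumes the SDK is installed

    client = Together()
    print_events(client, "ft-XXXXXXXX")  # placeholder job id
```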
You can view your training run at https://wandb.ai/<username>/together?workspace=user-<username>, where <username> is your unique Weights & Biases username, like mama-llama-88.
Congratulations! You’ve just fine-tuned a model with the Together API. Now it’s time to deploy your model.
You can check on your job's progress at any time by running together fine-tuning retrieve $JOB_ID in your CLI.
Q: Is there a minimum price? The minimum price for a fine-tuning job is $5. If you fine-tune this model for 1M tokens for 1 epoch, the computed cost of $3.66 is below that minimum, so the price is $5.
Q: What happens if I cancel my job? The final price will be determined based on the number of tokens used to train your model up to the point of cancellation. For example, if your fine-tuning job uses Llama-3-8B with a batch size of 8 and you cancel the job after 1000 training steps, the total number of tokens used for training is 8192 [context length] x 8 [batch size] x 1000 [steps] = 65,536,000. This results in $27.21, as you can verify on the pricing page.
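The token arithmetic in that example can be checked directly (the dollar amount itself comes from the pricing page and is not derived here):

```python
context_length = 8192  # tokens per sequence in the example
batch_size = 8
steps = 1000

total_tokens = context_length * batch_size * steps
print(f"{total_tokens:,}")  # 65,536,000 tokens, matching the example above
```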
Your fine-tuned model is downloaded as a tar.zst file.
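Extraction might look like the following, assuming GNU tar built with zstd support; the archive name my-model.tar.zst is a placeholder, and a throwaway archive is fabricated here only so the example is self-contained:

```shell
# In practice you download my-model.tar.zst; here we fabricate one to demo.
mkdir -p _demo_src my-model
echo '{}' > _demo_src/config.json
tar --zstd -cf my-model.tar.zst -C _demo_src .

# The actual extraction step:
tar --zstd -xf my-model.tar.zst -C my-model
ls my-model
```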
Run ls my-model to see the extracted files; you will use the .bin and .json files to load your model.
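If the extracted directory follows the standard Hugging Face layout (config.json plus .bin weight files), one way to load it is with the transformers library; the directory name and the use of transformers are assumptions here, not prescribed by the guide:

```python
import os

def load_finetuned(model_dir="my-model"):
    # Load the tokenizer and weights from the extracted directory.
    from transformers import AutoModelForCausalLM, AutoTokenizer  # assumed installed

    tokenizer = AutoTokenizer.from_pretrained(model_dir)
    model = AutoModelForCausalLM.from_pretrained(model_dir)
    return tokenizer, model

# LOAD_MODEL is a hypothetical opt-in flag so this sketch doesn't try to
# load weights unless you ask it to.
if os.environ.get("LOAD_MODEL") and os.path.isdir("my-model"):
    tokenizer, model = load_finetuned("my-model")
```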