1. Register for an account
First, register for an account to get an API key. Once you’ve registered, set your account’s API key to an environment variable named `TOGETHER_API_KEY`, for example with `export TOGETHER_API_KEY=<your key>` in your shell.
2. Install your preferred library
Together provides an official library for Python. You can install it from PyPI with `pip install --upgrade together`, then initialize the client as shown below.
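A minimal sketch of initializing the client, assuming the `TOGETHER_API_KEY` environment variable from step 1 is set:

```python
import os

from together import Together

# The client also reads TOGETHER_API_KEY from the environment on its own;
# passing it explicitly here just makes the dependency visible.
client = Together(api_key=os.environ["TOGETHER_API_KEY"])
```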
3. Fine-tuning Dataset

We will use a subset of the CoQA conversational dataset; download the formatted dataset (`small_coqa.jsonl`) here. Each row/sample from the CoQA dataset is stored in conversation format.
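As a rough schematic of that conversation format (the placeholder strings are illustrative only, not an actual row from the file), each line of the JSONL file is one JSON object holding a list of messages:

```python
# Schematic shape of one JSONL row in conversational format.
# The real rows in small_coqa.jsonl contain CoQA passages, questions, and answers.
example_row = {
    "messages": [
        {"role": "system", "content": "<optional system prompt>"},
        {"role": "user", "content": "<user turn, e.g. a question about a passage>"},
        {"role": "assistant", "content": "<assistant turn, e.g. the answer>"},
    ]
}
```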
4. Check and Upload Dataset
To upload your data, use the CLI or our Python library.
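A sketch using the Python client; the `check_file` helper from `together.utils` and the exact shape of its report are assumptions worth verifying against the current SDK:

```python
from together import Together
from together.utils import check_file

client = Together()  # reads TOGETHER_API_KEY from the environment

# Optionally validate the file's format locally before uploading.
report = check_file("small_coqa.jsonl")
print(report)

# Upload the dataset and keep the returned file ID for the fine-tuning job.
uploaded = client.files.upload(file="small_coqa.jsonl")
print(uploaded.id)
```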
5. Starting a fine-tuning job
We support both LoRA and full fine-tuning – see how to start a fine-tuning job with either method below. The call returns a job object whose `status` key starts out as `pending`.
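A sketch of starting a LoRA job with the Python client; the base model name, epoch count, and suffix below are placeholders rather than recommended settings:

```python
from together import Together

client = Together()

# Start a LoRA fine-tuning job on the uploaded file.
# Set lora=False to run a full fine-tune instead.
job = client.fine_tuning.create(
    training_file="file-xxxxxxxx",  # placeholder: the file ID returned by the upload step
    model="meta-llama/Meta-Llama-3.1-8B-Instruct-Reference",  # placeholder base model
    lora=True,
    n_epochs=3,
    suffix="coqa-demo",  # optional tag included in the output model name
)

print(job.id)      # job ID, e.g. "ft-..."
print(job.status)  # starts out as "pending"
```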
6. Monitoring a fine-tuning job’s progress
After you’ve started your job, visit your jobs dashboard. You should see your new job! You can also call `retrieve` to get the latest details about your job directly from your code.
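A sketch of polling the job from code, using a placeholder job ID:

```python
from together import Together

client = Together()

# Look up the job by the ID returned when it was created.
job = client.fine_tuning.retrieve("ft-xxxxxxxx")  # placeholder job ID
print(job.status)
```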
As it runs, the job’s status will move through Pending, Queued, Running, Uploading, and Completed.
7. Using your fine-tuned model
Option 1: LoRA Inference
If you fine-tuned the model using LoRA, as we did above, then the model is instantly available for use, as shown below.
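A sketch of querying the LoRA fine-tune through the chat completions endpoint; the model string is a placeholder for the output name of your fine-tuned model:

```python
from together import Together

client = Together()

# The model name is a placeholder; use the output model name from your
# jobs dashboard or from the completed job object.
response = client.chat.completions.create(
    model="your-account/Meta-Llama-3.1-8B-Instruct-Reference-coqa-demo",  # placeholder
    messages=[{"role": "user", "content": "Ask your fine-tuned model a question here."}],
)
print(response.choices[0].message.content)
```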
Option 2: Dedicated Endpoint Deployment

Once your fine-tuning job completes, you should see your new model in your models dashboard. From there, you can deploy it as a dedicated endpoint for an hourly usage fee, or download your model checkpoint and run it locally.
For a more detailed walkthrough, read How-to: Fine-tuning.