TypeScript library
Learn how to use the Together TypeScript library to access the full Together REST API from server-side TypeScript or JavaScript.
Prerequisites
- Ensure you have Node.js installed on your machine.
- Create a free account to obtain a Together API key, found in your account settings.
- Install the library with `npm install together-ai`.
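Once the library is installed, the client reads your key from the `TOGETHER_API_KEY` environment variable by default. Here's a minimal sketch that fails fast if the key is missing (the error message is our own):

```ts
import Together from "together-ai";

// The Together constructor reads TOGETHER_API_KEY from the environment by default.
if (!process.env["TOGETHER_API_KEY"]) {
  throw new Error("Set the TOGETHER_API_KEY environment variable before running.");
}

const together = new Together();
```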
Querying a chat model
To query a chat model, first choose one of Together's open-source chat models. We'll use `mistralai/Mixtral-8x7B-Instruct-v0.1` for this example.

Next, pass your query to the `messages` key of `together.chat.completions.create`:
```ts
import Together from "together-ai";

const together = new Together({
  apiKey: process.env["TOGETHER_API_KEY"], // This is the default and can be omitted
});

const response = await together.chat.completions.create({
  messages: [{ role: "user", content: "Tell me fun things to do in New York" }],
  model: "mistralai/Mixtral-8x7B-Instruct-v0.1",
});

console.log(response.choices[0].message.content);
```
The `create()` function returns a promise, which you can await to get the result.
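If you'd rather not rely on top-level await, you can wrap the call in an async function and add error handling. A minimal sketch (the `main` wrapper and error message are our own, not part of the SDK):

```ts
import Together from "together-ai";

const together = new Together();

async function main() {
  try {
    const response = await together.chat.completions.create({
      messages: [{ role: "user", content: "Tell me fun things to do in New York" }],
      model: "mistralai/Mixtral-8x7B-Instruct-v0.1",
    });
    console.log(response.choices[0].message.content);
  } catch (error) {
    // Network issues, invalid keys, and rate limits all surface here.
    console.error("Chat completion failed:", error);
  }
}

main();
```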
Streaming tokens from a chat model
To stream the response back, pass `stream: true` into `together.chat.completions.create`:
```ts
import Together from "together-ai";

const together = new Together({
  apiKey: process.env["TOGETHER_API_KEY"], // This is the default and can be omitted
});

const stream = await together.chat.completions.create({
  messages: [{ role: "user", content: "Tell me fun things to do in New York" }],
  model: "mistralai/Mixtral-8x7B-Instruct-v0.1",
  stream: true,
});

for await (const chunk of stream) {
  console.log(chunk.choices[0].delta.content);
}
```
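Each chunk carries a small delta of the response. To print tokens inline and keep the full text at the end, you can write to stdout and accumulate as you go; a minimal sketch (the `fullText` accumulator is our own, not an SDK feature):

```ts
import Together from "together-ai";

const together = new Together();

const stream = await together.chat.completions.create({
  messages: [{ role: "user", content: "Tell me fun things to do in New York" }],
  model: "mistralai/Mixtral-8x7B-Instruct-v0.1",
  stream: true,
});

let fullText = "";
for await (const chunk of stream) {
  const token = chunk.choices[0]?.delta?.content ?? "";
  fullText += token;           // accumulate the complete response
  process.stdout.write(token); // print tokens inline, without extra newlines
}
console.log(`\n\nReceived ${fullText.length} characters.`);
```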
Querying a completion model
To query a completion model, first choose one of Together's open-source code or language models. We'll use `codellama/CodeLlama-34b-Python-hf` for this example.

Next, pass your query to the `prompt` key of `together.completions.create`:
```ts
import Together from "together-ai";

const together = new Together({
  apiKey: process.env["TOGETHER_API_KEY"], // This is the default and can be omitted
});

const response = await together.completions.create({
  model: "codellama/CodeLlama-34b-Python-hf",
  prompt: "def fibonacci(n):", // a sample prompt; replace with your own
  max_tokens: 500,
});

console.log(response.choices[0].text);
```
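Completion models generate until they produce `max_tokens` tokens or emit a stop sequence. Here's a minimal sketch that passes a `stop` array so generation ends before the model starts a new top-level definition; the specific stop strings are our own illustrative choices:

```ts
import Together from "together-ai";

const together = new Together();

const response = await together.completions.create({
  model: "codellama/CodeLlama-34b-Python-hf",
  prompt: "def fibonacci(n):",
  max_tokens: 500,
  stop: ["\ndef ", "\nclass "], // end generation before another definition begins
});

console.log(response.choices[0].text);
```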
Querying an image model
To query an image model, first choose one of Together's open-source image models. We'll use `runwayml/stable-diffusion-v1-5` for this example.

Next, pass your query to the `prompt` key of `together.images.create`:
```ts
import Together from "together-ai";

const together = new Together({
  apiKey: process.env["TOGETHER_API_KEY"], // This is the default and can be omitted
});

const response = await together.images.create({
  model: "runwayml/stable-diffusion-v1-5",
  prompt: "A watercolor painting of the New York skyline at dusk", // a sample prompt; replace with your own
  steps: 10,
  n: 4,
});

console.log(response.data[0].b64_json);
```
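The response contains base64-encoded image data. To save an image to disk, decode it with Node's `Buffer` and write it out; a minimal sketch (the `image.png` filename is our own choice):

```ts
import fs from "node:fs";
import Together from "together-ai";

const together = new Together();

const response = await together.images.create({
  model: "runwayml/stable-diffusion-v1-5",
  prompt: "A watercolor painting of the New York skyline at dusk",
  steps: 10,
  n: 1,
});

const b64 = response.data[0].b64_json;
if (!b64) throw new Error("No image data returned.");

// Decode the base64 payload and write it to a PNG file.
fs.writeFileSync("image.png", Buffer.from(b64, "base64"));
```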
Get an embedding
To get an embedding, first choose one of Together's open-source embedding models. We'll use `togethercomputer/m2-bert-80M-2k-retrieval` for this example.

Next, pass your content to the `input` key of `together.embeddings.create`:
```ts
import Together from "together-ai";

const together = new Together({
  apiKey: process.env["TOGETHER_API_KEY"], // This is the default and can be omitted
});

const response = await together.embeddings.create({
  model: "togethercomputer/m2-bert-80M-2k-retrieval",
  input: "The quick brown fox jumps over the lazy dog.", // sample text; replace with your own
});

console.log(response.data[0].embedding);
```
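Embeddings are most useful when you compare them, for example to rank documents by similarity to a query. A minimal sketch of cosine similarity between two embeddings (the `cosineSimilarity` helper is our own, not part of the SDK):

```ts
import Together from "together-ai";

const together = new Together();

// Our own helper: cosine similarity between two equal-length vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

const [first, second] = await Promise.all([
  together.embeddings.create({
    model: "togethercomputer/m2-bert-80M-2k-retrieval",
    input: "The cat sat on the mat.",
  }),
  together.embeddings.create({
    model: "togethercomputer/m2-bert-80M-2k-retrieval",
    input: "A kitten is resting on a rug.",
  }),
]);

console.log("Similarity:", cosineSimilarity(first.data[0].embedding, second.data[0].embedding));
```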