Integrations
LangChain
LangChain is a framework for developing context-aware reasoning applications powered by language models.
To install the LangChain x Together library, run:
pip install --upgrade langchain-together
Here's sample code to get you started with LangChain + Together AI:
from langchain_together import ChatTogether

# ChatTogether reads your key from the TOGETHER_API_KEY environment variable
chat = ChatTogether(model="meta-llama/Llama-3-70b-chat-hf")

# Stream the response token by token
for m in chat.stream("Tell me fun things to do in NYC"):
    print(m.content, end="", flush=True)
Output:
The city that never sleeps! New York City is a hub of entertainment, culture, and adventure, offering countless fun things to do for visitors of all ages and interests. Here are some ideas to get you started:
**Iconic Landmarks and Attractions:**
1. **Statue of Liberty and Ellis Island**: Take a ferry to Liberty Island to see the iconic statue up close and visit the Ellis Island Immigration Museum.
2. **Central Park**: Explore the 843-acre green oasis in the middle of Manhattan, featuring lakes, gardens, and plenty of walking paths.
3. **Empire State Building**: Enjoy panoramic views of the city from the observation deck of this iconic skyscraper.
4. **The Metropolitan Museum of Art**: One of the world's largest and most famous museums, with a collection that spans over 5,000 years of human history.
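Streaming is optional: the same ChatTogether client supports LangChain's standard invoke interface when you want the full response in one call. A minimal sketch (the prompt here is illustrative):
from langchain_together import ChatTogether

chat = ChatTogether(model="meta-llama/Llama-3-70b-chat-hf")

# invoke() blocks until the complete response is available
response = chat.invoke("Give me a two-sentence history of the Brooklyn Bridge")
print(response.content)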
LlamaIndex
LlamaIndex is a simple, flexible data framework for connecting custom data sources to large language models (LLMs).
Here's sample code to get you started with LlamaIndex + Together AI:
!pip install llama-index langchain
import os

from llama_index.llms import OpenAILike

# Point LlamaIndex's OpenAI-compatible client at the Together AI endpoint
llm = OpenAILike(
    model="mistralai/Mixtral-8x7B-Instruct-v0.1",
    api_base="https://api.together.xyz/v1",
    api_key=os.environ["TOGETHER_API_KEY"],
    is_chat_model=True,
    is_function_calling_model=True,
    temperature=0.1,
)

response = llm.complete("Write an essay of up to 500 words explaining Large Language Models")
print(response)
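OpenAILike also exposes a chat interface if you prefer structured messages over a raw prompt. A minimal sketch reusing the llm object above (the message contents are illustrative):
from llama_index.llms import ChatMessage

messages = [
    ChatMessage(role="system", content="You are a concise technical writer."),
    ChatMessage(role="user", content="Explain what an embedding is in one sentence."),
]

# chat() routes the message list through the same Together AI endpoint
print(llm.chat(messages))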
Pinecone
Pinecone is a vector database that helps companies build retrieval-augmented generation (RAG) applications.
Here's some sample code to get you started with Pinecone + Together AI:
import os

from pinecone import Pinecone, ServerlessSpec
from together import Together

pc = Pinecone(
    api_key=os.environ["PINECONE_API_KEY"],
    source_tag="TOGETHER_AI",
)
client = Together()

# Create a serverless index in Pinecone.
# m2-bert-80M-8k-retrieval produces 768-dimensional embeddings,
# so the index dimension must match the embedding model.
pc.create_index(
    name="serverless-index",
    dimension=768,
    metric="cosine",
    spec=ServerlessSpec(cloud="aws", region="us-west-2"),
)
index = pc.Index("serverless-index")

# Create an embedding on Together AI
text_to_embed = "Our solar system orbits the Milky Way galaxy at about 515,000 mph"
embeddings = client.embeddings.create(
    model="togethercomputer/m2-bert-80M-8k-retrieval",
    input=text_to_embed,
)

# Use index.upsert() to insert embeddings and index.query() to query for
# similar vectors, as sketched below
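To close the loop, here's a minimal sketch of upserting that embedding and querying it back; the record id ("fact-1") and metadata fields are illustrative, not part of either API:
# Extract the vector from the Together AI response
vector = embeddings.data[0].embedding

# Insert the vector, keeping the source text as metadata
index.upsert(vectors=[
    {"id": "fact-1", "values": vector, "metadata": {"text": text_to_embed}},
])

# Retrieve the three most similar vectors
results = index.query(vector=vector, top_k=3, include_metadata=True)
print(results)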
Helicone
Helicone is an open-source LLM observability platform.
Here's some sample code to get you started with Helicone + Together AI:
import os

from together import Together

# Route requests through Helicone's gateway, authenticating with both keys
client = Together(
    api_key=os.environ.get("TOGETHER_API_KEY"),
    base_url="https://together.hconeai.com/v1",
    supplied_headers={
        "Helicone-Auth": f"Bearer {os.environ.get('HELICONE_API_KEY')}",
    },
)

stream = client.chat.completions.create(
    model="meta-llama/Llama-3-8b-chat-hf",
    messages=[
        {"role": "user", "content": "What are some fun things to do in New York?"}
    ],
    stream=True,
)

for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="", flush=True)
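Helicone can also tag requests with custom properties, passed as extra Helicone-Property-* headers, so you can filter and segment traffic in its dashboard. A minimal sketch; the property names and values here are illustrative:
client = Together(
    api_key=os.environ.get("TOGETHER_API_KEY"),
    base_url="https://together.hconeai.com/v1",
    supplied_headers={
        "Helicone-Auth": f"Bearer {os.environ.get('HELICONE_API_KEY')}",
        # Custom properties appear as filterable fields in Helicone
        "Helicone-Property-App": "travel-guide",
        "Helicone-Property-Environment": "staging",
    },
)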