Together AI offers day 1 support for the new Llama 4 multilingual vision models, which can analyze multiple images and answer queries about them. Register for a Together AI account to get an API key; new accounts come with free credits to start. Then install the Together AI library for your preferred language.
Documentation Index
Fetch the complete documentation index at: https://docs.together.ai/llms.txt
Use this file to discover all available pages before exploring further.
How to use Llama 4 Models
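A minimal sketch of querying Llama 4 Maverick through the Together Python SDK (assumes `pip install together` and a `TOGETHER_API_KEY` environment variable; the example prompt is illustrative):

```python
import os

MAVERICK = "meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8"

def build_chat_request(prompt: str, model: str = MAVERICK) -> dict:
    """Assemble a chat-completion payload for the Together API."""
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

# The API call only runs when a key is configured.
if os.environ.get("TOGETHER_API_KEY"):
    from together import Together

    client = Together()  # reads TOGETHER_API_KEY from the environment
    response = client.chat.completions.create(
        **build_chat_request("What are some fun things to do in New York?")
    )
    print(response.choices[0].message.content)
```

The same model string works for both text-only and multimodal requests.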
Llama 4 Notebook
If you'd like to see common use-cases in code, see our notebook here.
Llama 4 Model Details
Llama 4 Maverick
- Model String: meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8
- Specs:
- 17B active parameters (400B total)
- 128-expert MoE architecture
- 524,288-token context length (will be increased to 1M)
- Support for 12 languages: Arabic, English, French, German, Hindi, Indonesian, Italian, Portuguese, Spanish, Tagalog, Thai, and Vietnamese
- Multimodal capabilities (text + images)
- Supports function calling
- Best for: Enterprise applications, multilingual support, advanced document intelligence
- Knowledge Cutoff: August 2024
Llama 4 Scout (Deprecated)
Function Calling
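A sketch of function calling with Llama 4 Maverick via the Together SDK; the `get_weather` tool schema is a hypothetical example, not a real API:

```python
import json
import os

MAVERICK = "meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8"

# Hypothetical tool schema for illustration (OpenAI-compatible format).
TOOLS = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

def build_tool_request(prompt: str) -> dict:
    """Assemble a chat-completion payload with tool definitions attached."""
    return {
        "model": MAVERICK,
        "messages": [{"role": "user", "content": prompt}],
        "tools": TOOLS,
        "tool_choice": "auto",  # let the model decide whether to call a tool
    }

if os.environ.get("TOGETHER_API_KEY"):
    from together import Together

    client = Together()
    resp = client.chat.completions.create(**build_tool_request("What's the weather in Paris?"))
    # If the model chose to call a tool, inspect the structured call.
    for call in resp.choices[0].message.tool_calls or []:
        print(call.function.name, json.loads(call.function.arguments))
```

Your application executes the named function itself and sends the result back in a follow-up `tool` message.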
Query models with multiple images
Currently this model supports up to 5 images as input.
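A sketch of a multi-image query, again assuming the Together Python SDK; the image URLs are placeholders, and the 5-image ceiling is enforced in the helper:

```python
import os

MAVERICK = "meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8"

def build_image_message(prompt: str, image_urls: list) -> list:
    """Combine a text prompt with image_url content parts in one user message."""
    if len(image_urls) > 5:
        raise ValueError("Llama 4 currently supports at most 5 images per request.")
    content = [{"type": "text", "text": prompt}]
    content += [{"type": "image_url", "image_url": {"url": u}} for u in image_urls]
    return [{"role": "user", "content": content}]

if os.environ.get("TOGETHER_API_KEY"):
    from together import Together

    client = Together()
    resp = client.chat.completions.create(
        model=MAVERICK,
        messages=build_image_message(
            "What differences do you see between these two charts?",
            # Placeholder URLs for illustration.
            ["https://example.com/chart1.png", "https://example.com/chart2.png"],
        ),
    )
    print(resp.choices[0].message.content)
```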
Llama 4 Use-cases
Llama 4 Maverick:
- Instruction following and long-context ICL: Consistently follows precise instructions and performs in-context learning across very long contexts
- Multilingual customer support: Process support tickets with screenshots in 12 languages to quickly diagnose technical issues
- Multimodal capabilities: Particularly strong at OCR and chart/graph interpretation
- Agent/tool calling work: Designed for agentic workflows with consistent tool calling capabilities