How Together AI works
Together AI makes it easy to run leading open-source models using only a few lines of code. The platform provides fast inference, OpenAI-compatible APIs, and access to cutting-edge models like Llama 4, DeepSeek, and more. It is built for developers who need reliable, scalable AI infrastructure without the complexity.
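Because the API is OpenAI-compatible, you can exercise it with a plain HTTP request before wiring anything into Kodus. A minimal sketch with curl (the model ID is only an example; check Together AI's model library for current identifiers):

```bash
# Minimal chat-completion request against Together AI's OpenAI-compatible endpoint.
# TOGETHER_API_KEY must hold the key created in the Together AI Console.
curl https://api.together.xyz/v1/chat/completions \
  -H "Authorization: Bearer $TOGETHER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8",
    "messages": [{"role": "user", "content": "Say hello in one sentence."}]
  }'
```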
Recommended Models

We recommend coding-focused models with large context windows and competitive pricing. For the most up-to-date information, please visit Together AI's pricing page.
| Model | Pricing (per 1M tokens) | Context Window |
|---|---|---|
| Llama 4 Maverick (recommended) | $0.85 | ~128k tokens |
| DeepSeek-V3 | $1.25 | ~128k tokens |
| Llama 3.1 70B Turbo | $0.88 | ~128k tokens |
| Qwen 2.5 72B | $1.20 | ~128k tokens |
Creating an API Key
Go directly to the Together AI Console to create a new API key, or follow these steps:

- Create an account at api.together.ai or log in if you have one already
- In the main dashboard, scroll down to the “Manage Account” section
- In the “API Keys” card, click on “Manage Keys” button
- Click on “Add Key” button
- Give it a name like ‘Kodus’ or any descriptive name
- Copy your API key, and you’re ready to go!
New accounts come with $1 credit to get started for free.
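You can confirm the key works before configuring Kodus. One quick check (assuming the key is exported as TOGETHER_API_KEY) is to list the models your account can access through the OpenAI-compatible endpoint:

```bash
# A 200 response with a JSON body means the key is valid and active.
curl -s https://api.together.xyz/v1/models \
  -H "Authorization: Bearer $TOGETHER_API_KEY"
```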
How to use
System Requirements
Recommended Hardware
- CPU: 2+ cores
- RAM: 8GB+
- Storage: 60GB+ free space
Required Software
- Docker (latest stable version)
- Node.js (latest LTS version)
- Yarn or NPM (latest stable version)
- Domain name or fixed IP (for external deployments)
Required Ports
- 3000: Kodus Web App
- 3001: API
- 3332: Webhooks
- 5672, 15672, 15692: RabbitMQ (AMQP, management, metrics)
- 3101: MCP Manager (API, metrics)
- 5432: PostgreSQL
- 27017: MongoDB
Internet access is only required if you plan to connect with cloud-based Git
services like GitHub, GitLab, or Bitbucket. For self-hosted Git tools within
your network, external internet access is optional.
Domain Name Setup (Optional)
If you're planning to integrate Kodus with cloud-based Git providers (GitHub, GitLab, or Bitbucket), you'll need public-facing URLs for both the Kodus Web App and its API. This allows your server to receive webhooks for proper Code Review functionality and ensures correct application behavior. We recommend setting up two subdomains:

- One for the Web Application, e.g., kodus-web.yourdomain.com
- One for the API, e.g., kodus-api.yourdomain.com
Note: If you're only connecting to self-hosted Git tools on your network and do not require public access or webhooks, you might be able to use a simpler setup, but this guide focuses on public-facing deployments.
Setup
1. Clone the installer repository
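For example (the repository path below is an assumption; use the installer repository URL from the Kodus documentation if it differs):

```bash
# Assumed repository location; substitute the official installer repo if yours differs.
git clone https://github.com/kodustech/kodus-installer.git
cd kodus-installer
```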
2. Copy the example environment file
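Assuming the template is shipped as .env.example:

```bash
# Create your local environment file from the shipped template.
cp .env.example .env
```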
3. Generate secure keys for the required environment variables
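A common approach is to generate one random value per secret with openssl; the variables that need secrets are listed in the Environment Variables Configuration guide:

```bash
# Prints a random 32-byte hex string; run once for each secret-type variable in .env.
openssl rand -hex 32
```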
4. Edit the environment file

Edit .env with your values using your preferred text editor. See Environment Variables Configuration for detailed instructions.

5. Run the installer
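The entry-point name below is an assumption; use the script shipped in the repository you cloned:

```bash
# Runs the installer, which brings up the Docker services described below.
./install.sh
```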
6. Success 🎉
When complete, Kodus Services should be running on your machine.
You can verify your installation with a quick check that the core services are running.
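A minimal sketch of such a check, assuming the default ports listed above:

```bash
# Manual spot check: list running containers, then confirm the web app and API ports answer.
docker ps --format 'table {{.Names}}\t{{.Status}}'
curl -s -o /dev/null -w 'Web app -> HTTP %{http_code}\n' http://localhost:3000
curl -s -o /dev/null -w 'API     -> HTTP %{http_code}\n' http://localhost:3001
```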
7. Access the web interface
Once you access the web interface for the first time, you'll need to:
- Create your admin account - This will be the first user with full system access
- Configure your Git provider - Connect GitHub, GitLab, or Bitbucket following the on-screen instructions
- Select repositories for analysis - Choose which code repositories Kody will review
For detailed steps on the initial configuration process, refer to our Getting
Started Guide.
Configure Together AI in the Environment File
Edit your .env file and configure the core settings. For LLM Integration, use Together AI in Fixed Mode:
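A sketch of what the relevant block can look like. The variable names here are placeholders, not Kodus' actual keys, so map them to the names given in the Environment Variables Configuration guide; the base URL and model ID are the Together AI values you plug in:

```bash
# Placeholder variable names; use the ones from the Environment Variables Configuration guide.
LLM_PROVIDER=openai-compatible                                  # Fixed Mode: one provider for all calls
LLM_API_BASE_URL=https://api.together.xyz/v1                    # Together AI's OpenAI-compatible endpoint
LLM_API_KEY=your-together-api-key                               # key created in the Together AI Console
LLM_MODEL=meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8     # example model ID from the table above
```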
Fixed Mode is ideal for Together AI because it provides OpenAI-compatible APIs with competitive pricing and access to cutting-edge open-source models.
Run the Installation Script
Set the proper permissions for the installation script:
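For example, assuming the script is named install.sh (adjust to match the repository):

```bash
# Make the installer executable, then run it.
chmod +x install.sh
./install.sh
```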
What the Installer Does

Our installer automates several important steps:

- Verifies Docker installation
- Creates networks for Kodus services
- Clones repositories and configures environment files
- Runs docker-compose to start all services
- Executes database migrations
- Seeds initial data
Open http://localhost:3000 in your browser - you should see the Kodus Web Application interface.
Set Up Reverse Proxy (For Production)
For webhooks and external access, configure Nginx:
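A minimal sketch, assuming the two subdomains from the Domain Name Setup section, the default ports 3000/3001, and Debian/Ubuntu-style Nginx paths; adapt server names, TLS, and paths to your environment:

```bash
# Write a minimal Nginx site that proxies the two subdomains to the local services, then enable it.
# Add TLS (e.g. via certbot) before exposing this to the internet.
sudo tee /etc/nginx/sites-available/kodus >/dev/null <<'EOF'
server {
    listen 80;
    server_name kodus-web.yourdomain.com;
    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
server {
    listen 80;
    server_name kodus-api.yourdomain.com;
    location / {
        proxy_pass http://127.0.0.1:3001;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
EOF
sudo ln -sf /etc/nginx/sites-available/kodus /etc/nginx/sites-enabled/kodus
sudo nginx -t && sudo systemctl reload nginx
```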
Verify Together AI Integration

In addition to the basic installation verification, confirm that Together AI is working:
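Two quick checks: one that the server can reach Together AI with your key, and one that Kodus is actually calling it. The container name in the log command is an assumption; run docker ps to find the real name.

```bash
# 1) The key is accepted and the endpoint is reachable from this server.
curl -s -o /dev/null -w 'Together AI -> HTTP %{http_code}\n' \
  -H "Authorization: Bearer $TOGETHER_API_KEY" \
  https://api.together.xyz/v1/models

# 2) Trigger a code review (e.g. open a test pull request) and watch the API logs
#    for outbound calls or errors mentioning the provider. Container name assumed.
docker logs -f kodus-api 2>&1 | grep -i -E 'together|llm'
```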
Troubleshooting

API Key Issues
- Verify your API key is correct and active in Together AI Console
- Check if you have sufficient credits in your Together AI account
- Ensure there are no extra spaces in your .env file
- New accounts receive $1 in free credits
Model Not Found
- Check if the model name is correctly spelled in your configuration
- Verify the model is available in Together AI’s current model library
- Try with a different model from our recommended list
- Check the Together AI models documentation
Connection Errors
- Verify your server has internet access to reach api.together.xyz
- Check if there are any firewall restrictions
- Review the API/worker logs for detailed error messages
- Ensure you’re using the correct API endpoint
Rate Limiting
- Together AI provides generous rate limits (up to 6000 requests/min for LLMs)
- Check your current usage in the Together AI dashboard
- Consider upgrading to a higher tier for increased limits
- Monitor your usage patterns to optimize API calls