How Together AI works
Together AI makes it easy to run leading open-source models with only a few lines of code. The platform provides fast inference, OpenAI-compatible APIs, and access to cutting-edge models like Llama 4, DeepSeek, and more. It is built for developers who need reliable, scalable AI infrastructure without the operational complexity.
Recommended Models
We recommend coding-capable models with large context windows and competitive pricing. For the most up-to-date information, please visit Together AI's pricing page.
Model | Pricing (per 1M tokens) | Context Window |
---|---|---|
Llama 4 Maverick (recommended) | $0.85 | ~128k tokens |
DeepSeek-V3 | $1.25 | ~128k tokens |
Llama 3.1 70B Turbo | $0.88 | ~128k tokens |
Qwen 2.5 72B | $1.20 | ~128k tokens |
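Because the API is OpenAI-compatible, a plain HTTP request is enough to try a model before wiring it into Kodus. A minimal sketch; the model identifier is an example, so confirm the exact name in Together AI's model library:

```bash
# Requires an API key exported as TOGETHER_API_KEY (created in the steps below)
curl https://api.together.xyz/v1/chat/completions \
  -H "Authorization: Bearer $TOGETHER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8",
        "messages": [{"role": "user", "content": "Say hello"}]
      }'
```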
Creating an API Key
A Together AI account is required to create an API key.
- Create an account at api.together.ai or log in if you have one already
- In the main dashboard, scroll down to the “Manage Account” section
- In the “API Keys” card, click on “Manage Keys” button
- Click on “Add Key” button
- Give it a name like ‘Kodus’ or any descriptive name
- Copy your API key, and you’re ready to go!
New accounts come with $1 credit to get started for free.
How to use
System Requirements
Recommended Hardware
- CPU: 2+ cores
- RAM: 8GB+
- Storage: 60GB+ free space
Required Software
- Docker (latest stable version)
- Node.js (latest LTS version)
- Yarn or NPM (latest stable version)
- Domain name or fixed IP (for external deployments)
Required Ports
- 3000: Kodus Web App
- 3001: Orchestrator API
- 5672, 15672: RabbitMQ
- 5432: PostgreSQL
- 27017: MongoDB
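A quick way to check these prerequisites on a Linux host (assumes the ss utility is available):

```bash
# Confirm the required software is installed
docker --version
node --version
yarn --version || npm --version

# The required ports should not already be in use - any output here means a conflict
ss -ltn | grep -E ':(3000|3001|5672|15672|5432|27017) '
```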
Internet access is only required if you plan to connect with cloud-based Git
services like GitHub, GitLab, or Bitbucket. For self-hosted Git tools within
your network, external internet access is optional.
Domain Name Setup (Optional)
If you're planning to integrate Kodus with cloud-based Git providers (GitHub, GitLab, or Bitbucket), you'll need public-facing URLs for both the Kodus Web App and its API. This allows your server to receive webhooks for proper Code Review functionality and ensures correct application behavior. We recommend setting up two subdomains:
- One for the Web Application, e.g., kodus-web.yourdomain.com
- One for the API, e.g., kodus-api.yourdomain.com
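Both subdomains can point at the same server, for example as plain DNS A records (the IP below is a placeholder):

```text
kodus-web.yourdomain.com.   A   203.0.113.10
kodus-api.yourdomain.com.   A   203.0.113.10
```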
Note: If you're only connecting to self-hosted Git tools on your network and do not require public access or webhooks, you might be able to use a simpler setup, but this guide focuses on public-facing deployments.
Get the Kodus Installer
Clone our installer repository to the machine where Kodus will run.
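A minimal sketch of this step; the repository URL below is an assumption (the installer is expected under the kodustech GitHub organization), so confirm it against the official Kodus docs:

```bash
# Clone the Kodus installer (URL assumed; verify in the official docs)
git clone https://github.com/kodustech/kodus-installer.git
cd kodus-installer
```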
Configure Environment Variables
First, copy the example environment file. Then edit your .env file and set the following required variables (a sketch for generating the secrets follows this list):
- WEB_NEXTAUTH_SECRET (use openssl rand -base64 32)
- WEB_JWT_SECRET_KEY (use openssl rand -base64 32)
- API_CRYPTO_KEY (use openssl rand -hex 32)
- API_JWT_SECRET (use openssl rand -base64 32)
- API_JWT_REFRESHSECRET (use openssl rand -base64 32)
- CODE_MANAGEMENT_SECRET (use openssl rand -hex 32)
- CODE_MANAGEMENT_WEBHOOK_TOKEN (use openssl rand -base64 32 | tr -d '=' | tr '/+' '_-')
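A sketch of this step; the example file name .env.example is an assumption, so check the installer repository for the actual name:

```bash
# Copy the example environment file (name assumed; adjust if the repo differs)
cp .env.example .env

# Generate the required secrets (commands taken from the list above),
# then paste each value into the matching variable in .env
echo "WEB_NEXTAUTH_SECRET=$(openssl rand -base64 32)"
echo "WEB_JWT_SECRET_KEY=$(openssl rand -base64 32)"
echo "API_CRYPTO_KEY=$(openssl rand -hex 32)"
echo "API_JWT_SECRET=$(openssl rand -base64 32)"
echo "API_JWT_REFRESHSECRET=$(openssl rand -base64 32)"
echo "CODE_MANAGEMENT_SECRET=$(openssl rand -hex 32)"
echo "CODE_MANAGEMENT_WEBHOOK_TOKEN=$(openssl rand -base64 32 | tr -d '=' | tr '/+' '_-')"
```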
Never commit your .env file to version control. Keep your API keys and database credentials secure.
Configure Together AI in Environment File
Edit your .env file and configure the core settings. For LLM Integration, use Together AI in Fixed Mode:
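A hedged sketch of what the Together AI block could look like. The variable names below are illustrative placeholders rather than Kodus's actual keys; match them to the LLM settings that ship in your example environment file:

```bash
# Hypothetical variable names for illustration only - use the keys from your example env file
LLM_MODE=fixed                                                # Fixed Mode: one model for all requests
LLM_API_BASE_URL=https://api.together.xyz/v1                  # Together AI's OpenAI-compatible endpoint
LLM_API_KEY=your_together_api_key_here
LLM_MODEL=meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8   # example model id; confirm in the model library
```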
Fixed Mode is ideal for Together AI because it provides OpenAI-compatible APIs with competitive pricing and access to cutting-edge open-source models.
Run the Installation Script
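A sketch of this step, assuming the installer ships a shell script at the repository root; check the repository README for the actual entry point and any required flags:

```bash
# From the installer directory (script name assumed; see the repo README)
chmod +x install.sh
./install.sh
```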
Looking for more control? Check out our docker-compose file for manual deployment options.
What the Installer Does
Our installer automates several important steps:- Verifies Docker installation
- Creates networks for Kodus services
- Clones repositories and configures environment files
- Runs docker-compose to start all services
- Executes database migrations
- Seeds initial data
Once the installer finishes, open http://localhost:3000 - you should see the Kodus Web Application interface.
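A quick way to confirm the stack came up; run the compose command from the installer directory:

```bash
# Web app should answer on port 3000
curl -I http://localhost:3000

# All Kodus containers should be up
docker compose ps   # or docker-compose ps on older Docker installs
```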
Code Review features will not work yet unless you complete the reverse proxy
setup. Without this configuration, external Git providers cannot send webhooks
to your instance.
Set Up Reverse Proxy (For Production)
For webhooks and external access, configure Nginx to route your public subdomains to the local services; a hedged sketch follows.
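The sketch below assumes the two subdomains from the Domain Name Setup step and the default ports; TLS is omitted and should be added before production (see the complete deployment guide for SSL setup):

```nginx
# Sketch only - server names and ports follow the defaults used in this guide
server {
    listen 80;
    server_name kodus-web.yourdomain.com;

    location / {
        proxy_pass http://localhost:3000;   # Kodus Web App
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

server {
    listen 80;
    server_name kodus-api.yourdomain.com;

    location / {
        proxy_pass http://localhost:3001;   # Orchestrator API (receives Git webhooks)
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```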
Verify Together AI Integration
Beyond the basic installation check, confirm that Together AI itself is reachable and responding:
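A hedged way to check this from the host; the container name in the log command is an assumption, so use docker ps to find the actual one:

```bash
# Confirm the server can reach Together AI with your key (lists available models)
curl -s https://api.together.xyz/v1/models \
  -H "Authorization: Bearer $TOGETHER_API_KEY" | head

# Look for Together AI calls or errors in the orchestrator logs
# (container name assumed - check `docker ps` for the real one)
docker logs kodus-orchestrator --tail 100 | grep -i together
```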
For detailed information about SSL setup, monitoring, and advanced configurations, see our complete deployment guide.
Troubleshooting
API Key Issues
- Verify your API key is correct and active in the Together AI Console
- Check if you have sufficient credits in your Together AI account
- Ensure there are no extra spaces in your .env file
- New accounts receive $1 in free credits
Model Not Found
- Check if the model name is correctly spelled in your configuration
- Verify the model is available in Together AI’s current model library
- Try with a different model from our recommended list
- Check the Together AI models documentation
Connection Errors
- Verify your server has internet access to reach api.together.xyz
- Check if there are any firewall restrictions
- Review the orchestrator logs for detailed error messages
- Ensure you’re using the correct API endpoint
Rate Limiting
- Together AI provides generous rate limits (up to 6000 requests/min for LLMs)
- Check your current usage in the Together AI dashboard
- Consider upgrading to a higher tier for increased limits
- Monitor your usage patterns to optimize API calls