
How Together AI Works

Together AI lets you run leading open-source models with just a few lines of code. The platform offers fast inference, an OpenAI-compatible API, and access to cutting-edge models such as Llama 4 and DeepSeek. It is built for developers who need reliable, scalable AI infrastructure without the complexity.

Recommended Models

We recommend strong coding models with large context windows and competitive pricing.
For the latest information, visit the Together AI pricing page.
Model | Pricing (per 1M tokens) | Context window
Llama 4 Maverick (recommended) | $0.27 input / $0.85 output | ~128k tokens
DeepSeek-V3 | $1.25 | ~128k tokens
Llama 3.1 70B Turbo | $0.88 | ~128k tokens
Qwen 2.5 72B | $1.20 | ~128k tokens
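If you want to double-check current model names and availability before picking one, Together AI exposes an OpenAI-compatible model listing endpoint. A minimal sketch, assuming you already have an API key (see the next section) exported as TOGETHER_API_KEY:

# Lists the models available to your Together AI account
curl -s https://api.together.xyz/v1/models \
  -H "Authorization: Bearer $TOGETHER_API_KEY"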

Create an API Key

A Together AI account is required to create an API key.
Go straight to the Together AI console to create a new API key, or follow these steps:
  1. Create an account at api.together.ai, or sign in if you already have one
  2. On the main dashboard, scroll down to the "Manage Account" section
  3. In the "API Keys" card, click the "Manage Keys" button
  4. Click the "Add Key" button
  5. Give it a name, such as 'Kodus' or any other descriptive name
  6. Copy your API key and you're ready to go!
New accounts come with $1 in credit, so you can get started for free.
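Before wiring the key into Kodus, you can sanity-check it with a single request to Together AI's OpenAI-compatible chat completions endpoint. A minimal sketch, assuming the key is exported as TOGETHER_API_KEY and using the model name from the table above (model names can change, so check the pricing page if it is rejected):

curl -s https://api.together.xyz/v1/chat/completions \
  -H "Authorization: Bearer $TOGETHER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "meta-llama/Meta-Llama-4-Maverick-Instruct", "messages": [{"role": "user", "content": "Say hello"}]}'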

如何使用

系统要求

  • Docker (latest stable version)
  • Node.js (latest LTS version)
  • Yarn or NPM (latest stable version)
  • Domain name or fixed IP (for external deployments)
The following ports must be available:
  • 3000: Kodus Web App
  • 3001: API
  • 3332: Webhooks
  • 5672, 15672, 15692: RabbitMQ (AMQP, management, metrics)
  • 3101: MCP Manager (API, metrics)
  • 5432: PostgreSQL
  • 27017: MongoDB
Internet access is required if you plan to connect cloud Git services (GitHub, GitLab, Bitbucket) or cloud LLM providers (OpenAI, Anthropic, etc.). For self-hosted Git tools on your internal network and local/self-hosted LLMs, outbound internet access is optional.
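Before installing, you can check that none of these ports are already taken on the host. A minimal sketch using ss (available on most Linux distributions), with the port list matching the requirements above:

# Any output here means the port is already in use and will conflict with Kodus
ss -tln | grep -E ':(3000|3001|3332|3101|5672|15672|15692|5432|27017)\b'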

Domain Setup (Optional)

If you're planning to integrate Kodus with cloud-based Git providers (GitHub, GitLab, or Bitbucket), you'll need public-facing URLs for both the Kodus Web App and its API. This allows your server to receive webhooks for proper Code Review functionality and ensures correct application behavior. We recommend setting up two subdomains:
  • One for the Web Application, e.g., kodus-web.yourdomain.com.
  • One for the API, e.g., kodus-api.yourdomain.com.
Webhooks are handled by a separate service (port 3332). You can either:
  • use a dedicated webhook subdomain, e.g. kodus-webhooks.yourdomain.com, or
  • keep using the API domain and have your reverse proxy forward paths such as /github/webhook and /gitlab/webhook to the webhook service.
Both subdomains should have DNS A records pointing to your server's IP address. Later in this guide, we will configure a reverse proxy (Nginx) to route requests to these subdomains to the correct internal services. This setup is essential for full functionality, including webhooks and authentication.
Note: If you're only connecting to self-hosted Git tools on your network and do not require public access or webhooks, you might be able to use a simpler setup, but this guide focuses on public-facing deployments.
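Once the DNS A records are in place, you can confirm that both subdomains resolve to your server before moving on. A minimal sketch using dig, with the example subdomains from above:

# Each command should print your server's public IP address
dig +short kodus-web.yourdomain.com
dig +short kodus-api.yourdomain.com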

Setup

1. Clone the installer repository

git clone https://github.com/kodustech/kodus-installer.git
cd kodus-installer
2. Copy the example environment file

cp .env.example .env
3. Generate secure keys for the required environment variables

./generate-keys.sh
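If you want to see what kind of values are expected, or to generate the secrets by hand, openssl can produce them. A minimal sketch (the exact keys produced by generate-keys.sh may differ, so treat this as an approximation):

# Random base64 secret, suitable for the JWT / NextAuth variables
openssl rand -base64 32
# Random hex key, suitable for API_CRYPTO_KEY
openssl rand -hex 32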
4. Edit the environment file

Edit .env with your values using your preferred text editor.
nano .env
See Environment Variables Configuration for details.
5. Run the installer

./scripts/install.sh
6. Success 🎉

When complete, Kodus Services should be running on your machine. You can verify your installation using the following script:
./scripts/doctor.sh
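For a closer look than the doctor script gives, you can also check the containers directly. A minimal sketch, assuming the installer started the services with Docker Compose from the installer directory:

# Every Kodus service should report an "Up" / "running" status
docker compose ps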
7. Access the web interface

Once you access the web interface for the first time, you'll need to:
  1. Create your admin account - This will be the first user with full system access
  2. Configure your Git provider - Connect GitHub, GitLab, or Bitbucket following the on-screen instructions
  3. Select repositories for analysis - Choose which code repositories Kody will review
See the Quickstart Guide for detailed steps.

Configure Together AI in the Environment File

Edit your .env file and configure the core settings. For the LLM integration, use Together AI in fixed mode:
# Core system settings (update with your domain)
WEB_HOSTNAME_API="kodus-api.yourdomain.com"
WEB_PORT_API=443
NEXTAUTH_URL="https://kodus-web.yourdomain.com"

# Security keys (generate with the openssl commands above)
WEB_NEXTAUTH_SECRET="your-generated-secret"
WEB_JWT_SECRET_KEY="your-generated-secret"
API_CRYPTO_KEY="your-generated-hex-key"
API_JWT_SECRET="your-generated-secret"
API_JWT_REFRESHSECRET="your-generated-secret"

# Database configuration
API_PG_DB_PASSWORD="your-secure-db-password"
API_MG_DB_PASSWORD="your-secure-db-password"

# Together AI configuration (fixed mode)
API_LLM_PROVIDER_MODEL="meta-llama/Meta-Llama-4-Maverick-Instruct"  # Choose your preferred model
API_OPENAI_FORCE_BASE_URL="https://api.together.xyz/v1"             # Together AI API URL
API_OPEN_AI_API_KEY="your-together-api-key"                         # Your Together AI API key

# Git provider webhook (pick your provider)
API_GITHUB_CODE_MANAGEMENT_WEBHOOK="https://kodus-api.yourdomain.com/github/webhook"
# or API_GITLAB_CODE_MANAGEMENT_WEBHOOK="https://kodus-api.yourdomain.com/gitlab/webhook"
# or GLOBAL_BITBUCKET_CODE_MANAGEMENT_WEBHOOK="https://kodus-api.yourdomain.com/bitbucket/webhook"
The webhook URLs must point to the Webhooks service (port 3332). You can either use a dedicated webhook domain or have your reverse proxy forward the /.../webhook paths to port 3332.
Fixed mode is a great fit for Together AI because it offers an OpenAI-compatible API with competitive pricing and access to cutting-edge open-source models.
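Before starting the services, you can confirm that the configured key and model name work together by sending one request built straight from your .env values. A minimal sketch that assumes the variable names above and that curl is installed:

# Load the .env values into the shell and send a single test request to Together AI
set -a; source .env; set +a
curl -s "$API_OPENAI_FORCE_BASE_URL/chat/completions" \
  -H "Authorization: Bearer $API_OPEN_AI_API_KEY" \
  -H "Content-Type: application/json" \
  -d "{\"model\": \"$API_LLM_PROVIDER_MODEL\", \"messages\": [{\"role\": \"user\", \"content\": \"ping\"}]}"

A successful JSON response confirms the credentials and model name; an error body from Together AI usually names the exact problem (invalid key, unknown model, insufficient credits).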

Run the Installation Script

Looking for more control? Check out our docker-compose file for manual deployment options.
Set the proper permissions for the installation script:
chmod +x scripts/install.sh
Run the script:
./scripts/install.sh

What the Installer Does

Our installer automates several important steps:
  • Verifies Docker installation
  • Creates networks for Kodus services
  • Clones repositories and configures environment files
  • Runs docker-compose to start all services
  • Executes database migrations
  • Seeds initial data
🎉 Success! When complete, the Kodus Web App and backend services (API, worker, webhooks, MCP manager) should be running on your machine. You can verify your installation by visiting http://localhost:3000 - you should see the Kodus Web Application interface.
Code Review features will not work yet unless you complete the reverse proxy setup. Without this configuration, external Git providers cannot send webhooks to your instance.
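A quick command-line smoke test can also confirm the services are answering locally. A minimal sketch, assuming the default ports:

# Both should print an HTTP status code rather than a connection error
curl -s -o /dev/null -w "web: %{http_code}\n" http://localhost:3000
curl -s -o /dev/null -w "api: %{http_code}\n" http://localhost:3001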

Set Up a Reverse Proxy (for Production)

For webhooks and external access, configure Nginx:
# Web App (port 3000)
server {
    listen 80;
    server_name kodus-web.yourdomain.com;
    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

# API (port 3001)
server {
    listen 80;
    server_name kodus-api.yourdomain.com;
    location ~ ^/(github|gitlab|bitbucket|azure-repos)/webhook {
        proxy_pass http://localhost:3332;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    location / {
        proxy_pass http://localhost:3001;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
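These server blocks listen on plain HTTP. For a production setup you will normally also want TLS on both subdomains; a minimal sketch using certbot's Nginx plugin (assumes certbot with the Nginx plugin is installed and uses the example subdomains from above), with full details in the deployment guide referenced later:

sudo certbot --nginx -d kodus-web.yourdomain.com -d kodus-api.yourdomain.com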

Verify the Together AI Integration

In addition to the basic installation checks, confirm that Together AI is working properly:
# Check the Together AI API connection specifically
docker-compose logs api worker | grep -i together
For details on SSL setup, monitoring, and advanced configuration, see our full deployment guide.

Troubleshooting

  • Verify in the Together AI console that your API key is correct and active
  • Check that your Together AI account has sufficient credits
  • Make sure there is no extra whitespace in your .env file
  • New accounts receive $1 of free credit
  • Check that the model name in your configuration is spelled correctly
  • Verify that the model is available in Together AI's current model library
  • Try another model from our recommended list
  • Review the Together AI model documentation
  • Verify that your server has internet access to reach api.together.xyz (see the connectivity check sketched after this list)
  • Check for any firewall restrictions
  • Review the API/worker logs for detailed error messages
  • Make sure you are using the correct API endpoint
  • Together AI offers generous rate limits (up to 6,000 requests per minute for LLMs)
  • Check your current usage in the Together AI dashboard
  • Consider upgrading to a higher tier for higher limits
  • Monitor your usage patterns to optimize API calls
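If you suspect a connectivity problem, a quick check from the host running Kodus usually narrows it down. A minimal sketch (assumes curl and getent are available on the server):

# DNS resolution for the Together AI endpoint
getent hosts api.together.xyz
# HTTPS reachability; even a 401 here proves the endpoint can be reached
curl -sS -o /dev/null -w "HTTP %{http_code}\n" https://api.together.xyz/v1/models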