
How Fireworks AI works

Fireworks AI is a high-speed inference platform for generative AI. It provides serverless access to popular open-source models such as DeepSeek, Llama, Qwen, and Mistral through an OpenAI-compatible API, with high throughput and low latency, so you get reliable AI infrastructure without managing GPUs yourself. The models below are good choices for code review workloads, offering competitive pricing and large context windows; a minimal request example follows the table.
For the most up-to-date pricing, please visit Fireworks AI’s pricing page.
Model                           Pricing (per 1M tokens)   Context Window
Llama 4 Maverick (recommended)  $0.22 / $0.22 / $0.88     ~131k tokens
Llama 4 Scout                   $0.15 / $0.15 / $0.60     ~131k tokens
DeepSeek V3                     $0.90                     ~128k tokens
Qwen3 235B                      $0.22 / $0.22 / $0.88     ~131k tokens
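To see what the OpenAI-compatible API looks like in practice, here is a minimal request sketch against Fireworks AI’s inference endpoint, using the same base URL and model ID configured for Kodus later in this guide (FIREWORKS_API_KEY is a placeholder for your own key):
# Minimal chat completion request; a JSON response confirms the model is reachable
curl https://api.fireworks.ai/inference/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $FIREWORKS_API_KEY" \
  -d '{
    "model": "accounts/fireworks/models/llama4-maverick-instruct-basic",
    "messages": [{"role": "user", "content": "Say hello"}],
    "max_tokens": 32
  }'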

Creating an API Key

A Fireworks AI account is required to create an API key.
Go directly to the Fireworks AI Console to create a new API key, or follow these steps:
  1. Visit app.fireworks.ai and create an account or sign in
  2. Once logged in, navigate to the API Keys page in your account settings
  3. Click the “Create API Key” button
  4. Give your key a descriptive name (e.g., “Kodus” or any name you prefer)
  5. Click “Create” to generate the key
  6. Copy the API key immediately and save it somewhere secure; you won’t be able to see it again
New accounts come with $1 in free credits to get started with your projects.
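Before wiring the key into Kodus, you can sanity-check it directly. A quick sketch, assuming the standard OpenAI-compatible model-listing endpoint at the same base URL used later in this guide:
# A JSON list of models means the key is valid; a 401/403 means it is not
curl -s https://api.fireworks.ai/inference/v1/models \
  -H "Authorization: Bearer $FIREWORKS_API_KEY"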

How to use

System Requirements

  • Docker (latest stable version)
  • Node.js (latest LTS version)
  • Yarn or NPM (latest stable version)
  • Domain name or fixed IP (for external deployments)
Required ports (see the availability check after the note below):
  • 3000: Kodus Web App
  • 3001: API
  • 3332: Webhooks
  • 5672, 15672, 15692: RabbitMQ (AMQP, management, metrics)
  • 3101: MCP Manager (API, metrics)
  • 5432: PostgreSQL
  • 27017: MongoDB
Internet access is only required if you plan to connect with cloud-based Git services like GitHub, GitLab, or Bitbucket. For self-hosted Git tools within your network, external internet access is optional.
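Before installing, you can check that none of these ports are already in use on the host. A quick sketch for Linux using ss (adapt as needed for your OS):
# Lists existing listeners on the ports Kodus needs; empty output means they are free
sudo ss -tlnp | grep -E ':(3000|3001|3332|3101|5432|27017|5672|15672|15692) '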

Domain Name Setup (Optional)

If you're planning to integrate Kodus with cloud-based Git providers (GitHub, GitLab, or Bitbucket), you'll need public-facing URLs for both the Kodus Web App and its API. This allows your server to receive webhooks for proper Code Review functionality and ensures correct application behavior. We recommend setting up two subdomains:
  • One for the Web Application, e.g., kodus-web.yourdomain.com.
  • One for the API, e.g., kodus-api.yourdomain.com.
Both subdomains should have DNS A records pointing to your server's IP address. Later in this guide, we will configure a reverse proxy (Nginx) to route requests to these subdomains to the correct internal services. This setup is essential for full functionality, including webhooks and authentication.
Note: If you're only connecting to self-hosted Git tools on your network and do not require public access or webhooks, you might be able to use a simpler setup, but this guide focuses on public-facing deployments.
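Once the A records are in place, you can confirm they resolve before continuing (the hostnames below are the examples from this guide; substitute your own):
# Both lookups should print your server's public IP address
dig +short kodus-web.yourdomain.com
dig +short kodus-api.yourdomain.com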

Setup

Step 1: Clone the installer repository

git clone https://github.com/kodustech/kodus-installer.git
cd kodus-installer

Step 2: Copy the example environment file

cp .env.example .env

Step 3: Generate secure keys for the required environment variables

./generate-keys.sh
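If you prefer not to use the helper script, values of the same kind can be generated manually with openssl. This is only a sketch: the mapping of commands to variables (and the key lengths) is an assumption, and generate-keys.sh remains the authoritative source:
# Manual alternative to generate-keys.sh (assumed lengths; adjust if your setup differs)
openssl rand -base64 32   # e.g., WEB_NEXTAUTH_SECRET, WEB_JWT_SECRET_KEY, API_JWT_SECRET, API_JWT_REFRESHSECRET
openssl rand -hex 32      # e.g., API_CRYPTO_KEY (the "hex key" referenced below)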

Step 4: Edit the environment file

Edit .env with your values using your preferred text editor.
nano .env
See Environment Variables Configuration for detailed instructions.

Step 5: Run the installer

./scripts/install.sh

Step 6: Success 🎉

When complete, the Kodus services should be running on your machine. You can verify your installation using the following script:
./scripts/doctor.sh
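Besides doctor.sh, you can also check the running stack directly (a quick sketch; exact container names depend on the installer’s compose setup):
# The Kodus containers started by the installer should appear with status "Up"
docker ps
# An HTTP response from port 3000 confirms the web app is answering locally
curl -I http://localhost:3000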

Step 7: Access the web interface

Once you access the web interface for the first time, you'll need to:
  1. Create your admin account - This will be the first user with full system access
  2. Configure your Git provider - Connect GitHub, GitLab, or Bitbucket following the on-screen instructions
  3. Select repositories for analysis - Choose which code repositories Kody will review
For detailed steps on the initial configuration process, refer to our Getting Started Guide.

Configure Fireworks AI in Environment File

Edit your .env file and configure the core settings. For LLM Integration, use Fireworks AI in Fixed Mode:
# Core System Settings (update with your domains)
WEB_HOSTNAME_API="kodus-api.yourdomain.com"    
WEB_PORT_API=443                               
NEXTAUTH_URL="https://kodus-web.yourdomain.com"

# Security Keys (generate with openssl commands above)
WEB_NEXTAUTH_SECRET="your-generated-secret"
WEB_JWT_SECRET_KEY="your-generated-secret"
API_CRYPTO_KEY="your-generated-hex-key"
API_JWT_SECRET="your-generated-secret"
API_JWT_REFRESHSECRET="your-generated-secret"

# Database Configuration
API_PG_DB_PASSWORD="your-secure-db-password"
API_MG_DB_PASSWORD="your-secure-db-password"

# Fireworks AI Configuration (Fixed Mode) 
API_LLM_PROVIDER_MODEL="accounts/fireworks/models/llama4-maverick-instruct-basic"  # Choose your preferred model
API_OPENAI_FORCE_BASE_URL="https://api.fireworks.ai/inference/v1"                  # Fireworks AI API URL  
API_OPEN_AI_API_KEY="your-fireworks-api-key"                                       # Your Fireworks AI API Key

# Git Provider Webhooks (choose your provider)
API_GITHUB_CODE_MANAGEMENT_WEBHOOK="https://kodus-api.yourdomain.com/github/webhook"
# or API_GITLAB_CODE_MANAGEMENT_WEBHOOK="https://kodus-api.yourdomain.com/gitlab/webhook"
# or GLOBAL_BITBUCKET_CODE_MANAGEMENT_WEBHOOK="https://kodus-api.yourdomain.com/bitbucket/webhook"
Fixed Mode is a good fit for Fireworks AI because Fireworks exposes an OpenAI-compatible API: Kodus only needs the base URL, the model ID, and your API key, and you get fast inference on open-source models without any additional setup.
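The example above pins Llama 4 Maverick; any other model from the pricing table can be used by swapping the model ID. A sketch (the Scout model ID below is an assumption; confirm the exact slug in the Fireworks AI model library):
# Example: use Llama 4 Scout instead (model ID assumed; verify in the Fireworks model library)
API_LLM_PROVIDER_MODEL="accounts/fireworks/models/llama4-scout-instruct-basic"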

Run the Installation Script

Looking for more control? Check out our docker-compose file for manual deployment options.
Set the proper permissions for the installation script:
chmod +x scripts/install.sh
Run the script:
./scripts/install.sh

What the Installer Does

Our installer automates several important steps:
  • Verifies Docker installation
  • Creates networks for Kodus services
  • Clones repositories and configures environment files
  • Runs docker-compose to start all services
  • Executes database migrations
  • Seeds initial data
🎉 Success! When complete, the Kodus Web App and backend services (API, worker, webhooks, MCP manager) should be running on your machine. You can verify your installation by visiting http://localhost:3000 - you should see the Kodus Web Application interface.
Code Review features will not work yet unless you complete the reverse proxy setup. Without this configuration, external Git providers cannot send webhooks to your instance.

Set Up Reverse Proxy (For Production)

For webhooks and external access, configure Nginx:
# Web App (port 3000)
server {
    listen 80;
    server_name kodus-web.yourdomain.com;
    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

# API (port 3001)  
server {
    listen 80;
    server_name kodus-api.yourdomain.com;
    location ~ ^/(github|gitlab|bitbucket|azure-repos)/webhook {
        proxy_pass http://localhost:3332;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    location / {
        proxy_pass http://localhost:3001;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
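After saving the server blocks, reload Nginx and, for production, add TLS certificates. A sketch assuming a standard Nginx installation with certbot available (adjust paths and tooling to your environment):
# Validate and reload the Nginx configuration
sudo nginx -t && sudo systemctl reload nginx

# One common way to add HTTPS (Let's Encrypt via certbot's nginx plugin)
sudo certbot --nginx -d kodus-web.yourdomain.com -d kodus-api.yourdomain.com

# Smoke test: both hostnames should now answer through the proxy
curl -I http://kodus-web.yourdomain.com
curl -I http://kodus-api.yourdomain.com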

Verify Fireworks AI Integration

In addition to the basic installation verification, confirm that Fireworks AI is working:
# Verify Fireworks AI API connection specifically
docker-compose logs api worker | grep -i fireworks
For detailed information about SSL setup, monitoring, and advanced configurations, see our complete deployment guide.

Troubleshooting

API key and billing issues:
  • Verify your API key is correct and active in the Fireworks AI Console
  • Check that you have sufficient credits in your Fireworks AI account
  • Ensure there are no extra spaces around the key in your .env file
  • New accounts receive $1 in free credits to get started

Model issues:
  • Check that the model name is spelled correctly in your configuration
  • Verify the model is available in Fireworks AI’s current model library
  • Try a different model from our recommended list above
  • Check the Fireworks AI models documentation

Connection issues:
  • Verify your server has internet access to reach api.fireworks.ai (see the connectivity check after this list)
  • Check whether any firewall restrictions are blocking outbound traffic
  • Review the API/worker logs for detailed error messages
  • Ensure you’re using the correct API endpoint (https://api.fireworks.ai/inference/v1)

Performance:
  • Fireworks AI provides industry-leading speeds with minimal latency
  • Check your network connectivity for optimal performance
  • Consider using dedicated deployments for enterprise workloads
  • Monitor your usage patterns to optimize API calls

Rate limits:
  • Fireworks AI provides high rate limits on serverless infrastructure
  • Check your current usage in the Fireworks AI dashboard
  • Consider upgrading to dedicated deployments for higher throughput
  • Contact Fireworks AI support for enterprise rate limit adjustments
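For connection issues, a quick way to separate network problems from configuration problems is to test whether the Fireworks endpoint is reachable at all, from the host and, if needed, from inside the API container (service name taken from the log command above; curl being installed inside the container is an assumption):
# From the host: any HTTP status line means the endpoint is reachable;
# timeouts or DNS errors point to network or firewall problems
curl -sI https://api.fireworks.ai/inference/v1/models

# From inside the API container (only if curl is available there)
docker-compose exec api curl -sI https://api.fireworks.ai/inference/v1/models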