A blazingly fast AI proxy gateway

Butter sits between your application and AI providers, offering a unified OpenAI-compatible API with multi-provider routing, automatic failover, and sub-50μs overhead. Written in Go with a single dependency.


Features

Everything you need to route AI traffic with confidence.

OpenAI-Compatible API

Drop-in replacement for any OpenAI SDK client. Just change the base URL.

🎯

Multi-Provider Routing

Route models to OpenAI, Anthropic, and OpenRouter with priority or round-robin strategies.
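The two strategies can be sketched in a few lines of Python. This is an illustration of the routing idea only, not Butter's internal implementation or config schema:

```python
import itertools

providers = ["openai", "openrouter"]

def pick_priority(candidates, healthy):
    """Priority: always take the first healthy provider in list order."""
    for p in candidates:
        if healthy.get(p, True):
            return p
    return None

# Round-robin: rotate through providers on successive requests.
rr = itertools.cycle(providers)

def pick_round_robin():
    return next(rr)

print(pick_priority(providers, {"openai": False}))  # openrouter
```

With priority routing, the second provider only sees traffic when the first is unhealthy; round-robin spreads load evenly regardless of health.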

🔃

Streaming & SSE

Full streaming support with immediate per-chunk flush via SSE relay.
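On the wire, each relayed SSE event is a `data: <json>` line, with `data: [DONE]` terminating the stream (the standard OpenAI streaming format). A minimal sketch of consuming that by hand, without an SDK:

```python
import json

def parse_sse_chunks(lines):
    """Yield decoded JSON chunks from 'data:' lines; stop at [DONE]."""
    for line in lines:
        if not line.startswith("data: "):
            continue
        payload = line[len("data: "):]
        if payload == "[DONE]":
            return
        yield json.loads(payload)

# Two sample chunks as a provider would emit them:
sample = [
    'data: {"choices":[{"delta":{"content":"Hel"}}]}',
    'data: {"choices":[{"delta":{"content":"lo"}}]}',
    "data: [DONE]",
]
text = "".join(c["choices"][0]["delta"]["content"] for c in parse_sse_chunks(sample))
print(text)  # Hello
```

In practice you would just pass `stream=True` to an OpenAI SDK client pointed at Butter and let it do this parsing for you.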

🛡️

Automatic Failover

Retries on configurable status codes with exponential backoff across providers.
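The failover loop amounts to: try each provider in order, retry on a configured set of status codes, and back off exponentially between attempts. A Python sketch with illustrative names (the retry set and attempt counts here are examples, not Butter's defaults):

```python
import time

RETRY_STATUSES = {429, 500, 502, 503}

def dispatch_with_failover(providers, send, max_attempts=3, base_delay=0.1):
    """Try each provider; back off exponentially on retryable statuses."""
    for provider in providers:
        for attempt in range(max_attempts):
            status, body = send(provider)
            if status not in RETRY_STATUSES:
                return provider, status, body
            time.sleep(base_delay * (2 ** attempt))  # 0.1s, 0.2s, 0.4s, ...
    return None, status, body

# A fake transport: "openai" is rate-limited, "openrouter" succeeds.
def fake_send(provider):
    return (429, "") if provider == "openai" else (200, "ok")

print(dispatch_with_failover(["openai", "openrouter"], fake_send, base_delay=0))
# ('openrouter', 200, 'ok')
```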

🔑

Key Rotation

Weighted random key selection with per-key model allowlists.
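The selection logic can be sketched as a weighted random choice over the keys whose allowlist includes the requested model. Field names here are illustrative, not Butter's config schema:

```python
import random

keys = [
    {"key": "sk-a", "weight": 3, "models": {"gpt-4o-mini", "gpt-4o"}},
    {"key": "sk-b", "weight": 1, "models": {"gpt-4o-mini"}},
]

def pick_key(keys, model, rng=random):
    """Weighted random choice among keys whose allowlist includes the model."""
    eligible = [k for k in keys if model in k["models"]]
    if not eligible:
        return None
    weights = [k["weight"] for k in eligible]
    return rng.choices(eligible, weights=weights, k=1)[0]["key"]

print(pick_key(keys, "gpt-4o"))  # sk-a: the only key allowing gpt-4o
```

Weighting lets you skew traffic toward keys with higher quotas, while the allowlist keeps restricted keys away from models they may not call.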

🔌

Plugin System

Ordered hook chains for request/response processing with fail-open design. Built-in request logger and rate limiter included.
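The fail-open idea: hooks run in order, may mutate the request or short-circuit with an early response, and a hook that throws is skipped rather than failing the request. A hypothetical sketch, not Butter's actual plugin API:

```python
def run_pre_hooks(hooks, request):
    """Run hooks in order; return an early response if one short-circuits."""
    for hook in hooks:
        try:
            result = hook(request)
        except Exception:
            continue  # fail-open: a broken plugin never blocks traffic
        if result is not None:
            return result  # short-circuit with an early response
    return None  # all hooks passed; request continues to the provider

def add_header(req):
    req["headers"]["X-Butter"] = "1"  # mutate in place, return None to continue

def broken(req):
    raise RuntimeError("plugin bug")

req = {"headers": {}}
print(run_pre_hooks([broken, add_header], req), req["headers"])
# None {'X-Butter': '1'}
```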

🚷

Rate Limiting

Built-in token bucket rate limiter with global or per-IP modes. Plugins can short-circuit requests before they reach providers.
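A token bucket admits a burst up to its capacity, then refills at a steady rate; per-IP mode would simply keep one bucket per client address. A minimal sketch with illustrative parameter names:

```python
import time

class TokenBucket:
    def __init__(self, rate, burst):
        self.rate = rate            # tokens refilled per second
        self.burst = burst          # maximum bucket size
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # the plugin would short-circuit here with a 429

bucket = TokenBucket(rate=10, burst=2)
print([bucket.allow() for _ in range(3)])  # burst of 2 allowed, third denied
```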

Coming Soon

WASM Plugin Sandbox

Sandboxed WASM plugins via Extism for external custom logic in any language.

Coming Soon

Response Caching

In-memory LRU and Redis caching to reduce costs and latency.

Coming Soon

Full Observability

OpenTelemetry tracing and Prometheus metrics for production monitoring.

Quick Start

Up and running in under a minute.

Install

Download the latest binary from GitHub Releases, or build from source:

git clone https://github.com/temikus/butter.git
cd butter
go build -o pkg/bin/butter ./cmd/butter/

Configure

cp config.example.yaml config.yaml
export OPENAI_API_KEY="sk-..."
export OPENROUTER_API_KEY="sk-or-v1-..."

Run

./pkg/bin/butter -config config.yaml
# {"level":"INFO","msg":"butter listening","address":":8080"}

Send a request

curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openai/gpt-4o-mini",
    "messages": [{"role": "user", "content": "Say hello!"}]
  }'

Drop-in SDK Replacement

Works with any OpenAI-compatible client. Just change the base URL.

Python

from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",
    api_key="unused",  # Butter uses its own configured keys
)

response = client.chat.completions.create(
    model="openai/gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)

Node.js

import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "http://localhost:8080/v1",
  apiKey: "unused",
});

const completion = await client.chat.completions.create({
  model: "openai/gpt-4o-mini",
  messages: [{ role: "user", content: "Hello!" }],
});
console.log(completion.choices[0].message.content);

Architecture

Minimal layers, maximum throughput.

Your App ──▶ Butter ──▶ OpenAI / Anthropic / OpenRouter / ...
               │
               ├── Unified OpenAI-compatible API
               ├── Automatic failover & retries
               ├── Weighted key rotation
               └── Plugin hooks (pre/post HTTP, LLM, streaming)

Request Flow:
Client → transport.Server (HTTP) → Plugin Chain (pre-hooks)
       → proxy.Engine (routing/dispatch) → provider.Registry → Provider impl
       → Plugin Chain (post-hooks) → Response

Performance Targets

Engineered for negligible overhead.

Metric                                    Target
Per-request overhead (no plugins)         <50μs
Per-request overhead (built-in plugins)   <100μs
Per-request overhead (1 WASM plugin)      <150μs
Streaming TTFB overhead                   <1ms
Memory at idle                            <30MB