
How to Migrate from OpenRouter to Helicone AI Gateway

Juliette Chevalier · September 12, 2025


Why Switch to Helicone AI Gateway

Let's be honest: OpenRouter is a great tool. It's simple to use, has a great UI, and routes you to any model using the OpenAI API.

So we thought: why not build an open-source alternative with Helicone's observability built in, and charge 0% markup?

That's what we did. The Helicone AI Gateway is an OpenRouter alternative with all the features you love, but with 0% markup fees:

  • 0% markup fees - only pay exactly what providers charge
  • Automatic fallbacks - when one provider is down, route to another instantly
  • Built-in observability - logs, traces, and metrics by default without extra setup
  • Cost optimization - automatically route to the cheapest, most reliable provider for each model, always rate-limit aware
  • Passthrough billing & BYOK support - let us handle auth for you (request access here!) or bring your own keys
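To make the automatic-fallback idea concrete, here is a minimal sketch of the pattern the gateway applies for you server-side: try a list of models in order and move to the next when one fails. The `withFallback` helper and `callModel` callback are illustrative assumptions for this post, not Helicone's internal implementation.

```javascript
// Illustrative sketch of gateway-style fallback: try each model in order
// until one succeeds. The gateway does the equivalent of this for you,
// with rate-limit awareness, so your code never has to.
async function withFallback(models, callModel) {
  let lastError;
  for (const model of models) {
    try {
      return await callModel(model); // first successful response wins
    } catch (err) {
      lastError = err; // provider down or rate-limited: try the next one
    }
  }
  throw lastError; // every model failed
}
```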

Migration Guide: OpenRouter → Helicone AI Gateway

Step 1: Set up your API keys

Set up your provider API keys and get your Helicone API key from the Helicone dashboard.
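For example, you can export the key as an environment variable so the SDK snippet in Step 2 can read it (the variable name matches the code below; the value is a placeholder for your own key):

```shell
# Make your Helicone API key available to the SDK via process.env
export HELICONE_API_KEY="<your-helicone-api-key>"
```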

Step 2: Edit your code

Replace OpenRouter's endpoint with Helicone AI Gateway:

// Both snippets use the official OpenAI SDK
import OpenAI from "openai";

// ❌ Before: OpenRouter
const client = new OpenAI({
  baseURL: "https://openrouter.ai/api/v1",
  apiKey: "sk-or-v1-..." // OpenRouter key
});

// ✅ After: Helicone AI Gateway
const client = new OpenAI({
  baseURL: "https://ai-gateway.helicone.ai",
  apiKey: process.env.HELICONE_API_KEY // Helicone key
});

// Query the model
const response = await client.chat.completions.create({
  model: "claude-3.5-sonnet", // or find 100+ other models at https://helicone.ai/models
  messages: [{ role: "user", content: "Hello!" }]
});

That's it. No other code changes needed.

Step 3: Review your requests in the Helicone dashboard

Go to Helicone and see your requests logged automatically.

Migration Gotchas to Watch For

  • Different model names: Some models use slightly different naming conventions. Helicone has a model registry to help you find the exact model name.
  • Model providers: Helicone routes each model to the cheapest, most reliable provider, so you don't name the provider in the request the way OpenRouter requires (e.g., OpenRouter: anthropic/claude-3.5-sonnet vs. Helicone: claude-3.5-sonnet).
  • Provider keys: You can use Helicone's auth (request access here!) or bring your own keys. To use BYOK, add your provider keys in the Helicone dashboard.
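If you have many call sites to migrate, a tiny helper can strip OpenRouter's provider/ prefix for you during the transition. This is a migration-convenience sketch, not part of any Helicone SDK; it assumes the Helicone name is simply the OpenRouter name without the provider segment, which you should verify per model against the model registry, since some names differ.

```javascript
// Strip an OpenRouter-style "provider/model" prefix down to the bare model id.
// Check the result against Helicone's model registry: naming can differ.
function toHeliconeModel(openRouterModel) {
  const slash = openRouterModel.indexOf("/");
  return slash === -1 ? openRouterModel : openRouterModel.slice(slash + 1);
}

// toHeliconeModel("anthropic/claude-3.5-sonnet") → "claude-3.5-sonnet"
// toHeliconeModel("claude-3.5-sonnet")           → "claude-3.5-sonnet" (unchanged)
```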

The Bottom Line

As your AI usage grows, cost becomes more and more important.

The Helicone AI Gateway gives you the control you need to optimize your costs while keeping your infrastructure simple and reliable. With observability built in, you never miss a request or an error again!

Ready to migrate?

Need help? Join our Discord where engineers share about building with AI.

Want to contribute? The Helicone AI Gateway is open-source on GitHub. Help us build the future of AI infrastructure.