Reverse Proxy
Overview
The LockLLM reverse proxy adds security to existing AI applications with minimal code changes. Route requests through LockLLM by changing your SDK base URL, and every prompt is automatically scanned for prompt injection, jailbreaks, and unsafe instructions.
Works with OpenAI, Anthropic, Google, Cohere, and other major LLM providers. The proxy is compatible with existing SDKs and requires no changes to your request format or application logic.
How it works
Instead of sending requests directly to OpenAI or Anthropic, route them through the LockLLM proxy. The proxy scans prompts for security threats, then forwards safe requests to the original provider. Unsafe requests are blocked or flagged based on your configuration.
The proxy preserves the original API interface, so your existing code continues to work without modification. Response format, error handling, and streaming all work exactly as before.
Quick Start
For OpenAI SDK:
import OpenAI from 'openai'

const client = new OpenAI({
  baseURL: 'https://api.lockllm.com/v1/proxy/openai',
  apiKey: 'YOUR_OPENAI_KEY',
  defaultHeaders: {
    'X-LockLLM-Key': 'YOUR_LOCKLLM_KEY'
  }
})

// Use normally - all requests are automatically scanned
const response = await client.chat.completions.create({
  model: 'gpt-5.2',
  messages: [{ role: 'user', content: 'Hello!' }]
})

For Anthropic SDK:
import Anthropic from '@anthropic-ai/sdk'

const client = new Anthropic({
  baseURL: 'https://api.lockllm.com/v1/proxy/anthropic',
  apiKey: 'YOUR_ANTHROPIC_KEY',
  defaultHeaders: {
    'X-LockLLM-Key': 'YOUR_LOCKLLM_KEY'
  }
})

// Use normally - all requests are automatically scanned
const response = await client.messages.create({
  model: 'claude-opus-4-5',
  max_tokens: 1024,
  messages: [{ role: 'user', content: 'Hello!' }]
})

Features
- Drop-in replacement - Works with existing SDKs and code
- Multi-provider support - OpenAI, Anthropic, Google, Cohere, and more
- Automatic scanning - All requests are scanned without additional code
- Configurable blocking - Choose to block, flag, or allow risky prompts
- Streaming support - Works with streaming responses and async requests (a sketch follows this list)
- Error preservation - Original provider errors are passed through unchanged
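Since streaming passes through unchanged, the usual SDK streaming pattern should work as-is through the proxy. A minimal sketch with the OpenAI SDK (the prompt text is illustrative):

import OpenAI from 'openai'

const client = new OpenAI({
  baseURL: 'https://api.lockllm.com/v1/proxy/openai',
  apiKey: 'YOUR_OPENAI_KEY',
  defaultHeaders: { 'X-LockLLM-Key': 'YOUR_LOCKLLM_KEY' }
})

// The proxy scans the prompt before forwarding, then relays the
// provider's event stream back to the client chunk by chunk.
const stream = await client.chat.completions.create({
  model: 'gpt-5.2',
  messages: [{ role: 'user', content: 'Write a haiku about proxies.' }],
  stream: true
})

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? '')
}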
Supported Providers
The reverse proxy supports 20+ LLM providers with transparent scanning:
- OpenAI (GPT-5.2, GPT-5) - api.lockllm.com/v1/proxy/openai
- Anthropic (Claude Opus 4.5, Sonnet 4.5) - api.lockllm.com/v1/proxy/anthropic
- Google Gemini (Gemini 3 Pro, Gemini 3 Flash) - api.lockllm.com/v1/proxy/gemini
- Mistral AI (Mistral 3, Mistral Large 3) - api.lockllm.com/v1/proxy/mistral
- DeepSeek (V3.2) - api.lockllm.com/v1/proxy/deepseek
- xAI Grok (Grok 4) - api.lockllm.com/v1/proxy/xai
- Cohere, OpenRouter, Perplexity, Groq, Together AI, Fireworks AI, Anyscale, Hugging Face, AWS Bedrock, Azure OpenAI, Google Vertex AI
Authentication
The proxy requires two keys: your original provider API key and your LockLLM API key. Pass your provider key as normal, and add your LockLLM key in the X-LockLLM-Key header.
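The same two-key scheme applies if you call the proxy over raw HTTP rather than through an SDK. A sketch using fetch, assuming the proxy base URL composes with the provider's usual path (here OpenAI's /chat/completions, which is how the SDK configuration above resolves it):

// Both keys travel on the same request: the provider key in the
// Authorization header, the LockLLM key in X-LockLLM-Key.
const res = await fetch('https://api.lockllm.com/v1/proxy/openai/chat/completions', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer YOUR_OPENAI_KEY',
    'X-LockLLM-Key': 'YOUR_LOCKLLM_KEY',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    model: 'gpt-5.2',
    messages: [{ role: 'user', content: 'Hello!' }]
  })
})
const data = await res.json()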
Handling Blocked Requests
When a request is blocked due to security threats, the proxy returns a 400 error with details about the detected threats. You can catch this error and handle it appropriately in your application.
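With the OpenAI SDK, any non-2xx response surfaces as an APIError, so a blocked request can be caught like any other 400. A sketch (the exact shape of the threat details in the error body is defined by LockLLM; the handling here is illustrative):

import OpenAI from 'openai'

const client = new OpenAI({
  baseURL: 'https://api.lockllm.com/v1/proxy/openai',
  apiKey: 'YOUR_OPENAI_KEY',
  defaultHeaders: { 'X-LockLLM-Key': 'YOUR_LOCKLLM_KEY' }
})

try {
  await client.chat.completions.create({
    model: 'gpt-5.2',
    messages: [{ role: 'user', content: 'Untrusted user input here' }]
  })
} catch (err) {
  // A LockLLM block is returned as a 400 before the request ever
  // reaches the provider; other errors are passed through unchanged.
  if (err instanceof OpenAI.APIError && err.status === 400) {
    console.warn('Blocked by LockLLM:', err.message)
  } else {
    throw err
  }
}

Note that a provider's own 400 would satisfy the same check; inspecting the error body lets you distinguish a LockLLM block from an ordinary validation error.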
Getting Started
Generate a LockLLM API key in the dashboard and update your SDK base URL; from then on, every request is scanned automatically. Visit the documentation for complete setup guides for all supported providers.