Proxy

Overview

The LockLLM Proxy provides automatic security scanning for LLM API requests with advanced features like custom content policies, smart routing, and AI abuse detection. Route your requests through LockLLM's gateway, and all prompts are scanned before reaching your LLM provider. Compatible with 17+ providers plus custom endpoints.

The proxy acts as a transparent security layer between your application and LLM providers. Requests are scanned in real-time with minimal latency impact. Unsafe prompts are blocked, policy violations are detected, and smart routing optimizes costs by auto-selecting the best model for each task.

How it Works

Configure your LLM client to route requests through the LockLLM proxy endpoint. The proxy intercepts requests, scans the prompt for security threats, and forwards safe requests to the original provider. Unsafe requests are blocked with detailed threat information.

The proxy preserves the original API interface, so your existing code continues to work without modification. All standard features like streaming, function calling, and error handling work exactly as expected.

Supported Providers

The proxy supports 17+ LLM providers:

  • OpenAI (GPT-5.2, GPT-5)
  • Anthropic (Claude Opus 4.5, Sonnet 4.5)
  • Google Gemini (Gemini 3 Pro, Gemini 3 Flash)
  • Mistral AI (Mistral 3, Mistral Large 3)
  • DeepSeek (V3.2)
  • xAI Grok (Grok 4)
  • Cohere, Azure OpenAI, OpenRouter, Perplexity, Groq, Together AI, Fireworks AI, Anyscale, Hugging Face, AWS Bedrock, Google Vertex AI

Quick Start

Update your client configuration to route through the proxy:

const client = new OpenAI({
  apiKey: process.env.LOCKLLM_API_KEY,  // Your LockLLM API key
  baseURL: 'https://api.lockllm.com/v1/proxy/openai'
})

// All requests are automatically scanned
const response = await client.chat.completions.create({
  model: 'gpt-5.2',
  messages: [{ role: 'user', content: userInput }]
})

Features

  • Automatic Scanning: All requests scanned without additional code
  • Multi-Provider Support: Works with 17+ LLM providers plus custom endpoints
  • Minimal Latency: 150-250ms scanning overhead
  • Custom Content Policies: Enforce your own content restrictions on top of our default content moderation
  • Smart Routing: Auto-select optimal models for cost and quality
  • AI Abuse Detection: Protect against bot-generated and automated requests
  • Configurable Actions: Choose to block, flag, or allow risky prompts
  • Streaming Support: Works with streaming responses
  • Transparent Integration: No changes to application logic required

Authentication

Add your provider API keys (OpenAI, Anthropic, etc.) to the LockLLM dashboard, then pass your LockLLM API key in your client configuration. The proxy automatically uses your stored provider keys to forward requests.

Your provider API keys are encrypted and securely stored. You only use your LockLLM API key in your code.

Getting Started

Generate API keys in the dashboard, update your client configuration, and start scanning. Visit the documentation for complete setup guides for all supported providers.