Proxy

Overview

The LockLLM Proxy provides automatic security scanning for LLM API requests. Route your requests through LockLLM's gateway, and all prompts are scanned for injection attacks before reaching your LLM provider. The proxy is compatible with 20+ providers, including OpenAI, Anthropic, and Google Gemini.

The proxy acts as a transparent security layer between your application and LLM providers. Requests are scanned in real time with minimal added latency, and malicious prompts are blocked before they can cause harm.

How it Works

Configure your LLM client to route requests through the LockLLM proxy endpoint. The proxy intercepts requests, scans the prompt for security threats, and forwards safe requests to the original provider. Unsafe requests are blocked with detailed threat information.

The proxy preserves the original API interface, so your existing code continues to work without modification. All standard features like streaming, function calling, and error handling work exactly as expected.
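
For example, pointing an existing OpenAI client at the proxy is a configuration-only change; the calls you already make stay the same. This sketch uses the Node OpenAI SDK and assumes a standard OPENAI_API_KEY environment variable for the direct case, with the proxy values taken from the Quick Start below:

import OpenAI from 'openai'

// Before: requests go directly to OpenAI
const direct = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY
})

// After: the same client, pointed at the LockLLM proxy
const proxied = new OpenAI({
  apiKey: process.env.LOCKLLM_API_KEY,  // LockLLM key replaces the provider key in code
  baseURL: 'https://api.lockllm.com/v1/proxy/openai'
})

// Existing chat.completions calls work unchanged against either client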

Supported Providers

The proxy supports 20+ LLM providers; a sketch of routing to a provider other than OpenAI follows the list:

  • OpenAI (GPT-5.2, GPT-5)
  • Anthropic (Claude Opus 4.5, Sonnet 4.5)
  • Google Gemini (Gemini 3 Pro, Gemini 3 Flash)
  • Mistral AI (Mistral 3, Mistral Large 3)
  • DeepSeek (V3.2)
  • xAI Grok (Grok 4)
  • Cohere, Azure OpenAI, OpenRouter, Perplexity, Groq, Together AI, Fireworks AI, Anyscale, Hugging Face, AWS Bedrock, Google Vertex AI
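
Each provider is reached through its own proxy path. As a sketch, routing the Anthropic Node SDK through the proxy might look like the following; the /v1/proxy/anthropic path is an assumption patterned on the OpenAI path in the Quick Start, and the model identifier is illustrative:

import Anthropic from '@anthropic-ai/sdk'

const anthropic = new Anthropic({
  apiKey: process.env.LOCKLLM_API_KEY,
  baseURL: 'https://api.lockllm.com/v1/proxy/anthropic'  // assumed path
})

const message = await anthropic.messages.create({
  model: 'claude-opus-4-5',  // illustrative model identifier
  max_tokens: 1024,
  messages: [{ role: 'user', content: userInput }]
})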

Quick Start

Update your client configuration to route through the proxy:

import OpenAI from 'openai'

const client = new OpenAI({
  apiKey: process.env.LOCKLLM_API_KEY,  // Your LockLLM API key
  baseURL: 'https://api.lockllm.com/v1/proxy/openai'
})

// All requests are automatically scanned
const response = await client.chat.completions.create({
  model: 'gpt-5.2',
  messages: [{ role: 'user', content: userInput }]
})
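
When a prompt is blocked, the request never reaches the provider and the call fails instead of returning a completion. As a minimal sketch, assuming the proxy surfaces the block as a standard non-2xx API error (the exact status code and threat payload are not documented here), it can be handled like any other API error:

try {
  const response = await client.chat.completions.create({
    model: 'gpt-5.2',
    messages: [{ role: 'user', content: userInput }]
  })
} catch (err) {
  if (err instanceof OpenAI.APIError) {
    // Assumed: a blocked prompt surfaces as an API error whose body
    // carries the threat details mentioned above
    console.error('Request rejected:', err.status, err.message)
  } else {
    throw err
  }
}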

Features

  • Automatic Scanning: All requests scanned without additional code
  • Multi-Provider Support: Works with 20+ LLM providers
  • Minimal Latency: 150-250ms scanning overhead
  • Configurable Blocking: Choose to block, flag, or allow risky prompts
  • Streaming Support: Works with streaming responses (see the example after this list)
  • Transparent Integration: No changes to application logic required
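
Streaming works through the proxy exactly as it does against the provider directly. This sketch reuses the client from the Quick Start with the standard OpenAI streaming API:

// Streamed responses pass through the proxy after the prompt is scanned
const stream = await client.chat.completions.create({
  model: 'gpt-5.2',
  messages: [{ role: 'user', content: userInput }],
  stream: true
})

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? '')
}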

Authentication

Add your provider API keys (OpenAI, Anthropic, etc.) to the LockLLM dashboard, then pass your LockLLM API key in your client configuration. The proxy automatically uses your stored provider keys to forward requests.

Your provider API keys are encrypted and securely stored. You only use your LockLLM API key in your code.

Getting Started

Generate API keys in the dashboard, update your client configuration, and start scanning. Visit the documentation for complete setup guides covering all supported providers.