Proxy Mode (BYOK)

Use our proxy to automatically scan all LLM requests with zero code changes.

What is Proxy Mode?

Proxy Mode is LockLLM's Bring Your Own Key (BYOK) solution that automatically scans all your LLM requests for prompt injection without changing your application code. Simply add your provider API keys to the LockLLM dashboard and change your base URL; that's it!

Key benefits:

  • Zero code changes required
  • Works with official SDKs (OpenAI, Anthropic, etc.)
  • Automatic scanning of all requests
  • Supports 17 LLM providers, plus custom endpoints
  • Your API keys are securely stored
  • Completely free with unlimited usage

How It Works

  1. Add your provider API key (OpenAI, Anthropic, etc.) to the LockLLM dashboard
  2. Change your base URL to https://api.lockllm.com/v1/proxy/{provider}
  3. All requests are automatically scanned before being forwarded to the provider
  4. Malicious requests are blocked, safe requests are forwarded

Your App → LockLLM Proxy (scan) → Provider (OpenAI/Anthropic/etc)
            ✓ Safe: Forward
            ✗ Malicious: Block
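
In practice the only change in your application is the base URL; the request itself stays the same. A minimal before/after sketch with the OpenAI Node SDK (the model name and prompt are placeholders):

const OpenAI = require('openai')

// Before: calling OpenAI directly with your provider key
// const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY })

// After: routing through LockLLM with your LockLLM key (llm_...)
const openai = new OpenAI({
  apiKey: process.env.LOCKLLM_API_KEY,
  baseURL: 'https://api.lockllm.com/v1/proxy/openai'
})

// The call site does not change; the proxy scans the prompt before forwarding it
const response = await openai.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Hello!' }]
})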

Supported Providers

LockLLM proxy mode supports 17 providers plus custom providers with their own URLs:

Provider            Proxy URL
OpenAI              https://api.lockllm.com/v1/proxy/openai
Anthropic           https://api.lockllm.com/v1/proxy/anthropic
Google Gemini       https://api.lockllm.com/v1/proxy/gemini
Cohere              https://api.lockllm.com/v1/proxy/cohere
OpenRouter          https://api.lockllm.com/v1/proxy/openrouter
Perplexity          https://api.lockllm.com/v1/proxy/perplexity
Mistral AI          https://api.lockllm.com/v1/proxy/mistral
Groq                https://api.lockllm.com/v1/proxy/groq
DeepSeek            https://api.lockllm.com/v1/proxy/deepseek
Together AI         https://api.lockllm.com/v1/proxy/together
xAI (Grok)          https://api.lockllm.com/v1/proxy/xai
Fireworks AI        https://api.lockllm.com/v1/proxy/fireworks
Anyscale            https://api.lockllm.com/v1/proxy/anyscale
Hugging Face        https://api.lockllm.com/v1/proxy/huggingface
Azure OpenAI        https://api.lockllm.com/v1/proxy/azure
AWS Bedrock         https://api.lockllm.com/v1/proxy/bedrock
Google Vertex AI    https://api.lockllm.com/v1/proxy/vertex-ai

Custom Endpoints

All providers support custom endpoint URLs for self-hosted models, alternative endpoints, or proxy/gateway services. When adding your API key in the dashboard, you can optionally specify a custom endpoint URL to override the default.

Quick Start

Step 1: Add Your Provider API Key to Dashboard

  1. Sign in to your LockLLM dashboard
  2. Navigate to Proxy Settings (or API Keys)
  3. Click Add API Key
  4. Select your provider (e.g., OpenAI, Anthropic, Azure)
  5. Enter your provider API key (from OpenAI, Anthropic, etc.)
  6. Give it a nickname (optional, e.g., "Production Key")
  7. Click Add API Key

Step 2: Get Your LockLLM API Key

  1. In the LockLLM dashboard, go to API Keys section
  2. Copy your LockLLM API key (starts with llm_)
  3. You'll use this to authenticate proxy requests (not your provider key!)

Step 3: Update Your Code

Change your SDK configuration to use LockLLM's proxy. Important: Pass your LockLLM API key (not your provider key) for authentication.

OpenAI SDK (JavaScript/TypeScript)

const OpenAI = require('openai')

const openai = new OpenAI({
  apiKey: process.env.LOCKLLM_API_KEY,  // Your LockLLM API key (llm_...)
  baseURL: 'https://api.lockllm.com/v1/proxy/openai'
})

// All requests automatically scanned!
const response = await openai.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: userPrompt }]
})

Anthropic SDK (JavaScript/TypeScript)

const Anthropic = require('@anthropic-ai/sdk')

const anthropic = new Anthropic({
  apiKey: process.env.LOCKLLM_API_KEY,  // Your LockLLM API key (llm_...)
  baseURL: 'https://api.lockllm.com/v1/proxy/anthropic'
})

// Automatically scanned!
const response = await anthropic.messages.create({
  model: 'claude-3-opus-20240229',
  max_tokens: 1024,
  messages: [{ role: 'user', content: userPrompt }]
})

Python OpenAI

import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ.get('LOCKLLM_API_KEY'),  # Your LockLLM API key (llm_...)
    base_url='https://api.lockllm.com/v1/proxy/openai'
)

# Automatically scanned!
response = client.chat.completions.create(
    model='gpt-4',
    messages=[{'role': 'user', 'content': user_prompt}]
)

Python Anthropic

import os
from anthropic import Anthropic

client = Anthropic(
    api_key=os.environ.get('LOCKLLM_API_KEY'),  # Your LockLLM API key (llm_...)
    base_url='https://api.lockllm.com/v1/proxy/anthropic'
)

# Automatically scanned!
response = client.messages.create(
    model='claude-3-opus-20240229',
    max_tokens=1024,
    messages=[{'role': 'user', 'content': user_prompt}]
)

How Authentication Works

Important: Understanding the two types of API keys:

  1. Provider API Key (OpenAI, Anthropic, etc.):

    • You add this to the LockLLM dashboard once
    • Stored securely and encrypted
    • Never put this in your code when using the proxy
  2. LockLLM API Key (starts with llm_):

    • You pass this in your SDK configuration (apiKey parameter)
    • Authenticates your requests to the LockLLM proxy
    • This is what goes in your code
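
Because the only key that belongs in your code is the LockLLM key, a small startup check can catch a mixed-up configuration early. This is an illustrative sketch (the helper name is made up, not part of any SDK); it relies only on the documented llm_ prefix:

const OpenAI = require('openai')

// Illustrative guard: provider keys stay in the dashboard, so the only key
// in code should be your LockLLM key, which always starts with "llm_".
function requireLockLLMKey() {
  const key = process.env.LOCKLLM_API_KEY
  if (!key || !key.startsWith('llm_')) {
    throw new Error('LOCKLLM_API_KEY is missing or is not a LockLLM key (expected llm_...)')
  }
  return key
}

const openai = new OpenAI({
  apiKey: requireLockLLMKey(),
  baseURL: 'https://api.lockllm.com/v1/proxy/openai'
})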

Provider-Specific Configuration

Azure OpenAI

Azure requires additional configuration:

Dashboard Setup:

  1. Select Azure OpenAI as provider
  2. Enter your Azure OpenAI API key
  3. Enter your Endpoint URL (e.g., https://your-resource.openai.azure.com)
  4. Enter your Deployment Name (e.g., gpt-4)
  5. Enter API Version (e.g., 2024-10-21) - optional, defaults to latest

Code:

const OpenAI = require('openai')

const client = new OpenAI({
  apiKey: process.env.LOCKLLM_API_KEY,  // Your LockLLM API key (NOT Azure key)
  baseURL: 'https://api.lockllm.com/v1/proxy/azure'
})

// Use Azure OpenAI models
const response = await client.chat.completions.create({
  model: 'gpt-4',  // Uses your configured deployment
  messages: [{ role: 'user', content: userPrompt }]
})

Azure API Format Support:

LockLLM supports both Azure OpenAI API formats:

  1. Legacy format (deployment-based): /openai/deployments/{deployment}/chat/completions?api-version=2024-10-21
  2. v1 API format (preview): /openai/v1/chat/completions?api-version=2024-10-21 with deployment in header

The proxy automatically handles both formats. When you configure your deployment name in the dashboard, it's used for all requests.
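
If you call the REST API directly instead of using the SDK, the legacy path can be sent to the proxy as well. The sketch below is an assumption by analogy with the Bedrock and Vertex AI examples further down (the provider-native path appended to the proxy base URL); gpt-4 stands in for whichever deployment you configured in the dashboard:

// Hedged sketch: legacy deployment-based Azure path routed through the proxy
const response = await fetch(
  'https://api.lockllm.com/v1/proxy/azure/openai/deployments/gpt-4/chat/completions?api-version=2024-10-21',
  {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': 'Bearer YOUR_LOCKLLM_API_KEY'  // Your LockLLM API key
    },
    body: JSON.stringify({
      messages: [{ role: 'user', content: userPrompt }]
    })
  }
)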

AWS Bedrock

Bedrock requires AWS credentials:

Dashboard Setup:

  1. Select AWS Bedrock as provider
  2. Enter your AWS credentials JSON:
{
  "accessKeyId": "your-access-key",
  "secretAccessKey": "your-secret-key",
  "region": "us-east-1"
}

Code:

// Use with Bedrock-compatible SDK or direct fetch
const response = await fetch('https://api.lockllm.com/v1/proxy/bedrock/model/anthropic.claude-3-sonnet-20240229-v1:0/invoke', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': 'Bearer YOUR_LOCKLLM_API_KEY'  // Your LockLLM API key
  },
  body: JSON.stringify({
    anthropic_version: 'bedrock-2023-05-31',
    max_tokens: 1024,
    messages: [{ role: 'user', content: userPrompt }]
  })
})

Google Vertex AI

Vertex AI requires service account credentials:

Dashboard Setup:

  1. Select Vertex AI as provider
  2. Enter your Google Cloud project ID
  3. Enter your service account JSON key

Code:

// Use with Vertex AI-compatible SDK or direct fetch
const response = await fetch('https://api.lockllm.com/v1/proxy/vertex-ai/v1/projects/YOUR_PROJECT/locations/us-central1/publishers/anthropic/models/claude-3-opus@20240229:streamRawPredict', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': 'Bearer YOUR_LOCKLLM_API_KEY'  // Your LockLLM API key
  },
  body: JSON.stringify({
    anthropic_version: 'vertex-2023-10-16',
    max_tokens: 1024,
    messages: [{ role: 'user', content: userPrompt }]
  })
})

Multiple Keys for Same Provider

You can add multiple API keys for the same provider with different nicknames:

Dashboard Setup:

  1. Add multiple keys for the same provider (e.g., "OpenAI - Production" and "OpenAI - Development")
  2. Give each a unique nickname
  3. Only one can be enabled at a time for each provider

Code:

// The proxy uses the enabled key for the provider
const openai = new OpenAI({
  apiKey: process.env.LOCKLLM_API_KEY,  // Your LockLLM API key
  baseURL: 'https://api.lockllm.com/v1/proxy/openai'
})

// Whichever OpenAI key is enabled in your dashboard will be used

Note: The proxy automatically selects the enabled key for each provider. To switch between keys, enable/disable them in the dashboard.

Managing Provider Keys

View Your Keys

Visit the Proxy Settings page in your dashboard to see all configured provider keys:

  • Provider name
  • Nickname
  • Last used timestamp
  • Enable/disable toggle
  • Delete option

Enable/Disable Keys

Toggle keys on/off without deleting them:

  1. Go to Proxy Settings
  2. Find your key
  3. Click the toggle switch
  4. Disabled keys won't be used for proxying

Delete Keys

Permanently remove provider keys:

  1. Go to Proxy Settings
  2. Find your key
  3. Click Delete
  4. Confirm deletion

Security

Your Provider API Keys Are Secure

  • Provider API keys (OpenAI, Anthropic, etc.) are encrypted at rest using industry-standard encryption
  • Keys are stored securely in our database and never exposed in API responses
  • Only you can view or manage your keys through the dashboard
  • Keys are never logged or included in error messages

Request Authentication

Standard authentication (recommended):

const openai = new OpenAI({
  apiKey: process.env.LOCKLLM_API_KEY,  // Your LockLLM API key (llm_...)
  baseURL: 'https://api.lockllm.com/v1/proxy/openai'
})

Alternative: Using Authorization header directly:

If you need to pass the LockLLM API key separately (e.g., for custom implementations):

const openai = new OpenAI({
  apiKey: 'dummy-key',  // SDK requires a value, but proxy ignores this
  baseURL: 'https://api.lockllm.com/v1/proxy/openai',
  defaultHeaders: {
    'Authorization': 'Bearer YOUR_LOCKLLM_API_KEY'
  }
})

The proxy checks both the apiKey parameter and the Authorization header for your LockLLM API key.

Blocked Requests

When a malicious prompt is detected, the proxy returns a 400 Bad Request error with the following structure:

{
  "error": {
    "message": "Malicious prompt detected by LockLLM",
    "type": "lockllm_security_error",
    "code": "prompt_injection_detected",
    "scan_result": {
      "safe": false,
      "label": 1,
      "confidence": 0.95,
      "injection": 0.95,
      "sensitivity": "medium"
    },
    "request_id": "req_abc123"
  }
}

Response Headers:

  • X-Request-Id: Unique request identifier
  • X-LockLLM-Blocked: "true" (indicates request was blocked)

Handle blocked requests in your application:

try {
  const response = await openai.chat.completions.create({
    model: 'gpt-4',
    messages: [{ role: 'user', content: userPrompt }]
  })
} catch (error) {
  // Check whether the error is a LockLLM security block.
  // Depending on your SDK version, the parsed error body is exposed as
  // error.error (openai v4 APIError) or error.response.data.error (axios-style clients).
  const status = error.status ?? error.response?.status
  const apiError = error.error ?? error.response?.data?.error
  if (status === 400 && apiError?.code === 'prompt_injection_detected') {
    console.log('Malicious prompt blocked by LockLLM')
    console.log('Injection confidence:', apiError.scan_result?.injection)
    console.log('Request ID:', apiError.request_id)

    // Handle security incident (log, alert, etc.)
    // You can find this request in your LockLLM dashboard logs
  } else {
    // Handle other errors
    throw error
  }
}

Successful requests (safe prompts) return:

  • Normal provider response (OpenAI, Anthropic, etc.)
  • Additional headers:
    • X-Request-Id: Unique request identifier
    • X-LockLLM-Scanned: "true" (indicates request was scanned)
    • X-LockLLM-Safe: "true" (indicates prompt was safe)
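
To inspect these headers from the OpenAI Node SDK, you can request the raw response alongside the parsed body. This sketch assumes the withResponse() helper available in recent openai v4 releases; with plain fetch you would read response.headers directly:

const OpenAI = require('openai')

const openai = new OpenAI({
  apiKey: process.env.LOCKLLM_API_KEY,  // Your LockLLM API key (llm_...)
  baseURL: 'https://api.lockllm.com/v1/proxy/openai'
})

// Read the LockLLM scan headers alongside the normal completion
const { data: completion, response } = await openai.chat.completions
  .create({
    model: 'gpt-4',
    messages: [{ role: 'user', content: userPrompt }]
  })
  .withResponse()

console.log('Request ID:', response.headers.get('x-request-id'))
console.log('Scanned:', response.headers.get('x-lockllm-scanned'))
console.log('Safe:', response.headers.get('x-lockllm-safe'))
console.log(completion.choices[0].message.content)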

Detection Settings

Scan Results

Every request is scanned and returns a confidence score indicating whether the prompt is safe or potentially malicious.

Scan result format:

{
  "safe": true,
  "label": 0,         // 0 = safe, 1 = malicious
  "confidence": 0.92, // Confidence in the prediction (0-1)
  "injection": 0.08,  // Injection score (0-1, higher = more likely malicious)
  "sensitivity": "medium"
}

The proxy automatically blocks high-confidence malicious prompts while allowing safe requests to proceed.
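
For your own logging or alerting it can help to turn a scan_result object into a one-line summary. A minimal illustrative helper (the function name is made up; blocking itself always happens in the proxy):

// Illustrative: summarize a LockLLM scan_result for your logs
function describeScanResult(scan) {
  const verdict = scan.safe ? 'safe' : 'malicious'
  return `${verdict} (injection=${scan.injection.toFixed(2)}, ` +
    `confidence=${scan.confidence.toFixed(2)}, sensitivity=${scan.sensitivity})`
}

// Example with the scan result shape shown above
console.log(describeScanResult({
  safe: true,
  label: 0,
  confidence: 0.92,
  injection: 0.08,
  sensitivity: 'medium'
}))
// -> safe (injection=0.08, confidence=0.92, sensitivity=medium)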

Monitoring

View Proxy Logs

All proxy requests are logged in your dashboard:

  1. Go to Activity Logs
  2. Filter by Proxy Requests
  3. See scan results, providers, models used
  4. View blocked requests

Webhook Notifications

Get notified when malicious prompts are detected:

  1. Go to Webhooks
  2. Add a webhook URL
  3. Choose format (raw, Slack, Discord)
  4. Receive alerts for blocked requests

Learn more about Webhooks

Performance

Does Proxy Mode Add Latency?

Minimal latency is added:

  • Scanning: ~100-200ms
  • Network overhead: ~50ms
  • Total: ~150-250ms additional latency

For most applications, this is negligible compared to LLM response times (1-10 seconds).

Caching

Proxy mode automatically caches scan results for identical prompts, reducing latency on repeated requests.

Troubleshooting

"Unauthorized" Error (401)

Problem: Your LockLLM API key is missing or invalid.

Solution:

  1. Verify you're passing your LockLLM API key (not provider key) in the apiKey parameter
  2. Check that your LockLLM API key starts with llm_
  3. Ensure you haven't revoked or deleted the API key in your dashboard
  4. If you're not using an SDK, confirm the key is sent as a Bearer token in the Authorization header

"No [provider] API key configured" Error (400)

Problem: You haven't added the provider's API key to the dashboard, or it's disabled.

Error message example:

{
  "error": {
    "message": "No openai API key configured. Please add your API key at the dashboard.",
    "type": "lockllm_config_error",
    "code": "no_upstream_key"
  }
}

Solution:

  1. Go to your LockLLM dashboard → Proxy Settings
  2. Click "Add API Key" and select your provider (e.g., OpenAI)
  3. Enter your provider API key and save
  4. Ensure the key is enabled (toggle should be on)
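
If you want to surface this configuration problem clearly (for example, in a deployment health check), you can branch on the error code. A hedged sketch, using the same defensive error access as the blocked-request handler above:

const OpenAI = require('openai')

const openai = new OpenAI({
  apiKey: process.env.LOCKLLM_API_KEY,  // Your LockLLM API key (llm_...)
  baseURL: 'https://api.lockllm.com/v1/proxy/openai'
})

try {
  await openai.chat.completions.create({
    model: 'gpt-4',
    messages: [{ role: 'user', content: 'ping' }]
  })
} catch (error) {
  const apiError = error.error ?? error.response?.data?.error
  if (apiError?.code === 'no_upstream_key') {
    // The provider key is missing or disabled in the LockLLM dashboard
    console.error('Add or enable your provider API key in LockLLM Proxy Settings')
  } else {
    throw error
  }
}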

"Could not extract prompt from request" Error (400)

Problem: The request body format is not recognized.

Solution:

  1. Ensure you're using a supported API format for your provider
  2. Check that your request has the correct structure (e.g., messages array for OpenAI/Anthropic)
  3. Verify the SDK version is compatible

Requests Not Being Scanned

Problem: Requests bypass the proxy or fail silently.

Solution:

  1. Verify you're using the correct base URL: https://api.lockllm.com/v1/proxy/{provider}
  2. Check that you added the provider key to dashboard
  3. Ensure the provider key is enabled (not disabled)
  4. Confirm you're passing your LockLLM API key for authentication

Azure-Specific Errors

Problem: Azure requests fail with "azure_config_error".

Solution:

  1. Verify endpoint URL format: https://your-resource.openai.azure.com (no trailing slash)
  2. Check deployment name matches your Azure deployment exactly
  3. Ensure API version is compatible (default: 2024-10-21)
  4. Confirm you added all required fields in the dashboard:
    • Azure API key
    • Endpoint URL
    • Deployment name

Rate Limit Exceeded Error

Problem: Too many requests in a short time.

Error message:

{
  "error": {
    "message": "Rate limit exceeded (1000/minute).",
    "type": "rate_limit_error"
  }
}

Solution:

  1. The proxy has a rate limit (default: 1000 requests/minute)
  2. Implement exponential backoff in your application (see the sketch after this list)
  3. Contact [email protected] if you need higher limits
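
A minimal retry-with-backoff sketch, assuming the proxy signals rate limiting with HTTP 429 or the rate_limit_error type shown above (the delays and retry count are illustrative):

const OpenAI = require('openai')

const openai = new OpenAI({
  apiKey: process.env.LOCKLLM_API_KEY,  // Your LockLLM API key (llm_...)
  baseURL: 'https://api.lockllm.com/v1/proxy/openai'
})

// Retry a proxied request with exponential backoff on rate-limit errors
async function withBackoff(fn, maxRetries = 3) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn()
    } catch (error) {
      const status = error.status ?? error.response?.status
      const rateLimited = status === 429 || error.error?.type === 'rate_limit_error'
      if (!rateLimited || attempt >= maxRetries) throw error
      const delayMs = 500 * 2 ** attempt  // 500ms, 1s, 2s, ...
      await new Promise(resolve => setTimeout(resolve, delayMs))
    }
  }
}

const response = await withBackoff(() =>
  openai.chat.completions.create({
    model: 'gpt-4',
    messages: [{ role: 'user', content: userPrompt }]
  })
)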

FAQ

What is BYOK (Bring Your Own Key)?

BYOK means you use your own API keys from OpenAI, Anthropic, etc. You add them to the LockLLM dashboard, and LockLLM proxies requests using your keys. You maintain full control over your keys, and billing stays with your provider account. LockLLM never charges you for the LLM usage itself.

How does authentication work?

There are two types of API keys:

  1. Provider API Key (OpenAI, Anthropic, etc.): You add this to the LockLLM dashboard once. It's stored encrypted and never put in your code.

  2. LockLLM API Key (starts with llm_): You pass this in your SDK configuration. It authenticates your requests and tells the proxy which provider keys to use.

In your code, you only use your LockLLM API key. The proxy handles retrieving and using your provider keys securely.

Are my provider API keys secure?

Yes! Your API keys are:

  • Encrypted at rest using industry-grade encryption
  • Stored securely in our database
  • Never exposed in API responses or logs
  • Never transmitted to your application
  • Accessible only to you through the dashboard

Which providers are supported?

17 providers with full support:

  • OpenAI, Anthropic, Google Gemini, Cohere
  • Azure OpenAI, AWS Bedrock, Google Vertex AI
  • OpenRouter, Perplexity, Mistral AI
  • Groq, DeepSeek, Together AI
  • xAI (Grok), Fireworks AI, Anyscale
  • Hugging Face

All providers support custom endpoint URLs for self-hosted or alternative endpoints.

How do I configure Azure OpenAI?

Azure OpenAI requires additional configuration:

  1. In the dashboard, select Azure OpenAI as provider

  2. Enter:

    • API key: Your Azure OpenAI key (or Microsoft Entra ID token)
    • Endpoint URL: https://your-resource.openai.azure.com
    • Deployment name: Your Azure deployment name (e.g., gpt-4)
    • API version (optional): Defaults to 2024-10-21
  3. In your code:

const openai = new OpenAI({
  apiKey: process.env.LOCKLLM_API_KEY,
  baseURL: 'https://api.lockllm.com/v1/proxy/azure'
})

LockLLM supports both Azure API formats (legacy deployment-based and new v1 API).

Does this add latency to my requests?

Yes, approximately 150-250ms for scanning:

  • Scanning: ~100-200ms
  • Network overhead: ~50ms

This is minimal compared to typical LLM response times (1-10+ seconds) and provides critical security protection. The proxy caches scan results for identical prompts to reduce latency on repeated requests.

Will this work with official SDKs?

Yes! Proxy mode works seamlessly with official SDKs:

  • OpenAI SDK (Node.js, Python): Just change baseURL parameter
  • Anthropic SDK (Node.js, Python): Just change baseURL parameter
  • Other provider SDKs: Works with any SDK that supports custom base URLs

The proxy is fully compatible with all SDK features including streaming, function calling, and multi-modal inputs.
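
For example, streaming works the same way through the proxy as it does against the provider directly; the prompt is scanned once before the request is forwarded. A short sketch with the OpenAI Node SDK:

const OpenAI = require('openai')

const openai = new OpenAI({
  apiKey: process.env.LOCKLLM_API_KEY,  // Your LockLLM API key (llm_...)
  baseURL: 'https://api.lockllm.com/v1/proxy/openai'
})

// Stream a scanned request through the proxy
const stream = await openai.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: userPrompt }],
  stream: true
})

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? '')
}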

Can I use multiple keys for the same provider?

Yes! You can add multiple keys for the same provider with different nicknames (e.g., "Production" and "Development"). However, only one key per provider can be enabled at a time.

To switch between keys, enable/disable them in the dashboard. The proxy automatically uses the enabled key for each provider.

What happens when a malicious prompt is detected?

The proxy:

  1. Blocks the request immediately (doesn't forward to provider)

  2. Returns a 400 Bad Request error with:

    • Error code: prompt_injection_detected
    • Scan result with injection confidence score
    • Unique request ID for tracking
  3. Logs the event in your dashboard (without storing prompt content)

  4. Can trigger webhook notifications if configured

You can catch this error in your code and handle it appropriately (e.g., alert security team, log incident, show user error message).

Do you log or store my prompts?

No. We do not store prompt content. We only log:

  • Metadata (timestamp, model, provider, request ID)
  • Scan results (safe/malicious, confidence scores)
  • Prompt length (character count)

Prompt content is scanned in memory and immediately discarded. This ensures privacy while providing security monitoring.

How does the detection work?

The proxy scans every request and assigns confidence scores:

  • Injection score: 0 (definitely safe) to 1 (definitely malicious)
  • High-confidence malicious prompts are automatically blocked
  • Safe prompts are forwarded to your provider

The detection system is tuned to balance security and minimize false positives. Contact [email protected] if you need custom detection settings for your use case.

Can I test the proxy without adding my real API keys?

Yes! You can test with:

  1. OpenAI-compatible test endpoints: Use a local LLM server or test API
  2. Custom endpoints: Point to your staging/test environments
  3. Provider test keys: Use provider-issued test/sandbox keys

Just add the test configuration in the dashboard and point your SDK to the proxy.
