# Proxy Mode (BYOK)
Use our proxy to automatically scan all LLM requests with zero code changes.
## What is Proxy Mode?
Proxy Mode is LockLLM's Bring Your Own Key (BYOK) solution that automatically scans all your LLM requests for prompt injection without changing your code. Add your provider API keys to the LockLLM dashboard, change your base URL, and that's it.
Key benefits:
- Zero code changes required
- Works with official SDKs (OpenAI, Anthropic, etc.)
- Automatic scanning of all requests
- Supports 17 LLM providers plus custom endpoints
- Your API keys are securely stored
- Completely free with unlimited usage
## How It Works
- Add your provider API key (OpenAI, Anthropic, etc.) to the LockLLM dashboard
- Change your base URL to `https://api.lockllm.com/v1/proxy/{provider}`
- All requests are automatically scanned before being forwarded to the provider
- Malicious requests are blocked; safe requests are forwarded

```
Your App → LockLLM Proxy (scan) → Provider (OpenAI/Anthropic/etc)
  ✓ Safe: Forward
  ✗ Malicious: Block
```
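For example, a raw `fetch` call through the proxy looks just like a normal provider call; only the host changes. A minimal sketch (assumes you have already added an OpenAI key in the dashboard and set `LOCKLLM_API_KEY`):

```javascript
// Minimal sketch: call OpenAI's chat completions endpoint through the LockLLM proxy.
// The proxy scans the prompt first, then forwards the request using your stored OpenAI key.
const response = await fetch('https://api.lockllm.com/v1/proxy/openai/chat/completions', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': `Bearer ${process.env.LOCKLLM_API_KEY}` // LockLLM key (llm_...), not your OpenAI key
  },
  body: JSON.stringify({
    model: 'gpt-4',
    messages: [{ role: 'user', content: 'Hello!' }]
  })
})

const data = await response.json()
console.log(data.choices[0].message.content)
```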
## Supported Providers
LockLLM proxy mode supports 17 providers plus custom providers with their own URLs:
| Provider | Proxy URL |
|---|---|
| OpenAI | https://api.lockllm.com/v1/proxy/openai |
| Anthropic | https://api.lockllm.com/v1/proxy/anthropic |
| Google Gemini | https://api.lockllm.com/v1/proxy/gemini |
| Cohere | https://api.lockllm.com/v1/proxy/cohere |
| OpenRouter | https://api.lockllm.com/v1/proxy/openrouter |
| Perplexity | https://api.lockllm.com/v1/proxy/perplexity |
| Mistral AI | https://api.lockllm.com/v1/proxy/mistral |
| Groq | https://api.lockllm.com/v1/proxy/groq |
| DeepSeek | https://api.lockllm.com/v1/proxy/deepseek |
| Together AI | https://api.lockllm.com/v1/proxy/together |
| xAI (Grok) | https://api.lockllm.com/v1/proxy/xai |
| Fireworks AI | https://api.lockllm.com/v1/proxy/fireworks |
| Anyscale | https://api.lockllm.com/v1/proxy/anyscale |
| Hugging Face | https://api.lockllm.com/v1/proxy/huggingface |
| Azure OpenAI | https://api.lockllm.com/v1/proxy/azure |
| AWS Bedrock | https://api.lockllm.com/v1/proxy/bedrock |
| Google Vertex AI | https://api.lockllm.com/v1/proxy/vertex-ai |
## Custom Endpoints
All providers support custom endpoint URLs for self-hosted models, alternative endpoints, or proxy/gateway services. When adding your API key in the dashboard, you can optionally specify a custom endpoint URL to override the default.
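Your client code does not change when a custom endpoint is configured; the override lives entirely in the dashboard. A minimal sketch (the self-hosted server mentioned in the comment is an assumed example):

```javascript
// The client still points at the LockLLM proxy. If the OpenAI entry in your
// dashboard has a custom endpoint (e.g., a self-hosted OpenAI-compatible server),
// the proxy forwards scanned requests there instead of the provider's default URL.
const OpenAI = require('openai')

const openai = new OpenAI({
  apiKey: process.env.LOCKLLM_API_KEY, // Your LockLLM API key (llm_...)
  baseURL: 'https://api.lockllm.com/v1/proxy/openai'
})
```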
## Quick Start
### Step 1: Add Your Provider API Key to Dashboard
- Sign in to your LockLLM dashboard
- Navigate to Proxy Settings (or API Keys)
- Click Add API Key
- Select your provider (e.g., OpenAI, Anthropic, Azure)
- Enter your provider API key (from OpenAI, Anthropic, etc.)
- Give it a nickname (optional, e.g., "Production Key")
- Click Add API Key
### Step 2: Get Your LockLLM API Key
- In the LockLLM dashboard, go to the API Keys section
- Copy your LockLLM API key (starts with `llm_`)
- You'll use this key to authenticate proxy requests (not your provider key!)
### Step 3: Update Your Code
Change your SDK configuration to use LockLLM's proxy. Important: Pass your LockLLM API key (not your provider key) for authentication.
#### OpenAI SDK (JavaScript/TypeScript)

```javascript
const OpenAI = require('openai')

const openai = new OpenAI({
  apiKey: process.env.LOCKLLM_API_KEY, // Your LockLLM API key (llm_...)
  baseURL: 'https://api.lockllm.com/v1/proxy/openai'
})

// All requests automatically scanned!
const response = await openai.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: userPrompt }]
})
```
#### Anthropic SDK (JavaScript/TypeScript)

```javascript
const Anthropic = require('@anthropic-ai/sdk')

const anthropic = new Anthropic({
  apiKey: process.env.LOCKLLM_API_KEY, // Your LockLLM API key (llm_...)
  baseURL: 'https://api.lockllm.com/v1/proxy/anthropic'
})

// Automatically scanned!
const response = await anthropic.messages.create({
  model: 'claude-3-opus-20240229',
  max_tokens: 1024,
  messages: [{ role: 'user', content: userPrompt }]
})
```
#### Python OpenAI

```python
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ.get('LOCKLLM_API_KEY'),  # Your LockLLM API key (llm_...)
    base_url='https://api.lockllm.com/v1/proxy/openai'
)

# Automatically scanned!
response = client.chat.completions.create(
    model='gpt-4',
    messages=[{'role': 'user', 'content': user_prompt}]
)
```
#### Python Anthropic

```python
import os
from anthropic import Anthropic

client = Anthropic(
    api_key=os.environ.get('LOCKLLM_API_KEY'),  # Your LockLLM API key (llm_...)
    base_url='https://api.lockllm.com/v1/proxy/anthropic'
)

# Automatically scanned!
response = client.messages.create(
    model='claude-3-opus-20240229',
    max_tokens=1024,
    messages=[{'role': 'user', 'content': user_prompt}]
)
```
## How Authentication Works
Important: Understanding the two types of API keys:
- Provider API Key (OpenAI, Anthropic, etc.):
  - You add this to the LockLLM dashboard once
  - Stored securely and encrypted
  - Never put this in your code when using the proxy
- LockLLM API Key (starts with `llm_`):
  - You pass this in your SDK configuration (the `apiKey` parameter)
  - Authenticates your requests to the LockLLM proxy
  - This is what goes in your code
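A quick startup check can catch the most common mix-up: passing a provider key where the LockLLM key belongs. This is an optional sketch, not something the SDK requires:

```javascript
// Optional sanity check: the key in LOCKLLM_API_KEY should be a LockLLM key
// (they start with "llm_"), not a provider key such as an OpenAI "sk-..." key.
const lockllmKey = process.env.LOCKLLM_API_KEY

if (!lockllmKey) {
  throw new Error('LOCKLLM_API_KEY is not set')
}
if (!lockllmKey.startsWith('llm_')) {
  throw new Error('LOCKLLM_API_KEY should be a LockLLM key (llm_...), not a provider key')
}
```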
## Provider-Specific Configuration
### Azure OpenAI
Azure requires additional configuration:
Dashboard Setup:
- Select Azure OpenAI as provider
- Enter your Azure OpenAI API key
- Enter your Endpoint URL (e.g., `https://your-resource.openai.azure.com`)
- Enter your Deployment Name (e.g., `gpt-4`)
- Enter the API Version (optional; defaults to `2024-10-21`)
Code:
```javascript
const OpenAI = require('openai')

const client = new OpenAI({
  apiKey: process.env.LOCKLLM_API_KEY, // Your LockLLM API key (NOT Azure key)
  baseURL: 'https://api.lockllm.com/v1/proxy/azure'
})

// Use Azure OpenAI models
const response = await client.chat.completions.create({
  model: 'gpt-4', // Uses your configured deployment
  messages: [{ role: 'user', content: userPrompt }]
})
```
Azure API Format Support:
LockLLM supports both Azure OpenAI API formats:
- Legacy format (deployment-based): `/openai/deployments/{deployment}/chat/completions?api-version=2024-10-21`
- v1 API format (preview): `/openai/v1/chat/completions?api-version=2024-10-21` with the deployment passed in a header
The proxy automatically handles both formats. When you configure your deployment name in the dashboard, it's used for all requests.
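As an illustration, a raw request using the legacy deployment-based path could look like the sketch below. Treat the exact URL joining as an assumption (the path is appended to the Azure proxy base, mirroring the Bedrock and Vertex examples); when in doubt, prefer the SDK example above.

```javascript
// Sketch: legacy deployment-based Azure path sent through the proxy.
// "gpt-4" must match the deployment name configured in your dashboard.
const url =
  'https://api.lockllm.com/v1/proxy/azure/openai/deployments/gpt-4/chat/completions?api-version=2024-10-21'

const response = await fetch(url, {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': 'Bearer YOUR_LOCKLLM_API_KEY' // Your LockLLM API key, not your Azure key
  },
  body: JSON.stringify({
    messages: [{ role: 'user', content: userPrompt }]
  })
})
```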
### AWS Bedrock
Bedrock requires AWS credentials:
Dashboard Setup:
- Select AWS Bedrock as provider
- Enter your AWS credentials JSON:

```json
{
  "accessKeyId": "your-access-key",
  "secretAccessKey": "your-secret-key",
  "region": "us-east-1"
}
```
Code:
```javascript
// Use with Bedrock-compatible SDK or direct fetch
const response = await fetch('https://api.lockllm.com/v1/proxy/bedrock/model/anthropic.claude-3-sonnet-20240229-v1:0/invoke', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': 'Bearer YOUR_LOCKLLM_API_KEY' // Your LockLLM API key
  },
  body: JSON.stringify({
    anthropic_version: 'bedrock-2023-05-31',
    max_tokens: 1024,
    messages: [{ role: 'user', content: userPrompt }]
  })
})
```
### Google Vertex AI
Vertex AI requires service account credentials:
Dashboard Setup:
- Select Vertex AI as provider
- Enter your Google Cloud project ID
- Enter your service account JSON key
Code:
```javascript
// Use with Vertex AI-compatible SDK or direct fetch
const response = await fetch('https://api.lockllm.com/v1/proxy/vertex-ai/v1/projects/YOUR_PROJECT/locations/us-central1/publishers/anthropic/models/claude-3-opus@20240229:streamRawPredict', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': 'Bearer YOUR_LOCKLLM_API_KEY' // Your LockLLM API key
  },
  body: JSON.stringify({
    anthropic_version: 'vertex-2023-10-16',
    max_tokens: 1024,
    messages: [{ role: 'user', content: userPrompt }]
  })
})
```
## Multiple Keys for Same Provider
You can add multiple API keys for the same provider with different nicknames:
Dashboard Setup:
- Add multiple keys for the same provider (e.g., "OpenAI - Production" and "OpenAI - Development")
- Give each a unique nickname
- Only one can be enabled at a time for each provider
Code:
```javascript
// The proxy uses the enabled key for the provider
const openai = new OpenAI({
  apiKey: process.env.LOCKLLM_API_KEY, // Your LockLLM API key
  baseURL: 'https://api.lockllm.com/v1/proxy/openai'
})
// Whichever OpenAI key is enabled in your dashboard will be used
```
Note: The proxy automatically selects the enabled key for each provider. To switch between keys, enable/disable them in the dashboard.
## Managing Provider Keys
### View Your Keys
Visit the Proxy Settings page in your dashboard to see all configured provider keys:
- Provider name
- Nickname
- Last used timestamp
- Enable/disable toggle
- Delete option
### Enable/Disable Keys
Toggle keys on/off without deleting them:
- Go to Proxy Settings
- Find your key
- Click the toggle switch
- Disabled keys won't be used for proxying
### Delete Keys
Permanently remove provider keys:
- Go to Proxy Settings
- Find your key
- Click Delete
- Confirm deletion
## Security
### Your Provider API Keys Are Secure
- Provider API keys (OpenAI, Anthropic, etc.) are encrypted at rest using industry-standard encryption
- Keys are stored securely in our database and never exposed in API responses
- Only you can view or manage your keys through the dashboard
- Keys are never logged or included in error messages
### Request Authentication
Standard authentication (recommended):
```javascript
const openai = new OpenAI({
  apiKey: process.env.LOCKLLM_API_KEY, // Your LockLLM API key (llm_...)
  baseURL: 'https://api.lockllm.com/v1/proxy/openai'
})
```
Alternative: Using Authorization header directly:
If you need to pass the LockLLM API key separately (e.g., for custom implementations):
```javascript
const openai = new OpenAI({
  apiKey: 'dummy-key', // SDK requires a value, but proxy ignores this
  baseURL: 'https://api.lockllm.com/v1/proxy/openai',
  defaultHeaders: {
    'Authorization': 'Bearer YOUR_LOCKLLM_API_KEY'
  }
})
```
The proxy checks both the `apiKey` parameter and the `Authorization` header for your LockLLM API key.
## Blocked Requests
When a malicious prompt is detected, the proxy returns a 400 Bad Request error with the following structure:
```json
{
  "error": {
    "message": "Malicious prompt detected by LockLLM",
    "type": "lockllm_security_error",
    "code": "prompt_injection_detected",
    "scan_result": {
      "safe": false,
      "label": 1,
      "confidence": 0.95,
      "injection": 0.95,
      "sensitivity": "medium"
    },
    "request_id": "req_abc123"
  }
}
```
Response Headers:
- `X-Request-Id`: Unique request identifier
- `X-LockLLM-Blocked`: `"true"` (indicates the request was blocked)
Handle blocked requests in your application:
```javascript
try {
  const response = await openai.chat.completions.create({
    model: 'gpt-4',
    messages: [{ role: 'user', content: userPrompt }]
  })
} catch (error) {
  // Check if error is from LockLLM security block
  if (error.response?.status === 400 && error.response?.data?.error?.code === 'prompt_injection_detected') {
    console.log('Malicious prompt blocked by LockLLM')
    const scanResult = error.response.data.error.scan_result
    console.log('Injection confidence:', scanResult.injection)
    console.log('Request ID:', error.response.data.error.request_id)
    // Handle security incident (log, alert, etc.)
    // You can find this request in your LockLLM dashboard logs
  } else {
    // Handle other errors
    throw error
  }
}
```
Successful requests (safe prompts) return:
- Normal provider response (OpenAI, Anthropic, etc.)
- Additional headers:
  - `X-Request-Id`: Unique request identifier
  - `X-LockLLM-Scanned`: `"true"` (indicates the request was scanned)
  - `X-LockLLM-Safe`: `"true"` (indicates the prompt was safe)
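If you want to confirm at runtime that a request actually went through the scanner, you can read these headers from a raw response. A minimal sketch using `fetch`, which exposes the headers directly:

```javascript
// Sketch: inspect LockLLM scan headers on a successful proxied request.
const res = await fetch('https://api.lockllm.com/v1/proxy/openai/chat/completions', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': `Bearer ${process.env.LOCKLLM_API_KEY}`
  },
  body: JSON.stringify({
    model: 'gpt-4',
    messages: [{ role: 'user', content: userPrompt }]
  })
})

console.log('Request ID:', res.headers.get('X-Request-Id'))
console.log('Scanned:', res.headers.get('X-LockLLM-Scanned')) // "true"
console.log('Safe:', res.headers.get('X-LockLLM-Safe'))       // "true" for safe prompts
```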
## Detection Settings
### Scan Results
Every request is scanned and returns a confidence score indicating whether the prompt is safe or potentially malicious.
Scan result format:
```jsonc
{
  "safe": true,
  "label": 0,             // 0 = safe, 1 = malicious
  "confidence": 0.92,     // Confidence in the prediction (0-1)
  "injection": 0.08,      // Injection score (0-1, higher = more likely malicious)
  "sensitivity": "medium"
}
```
The proxy automatically blocks high-confidence malicious prompts while allowing safe requests to proceed.
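If you surface scan results in your own logging or analytics, a small helper keeps the interpretation in one place. A sketch using only the fields shown above:

```javascript
// Sketch: turn a scan_result object (fields as documented above) into a readable summary.
function describeScanResult(scan) {
  const verdict = scan.safe ? 'safe' : 'malicious'
  const injectionPct = Math.round(scan.injection * 100) // 0-1 score, higher = more likely malicious
  return `${verdict} (injection score ${injectionPct}%, confidence ${scan.confidence}, sensitivity ${scan.sensitivity})`
}

console.log(describeScanResult({ safe: true, label: 0, confidence: 0.92, injection: 0.08, sensitivity: 'medium' }))
// -> "safe (injection score 8%, confidence 0.92, sensitivity medium)"
```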
## Monitoring
### View Proxy Logs
All proxy requests are logged in your dashboard:
- Go to Activity Logs
- Filter by Proxy Requests
- See scan results, providers, models used
- View blocked requests
### Webhook Notifications
Get notified when malicious prompts are detected:
- Go to Webhooks
- Add a webhook URL
- Choose format (raw, Slack, Discord)
- Receive alerts for blocked requests
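If you point a webhook at your own service, the receiver only needs to accept a POST and acknowledge it. A minimal Express sketch for the raw format; since the payload schema isn't documented here, it just logs whatever arrives:

```javascript
// Minimal webhook receiver sketch (Express). Logs the alert payload and returns 200.
const express = require('express')

const app = express()
app.use(express.json())

app.post('/lockllm-webhook', (req, res) => {
  console.log('LockLLM alert received:', JSON.stringify(req.body))
  // Forward to your alerting or incident tooling here.
  res.sendStatus(200)
})

app.listen(3000)
```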
## Performance
### Does Proxy Mode Add Latency?
Minimal latency is added:
- Scanning: ~100-200ms
- Network overhead: ~50ms
- Total: ~150-250ms additional latency
For most applications, this is negligible compared to LLM response times (1-10 seconds).
### Caching
Proxy mode automatically caches scan results for identical prompts, reducing latency on repeated requests.
## Troubleshooting
### "Unauthorized" Error (401)
Problem: Your LockLLM API key is missing or invalid.
Solution:
- Verify you're passing your LockLLM API key (not your provider key) in the `apiKey` parameter
- Check that your LockLLM API key starts with `llm_`
- Ensure you haven't revoked or deleted the API key in your dashboard
- Verify you're authenticated and the key is valid
Link to section: "No [provider] API key configured" Error (400)"No [provider] API key configured" Error (400)
Problem: You haven't added the provider's API key to the dashboard, or it's disabled.
Error message example:
```json
{
  "error": {
    "message": "No openai API key configured. Please add your API key at the dashboard.",
    "type": "lockllm_config_error",
    "code": "no_upstream_key"
  }
}
```
Solution:
- Go to your LockLLM dashboard → Proxy Settings
- Click "Add API Key" and select your provider (e.g., OpenAI)
- Enter your provider API key and save
- Ensure the key is enabled (toggle should be on)
Link to section: "Could not extract prompt from request" Error (400)"Could not extract prompt from request" Error (400)
Problem: The request body format is not recognized.
Solution:
- Ensure you're using a supported API format for your provider
- Check that your request has the correct structure (e.g., a `messages` array for OpenAI/Anthropic)
- Verify the SDK version is compatible
### Requests Not Being Scanned
Problem: Requests bypass the proxy or fail silently.
Solution:
- Verify you're using the correct base URL: `https://api.lockllm.com/v1/proxy/{provider}`
- Check that you added the provider key to the dashboard
- Ensure the provider key is enabled (not disabled)
- Confirm you're passing your LockLLM API key for authentication
### Azure-Specific Errors
Problem: Azure requests fail with "azure_config_error".
Solution:
- Verify the endpoint URL format: `https://your-resource.openai.azure.com` (no trailing slash)
- Check that the deployment name matches your Azure deployment exactly
- Ensure the API version is compatible (default: `2024-10-21`)
- Confirm you added all required fields in the dashboard:
  - Azure API key
  - Endpoint URL
  - Deployment name
### Rate Limit Exceeded Error
Problem: Too many requests in a short time.
Error message:
```json
{
  "error": {
    "message": "Rate limit exceeded (1000/minute).",
    "type": "rate_limit_error"
  }
}
```
Solution:
- The proxy has a rate limit (default: 1000 requests/minute)
- Implement exponential backoff in your application (see the sketch below)
- Contact [email protected] if you need higher limits
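A simple retry wrapper with exponential backoff might look like the sketch below (it assumes rate-limit responses surface as HTTP 429; adjust the retry count and delays to your traffic):

```javascript
// Sketch: retry a proxied call with exponential backoff on rate-limit errors.
async function withBackoff(fn, maxRetries = 3) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn()
    } catch (error) {
      const isRateLimit = error.status === 429 || error.response?.status === 429
      if (!isRateLimit || attempt === maxRetries) throw error
      const delayMs = 1000 * 2 ** attempt // 1s, 2s, 4s, ...
      await new Promise((resolve) => setTimeout(resolve, delayMs))
    }
  }
}

const response = await withBackoff(() =>
  openai.chat.completions.create({
    model: 'gpt-4',
    messages: [{ role: 'user', content: userPrompt }]
  })
)
```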
## FAQ
### What is BYOK (Bring Your Own Key)?
BYOK means you use your own API keys from OpenAI, Anthropic, etc. You add them to the LockLLM dashboard, and LockLLM proxies requests using your keys. You maintain full control over your keys, and billing stays with your provider account. LockLLM never charges you for the LLM usage itself.
### How does authentication work?
There are two types of API keys:
- Provider API Key (OpenAI, Anthropic, etc.): You add this to the LockLLM dashboard once. It's stored encrypted and never goes in your code.
- LockLLM API Key (starts with `llm_`): You pass this in your SDK configuration. It authenticates your requests and tells the proxy which provider keys to use.
In your code, you only use your LockLLM API key. The proxy handles retrieving and using your provider keys securely.
### Are my provider API keys secure?
Yes! Your API keys are:
- Encrypted at rest using industry-grade encryption
- Stored securely in our database
- Never exposed in API responses or logs
- Never transmitted to your application
- Accessible only to you, through your dashboard
### Which providers are supported?
17 providers with full support:
- OpenAI, Anthropic, Google Gemini, Cohere
- Azure OpenAI, AWS Bedrock, Google Vertex AI
- OpenRouter, Perplexity, Mistral AI
- Groq, DeepSeek, Together AI
- xAI (Grok), Fireworks AI, Anyscale
- Hugging Face
All providers support custom endpoint URLs for self-hosted or alternative endpoints.
### How do I configure Azure OpenAI?
Azure OpenAI requires additional configuration:
- In the dashboard, select Azure OpenAI as provider
- Enter:
  - API key: Your Azure OpenAI key (or Microsoft Entra ID token)
  - Endpoint URL: `https://your-resource.openai.azure.com`
  - Deployment name: Your Azure deployment name (e.g., `gpt-4`)
  - API version (optional): Defaults to `2024-10-21`
- In your code:

```javascript
const openai = new OpenAI({
  apiKey: process.env.LOCKLLM_API_KEY,
  baseURL: 'https://api.lockllm.com/v1/proxy/azure'
})
```
LockLLM supports both Azure API formats (legacy deployment-based and new v1 API).
### Does this add latency to my requests?
Yes, approximately 150-250ms for scanning:
- Scanning: ~100-200ms
- Network overhead: ~50ms
This is minimal compared to typical LLM response times (1-10+ seconds) and provides critical security protection. The proxy caches scan results for identical prompts to reduce latency on repeated requests.
### Will this work with official SDKs?
Yes! Proxy mode works seamlessly with official SDKs:
- OpenAI SDK (Node.js, Python): Just change the `baseURL` parameter (`base_url` in Python)
- Anthropic SDK (Node.js, Python): Just change the `baseURL` parameter (`base_url` in Python)
- Other provider SDKs: Works with any SDK that supports custom base URLs
The proxy is fully compatible with all SDK features including streaming, function calling, and multi-modal inputs.
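For example, streaming through the proxy uses the standard SDK streaming API unchanged (a sketch with the OpenAI Node SDK; the request is scanned before the stream starts):

```javascript
// Sketch: stream a chat completion through the LockLLM proxy.
const stream = await openai.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: userPrompt }],
  stream: true
})

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || '')
}
```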
### Can I use multiple keys for the same provider?
Yes! You can add multiple keys for the same provider with different nicknames (e.g., "Production" and "Development"). However, only one key per provider can be enabled at a time.
To switch between keys, enable/disable them in the dashboard. The proxy automatically uses the enabled key for each provider.
### What happens when a malicious prompt is detected?
The proxy:
- Blocks the request immediately (doesn't forward to the provider)
- Returns a 400 Bad Request error with:
  - Error code: `prompt_injection_detected`
  - Scan result with injection confidence score
  - Unique request ID for tracking
- Logs the event in your dashboard (without storing prompt content)
- Can trigger webhook notifications if configured
You can catch this error in your code and handle it appropriately (e.g., alert security team, log incident, show user error message).
### Do you log or store my prompts?
No. We do not store prompt content. We only log:
- Metadata (timestamp, model, provider, request ID)
- Scan results (safe/malicious, confidence scores)
- Prompt length (character count)
Prompt content is scanned in memory and immediately discarded. This ensures privacy while providing security monitoring.
### How does the detection work?
The proxy scans every request and assigns confidence scores:
- Injection score: 0 (definitely safe) to 1 (definitely malicious)
- High-confidence malicious prompts are automatically blocked
- Safe prompts are forwarded to your provider
The detection system is tuned to balance security and minimize false positives. Contact [email protected] if you need custom detection settings for your use case.
### Can I test the proxy without adding my real API keys?
Yes! You can test with:
- OpenAI-compatible test endpoints: Use a local LLM server or test API
- Custom endpoints: Point to your staging/test environments
- Provider test keys: Use provider-issued test/sandbox keys
Just add the test configuration in the dashboard and point your SDK to the proxy.