OpenAI
Overview
Secure your OpenAI API integration with LockLLM's transparent proxy layer. Route all OpenAI requests through LockLLM for automatic prompt injection scanning without modifying your application logic. Compatible with GPT-5.2, GPT-5, and all OpenAI models including legacy versions.
The integration preserves OpenAI's complete API interface, including streaming responses, function calling, vision capabilities, and all advanced features. Your existing code continues to work exactly as before, with added security protection.
How it Works
Update your OpenAI client's base URL to route through the LockLLM proxy. All requests are intercepted, prompts are scanned for security threats in real-time, and safe requests are forwarded to OpenAI. Malicious prompts are blocked before reaching the model.
The proxy adds 150-250 ms of latency per request and operates transparently. All standard OpenAI features, including streaming, JSON mode, function calling, and vision, work without modification.
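Blocked prompts never reach the model, so your client should be prepared for a request to fail at the proxy. A minimal sketch of handling that case in TypeScript, assuming a blocked prompt surfaces as a non-2xx response with a JSON error body (the 403 status and error shape here are illustrative assumptions, not confirmed LockLLM behavior):

```typescript
// Assumption: a blocked prompt returns a non-2xx response whose JSON body
// carries an error message. The 403 status below is illustrative only.
type ProxyOutcome =
  | { kind: 'ok'; content: string }
  | { kind: 'blocked'; reason: string }
  | { kind: 'error'; status: number };

function classifyResponse(
  status: number,
  body: { error?: { message?: string }; content?: string }
): ProxyOutcome {
  if (status >= 200 && status < 300) {
    // Request passed scanning and was forwarded to OpenAI.
    return { kind: 'ok', content: body.content ?? '' };
  }
  if (status === 403) {
    // Assumed shape for a scan rejection; check LockLLM's docs for
    // the actual status code and error format.
    return { kind: 'blocked', reason: body.error?.message ?? 'prompt blocked' };
  }
  return { kind: 'error', status };
}
```

In practice you would branch on the `blocked` case to show the user a safe fallback message rather than retrying the same prompt.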
Supported Models
All OpenAI models are supported through the proxy:
- GPT-5.2 (latest flagship model)
- GPT-5 series
- GPT-4 Turbo and GPT-4
- GPT-3.5 Turbo
- All vision-enabled models
- Fine-tuned custom models
Quick Start
Update your OpenAI client configuration to route through the proxy:
import OpenAI from 'openai'

const client = new OpenAI({
  baseURL: 'https://api.lockllm.com/v1/proxy/openai',
  apiKey: process.env.OPENAI_API_KEY,
  defaultHeaders: {
    'X-LockLLM-Key': process.env.LOCKLLM_API_KEY
  }
})

// All requests are automatically scanned
const response = await client.chat.completions.create({
  model: 'gpt-5.2',
  messages: [{ role: 'user', content: userInput }]
})

Python Example
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.lockllm.com/v1/proxy/openai",
    api_key=os.environ["OPENAI_API_KEY"],
    default_headers={
        "X-LockLLM-Key": os.environ["LOCKLLM_API_KEY"]
    }
)

# Automatic security scanning
response = client.chat.completions.create(
    model="gpt-5.2",
    messages=[{"role": "user", "content": user_input}]
)

Features
- Automatic Scanning: All prompts scanned without code changes
- Full Model Support: Works with all GPT models and variants
- Streaming Support: Compatible with streaming responses
- Function Calling: Preserves function/tool calling capabilities
- Vision Support: Works with image inputs and GPT-4 Vision
- JSON Mode: Compatible with structured output features
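Because the proxy preserves OpenAI's API interface, these features are enabled with the same request fields you already use. A sketch of a request body combining streaming, JSON mode, and a tool definition (field names follow OpenAI's Chat Completions API; the `get_weather` tool is a hypothetical example):

```typescript
// Illustrative Chat Completions request body: standard OpenAI fields
// pass through the proxy unchanged. The tool below is hypothetical.
const body = {
  model: 'gpt-5.2',
  stream: true,                              // streaming responses
  response_format: { type: 'json_object' },  // JSON mode
  tools: [{
    type: 'function',
    function: {
      name: 'get_weather',
      parameters: {
        type: 'object',
        properties: { city: { type: 'string' } },
      },
    },
  }],
  messages: [{ role: 'user', content: 'Weather in Paris as JSON' }],
};
```

The same object works whether you send it through the SDK or as a raw JSON payload; the proxy scans the prompt content and forwards everything else untouched.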
Configuration
Configure security behavior using request headers:
X-LockLLM-Key: Your LockLLM API key (required)
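If you build requests by hand instead of through an SDK, the header sits alongside OpenAI's standard bearer token. A hedged sketch of constructing the request options (the key values are placeholders, and the helper name is illustrative):

```typescript
// Hypothetical helper: builds fetch-style request options with both the
// standard OpenAI Authorization header and the LockLLM key. Placeholder
// keys only; never hard-code real credentials.
function proxyRequestInit(openaiKey: string, lockllmKey: string, payload: object) {
  return {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${openaiKey}`, // standard OpenAI auth
      'X-LockLLM-Key': lockllmKey,            // required by the proxy
      'Content-Type': 'application/json',
    },
    body: JSON.stringify(payload),
  };
}
```

Pass the result to `fetch` against the proxy base URL; requests missing the `X-LockLLM-Key` header will not be scanned or forwarded.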
Getting Started
Generate API keys in the dashboard, update your client configuration with the proxy URL, and start making secure requests. Visit the documentation for complete setup guides, examples, and best practices.