Anthropic Claude
Overview
Protect your Anthropic Claude API integration with LockLLM's transparent proxy layer. Route all Claude requests through LockLLM for automatic prompt injection scanning without modifying your application logic. Compatible with Claude Opus 4.5, Sonnet 4.5, Haiku, and all Claude model variants.
The integration maintains Claude's complete API interface, including streaming responses, tool use, vision capabilities, and extended context windows. Your existing code works exactly as before, with added security protection.
How it Works
Update your Anthropic client's base URL to route through the LockLLM proxy. All requests are intercepted, prompts are scanned for security threats in real-time, and safe requests are forwarded to Claude. Malicious prompts are blocked before reaching the model.
The proxy adds 150-250ms of latency per request and operates transparently. All standard Claude features, including streaming, tool use, vision, and extended context, work without modification.
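The request path can be sketched with the Python standard library alone. The proxy URL and the X-LockLLM-Key header come from this page; the exact Messages endpoint path behind the proxy and the shape of a blocked-prompt response are assumptions, so treat this as a sketch rather than a definitive client:

```python
import json
import os
import urllib.request

# Assumed full Messages endpoint behind the proxy (base URL from the
# Quick Start below, plus Anthropic's standard /v1/messages path).
PROXY_URL = "https://api.lockllm.com/v1/proxy/anthropic/v1/messages"

def build_proxied_request(user_input: str) -> urllib.request.Request:
    """Build a Claude Messages API request routed through the LockLLM proxy."""
    payload = {
        "model": "claude-opus-4.5",
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": user_input}],
    }
    return urllib.request.Request(
        PROXY_URL,
        data=json.dumps(payload).encode(),
        headers={
            "content-type": "application/json",
            # Anthropic credentials pass through to Claude unchanged...
            "x-api-key": os.environ.get("ANTHROPIC_API_KEY", ""),
            "anthropic-version": "2023-06-01",
            # ...while this header authenticates the request with LockLLM.
            "X-LockLLM-Key": os.environ.get("LOCKLLM_API_KEY", ""),
        },
        method="POST",
    )

req = build_proxied_request("Summarize this document.")
# urllib.request.urlopen(req) would send it; a blocked prompt is assumed
# to surface as an HTTP error status instead of reaching the model.
```

In practice you would use the official SDKs shown in the Quick Start below; this only makes the interception step concrete.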
Supported Models
All Anthropic Claude models are supported through the proxy:
- Claude Opus 4.5 (highest capability)
- Claude Sonnet 4.5 (balanced performance)
- Claude Haiku (fast responses)
- All legacy Claude 3 variants
- Vision-enabled models
- Extended context models
Quick Start
Update your Anthropic client configuration to route through the proxy:
```typescript
import Anthropic from '@anthropic-ai/sdk'

const client = new Anthropic({
  baseURL: 'https://api.lockllm.com/v1/proxy/anthropic',
  apiKey: process.env.ANTHROPIC_API_KEY,
  defaultHeaders: {
    'X-LockLLM-Key': process.env.LOCKLLM_API_KEY
  }
})

// All requests are automatically scanned
const response = await client.messages.create({
  model: 'claude-opus-4.5',
  max_tokens: 1024,
  messages: [{ role: 'user', content: userInput }]
})
```

Python Example
```python
import os

from anthropic import Anthropic

client = Anthropic(
    base_url="https://api.lockllm.com/v1/proxy/anthropic",
    api_key=os.environ["ANTHROPIC_API_KEY"],
    default_headers={
        "X-LockLLM-Key": os.environ["LOCKLLM_API_KEY"]
    }
)

# Automatic security scanning
response = client.messages.create(
    model="claude-opus-4.5",
    max_tokens=1024,
    messages=[{"role": "user", "content": user_input}]
)
```

Features
- Automatic Scanning: All prompts scanned without code changes
- Full Model Support: Works with all Claude models and variants
- Streaming Support: Compatible with streaming responses
- Tool Use: Preserves Claude's tool/function calling capabilities
- Vision Support: Works with image inputs and vision models
- Extended Context: Compatible with large context windows
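Streaming compatibility means the proxy is assumed to forward Claude's server-sent events unchanged; with the official SDKs you would simply use their streaming helpers (e.g. `client.messages.stream(...)` in Python). As a minimal sketch of what "unchanged" implies, this helper pulls incremental text out of one `data:` line of Anthropic's SSE stream; the function name and sample payload are illustrative:

```python
import json

def extract_text_delta(sse_data_line: str) -> str:
    """Return the text fragment from one `data:` line of an Anthropic SSE stream.

    Anthropic's streaming format delivers incremental text in
    `content_block_delta` events; other event types carry no text.
    """
    payload = json.loads(sse_data_line.removeprefix("data:").strip())
    if payload.get("type") == "content_block_delta":
        return payload.get("delta", {}).get("text", "")
    return ""

# One event as it appears on the wire (example payload):
line = 'data: {"type": "content_block_delta", "index": 0, "delta": {"type": "text_delta", "text": "Hello"}}'
print(extract_text_delta(line))  # -> Hello
```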
Configuration
Configure security behavior using request headers:
- X-LockLLM-Key: Your LockLLM API key (required)
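When wiring clients by hand, it helps to keep the two credentials straight: the Anthropic key goes in the client's usual api_key slot, and only the LockLLM key travels as an extra header. A tiny helper, assuming nothing beyond the header name documented above:

```python
def lockllm_headers(lockllm_key: str) -> dict[str, str]:
    """Extra headers for any client routed through the LockLLM proxy.

    Only the LockLLM key goes here; the Anthropic API key is still
    supplied through the SDK's normal api_key parameter.
    """
    if not lockllm_key:
        raise ValueError("X-LockLLM-Key is required for proxied requests")
    return {"X-LockLLM-Key": lockllm_key}

print(lockllm_headers("sk-lockllm-example"))
# -> {'X-LockLLM-Key': 'sk-lockllm-example'}
```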
Getting Started
Generate API keys in the dashboard, update your client configuration with the proxy URL, and start making secure requests. Visit the documentation for complete setup guides, examples, and best practices.