DeepSeek
Overview
Protect your DeepSeek integration with LockLLM's transparent proxy layer. Route all DeepSeek requests through LockLLM for automatic prompt injection scanning without modifying your application logic. Compatible with DeepSeek V3.2 and all other DeepSeek model variants, including those optimized for reasoning and coding tasks.
How it Works
Update your DeepSeek client's base URL to route through the LockLLM proxy. All requests are intercepted, prompts are scanned for security threats in real time, and safe requests are forwarded to DeepSeek. The proxy maintains DeepSeek's reasoning capabilities while adding security protection.
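Concretely, the base URL is the only code change: a safe prompt returns a normal completion, while a flagged one is never forwarded to DeepSeek. The sketch below assumes a flagged request surfaces as a standard HTTP error through the SDK's APIError class; check the LockLLM documentation for the exact status code and error body the proxy returns.
import OpenAI from 'openai'

const client = new OpenAI({
  baseURL: 'https://api.lockllm.com/v1/proxy/deepseek', // was 'https://api.deepseek.com'
  apiKey: process.env.LOCKLLM_API_KEY
})

try {
  const response = await client.chat.completions.create({
    model: 'deepseek-chat',
    messages: [{ role: 'user', content: 'Summarize my meeting notes.' }]
  })
  console.log(response.choices[0].message.content)
} catch (err) {
  // Assumed behavior: a prompt flagged by the scan is rejected with an
  // HTTP error rather than being forwarded to DeepSeek.
  if (err instanceof OpenAI.APIError) {
    console.error(`Request rejected (status ${err.status}): ${err.message}`)
  } else {
    throw err
  }
}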
Supported Models
All DeepSeek models are supported through the proxy (see the model-selection sketch after this list):
- DeepSeek V3.2 (latest version)
- DeepSeek Coder (specialized for code generation)
- DeepSeek Chat (conversational models)
- All legacy DeepSeek variants
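Model IDs pass through the proxy unchanged (an assumption consistent with the transparent design described above), so you address variants by DeepSeek's own API names, for example deepseek-chat and deepseek-reasoner:
import OpenAI from 'openai'

const client = new OpenAI({
  baseURL: 'https://api.lockllm.com/v1/proxy/deepseek',
  apiKey: process.env.LOCKLLM_API_KEY
})

// General-purpose chat model (serving DeepSeek V3.2 at the time of writing)
const chat = await client.chat.completions.create({
  model: 'deepseek-chat',
  messages: [{ role: 'user', content: 'Explain this stack trace.' }]
})

// Reasoning-optimized variant, exposed by DeepSeek as deepseek-reasoner
const reasoning = await client.chat.completions.create({
  model: 'deepseek-reasoner',
  messages: [{ role: 'user', content: 'Plan this migration in careful steps.' }]
})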
Quick Start
import OpenAI from 'openai'

const client = new OpenAI({
  baseURL: 'https://api.lockllm.com/v1/proxy/deepseek',
  apiKey: process.env.LOCKLLM_API_KEY // Your LockLLM API key
})
// All requests are automatically scanned
const response = await client.chat.completions.create({
  model: 'deepseek-chat', // DeepSeek's API model ID (serving V3.2 at the time of writing)
  messages: [{ role: 'user', content: userInput }]
})
Features
- Automatic Scanning: All prompts scanned without code changes
- Reasoning Support: Compatible with DeepSeek's reasoning capabilities
- Code Generation: Protects code-focused interactions
- Streaming Support: Works with streaming responses (see the sketch after this list)
- Function Calling: Preserves tool use capabilities
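Streaming works the same way it does against DeepSeek directly: per the interception flow described above, the prompt is scanned once when the request is made, then tokens stream back as usual. A minimal sketch using the standard OpenAI SDK streaming interface:
import OpenAI from 'openai'

const client = new OpenAI({
  baseURL: 'https://api.lockllm.com/v1/proxy/deepseek',
  apiKey: process.env.LOCKLLM_API_KEY
})

// stream: true returns an async iterable of chunks; the prompt scan
// happens up front, before the first chunk arrives.
const stream = await client.chat.completions.create({
  model: 'deepseek-chat',
  messages: [{ role: 'user', content: 'Write a haiku about code review.' }],
  stream: true
})

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? '')
}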
Getting Started
Generate API keys in the dashboard and visit the documentation for complete setup guides.