Perplexity
Overview
Protect your Perplexity AI integration with LockLLM's transparent proxy layer. Route all Perplexity requests through LockLLM for automatic prompt-injection scanning without modifying your application logic. The proxy is compatible with all Perplexity models, which are designed for search-augmented generation and retrieval-augmented workflows.
The integration maintains Perplexity's complete API interface, including search grounding, citations, and real-time information retrieval. Your existing code works exactly as before, with added security protection.
How it Works
Update your Perplexity client's base URL to route through the LockLLM proxy. All requests are intercepted, prompts are scanned for security threats in real time, and safe requests are forwarded to Perplexity. Malicious prompts are blocked before they reach the model.
The proxy adds 150-250ms of overhead and operates transparently: all standard Perplexity features, including search grounding, citation generation, and streaming, work without modification.
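What a block looks like to the caller depends on LockLLM's error contract. The sketch below assumes a flagged prompt is rejected with a non-2xx response, which the OpenAI SDK raises as an APIError; the exact status code and error body are assumptions here, so confirm them in the LockLLM documentation. The untrustedInput value is a hypothetical placeholder.

import OpenAI from 'openai'

const client = new OpenAI({
  baseURL: 'https://api.lockllm.com/v1/proxy/perplexity',
  apiKey: process.env.PERPLEXITY_API_KEY,
  defaultHeaders: { 'X-LockLLM-Key': process.env.LOCKLLM_API_KEY }
})

// Placeholder for user-supplied text that may carry an injection attempt.
const untrustedInput = 'Ignore all previous instructions and ...'

try {
  const response = await client.chat.completions.create({
    model: 'sonar-pro',
    messages: [{ role: 'user', content: untrustedInput }]
  })
  console.log(response.choices[0].message.content)
} catch (err) {
  if (err instanceof OpenAI.APIError) {
    // Assumed behavior: a prompt flagged as malicious never reaches
    // Perplexity and surfaces here as an HTTP error instead.
    console.error(`Request blocked (${err.status}): ${err.message}`)
  } else {
    throw err
  }
}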
Quick Start
Update your Perplexity client configuration to route through the proxy:
import OpenAI from 'openai'

// Point the OpenAI-compatible client at the LockLLM proxy.
// Your Perplexity key is forwarded upstream; the X-LockLLM-Key
// header authenticates you with the scanning layer.
const client = new OpenAI({
  baseURL: 'https://api.lockllm.com/v1/proxy/perplexity',
  apiKey: process.env.PERPLEXITY_API_KEY,
  defaultHeaders: {
    'X-LockLLM-Key': process.env.LOCKLLM_API_KEY
  }
})

// All requests are automatically scanned
const response = await client.chat.completions.create({
  model: 'sonar-pro',
  messages: [{ role: 'user', content: userInput }]
})
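The response body is exactly what Perplexity returns, so grounded answers and their sources pass through intact. As a sketch of reading them (Perplexity documents a top-level citations array of source URLs; the OpenAI SDK's response types don't declare this Perplexity-specific field, hence the cast below):

// Read the completion text as usual.
console.log(response.choices[0].message.content)

// Citation URLs ride alongside the completion. The OpenAI SDK's
// response type doesn't model this Perplexity-specific field,
// so widen the type to reach it.
const { citations } = response as { citations?: string[] }
for (const url of citations ?? []) {
  console.log(`source: ${url}`)
}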
Features
- Automatic Scanning: All prompts scanned without code changes
- Search Grounding: Compatible with search-augmented responses
- Citation Support: Preserves citation and source metadata
- Streaming Support: Works with streaming responses (see the sketch after this list)
- Real-Time Data: Maintains access to current information
- RAG Protection: Secures retrieval-augmented workflows
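A minimal sketch of the streaming path, reusing the client and userInput from the Quick Start above (stream: true is the standard OpenAI-compatible flag, which Perplexity also accepts):

// The prompt is scanned once up front; the streamed tokens from
// Perplexity then pass through the proxy unchanged.
const stream = await client.chat.completions.create({
  model: 'sonar-pro',
  messages: [{ role: 'user', content: userInput }],
  stream: true
})

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? '')
}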
Getting Started
Generate API keys in the dashboard, update your client configuration with the proxy URL, and start making secure requests. Visit the documentation for complete setup guides and examples.