Integrations
Secure AI across your entire stack
Integrate LockLLM security into your workflow with our REST API, proxy gateway, browser extensions, or upcoming SDKs. Protect LLM applications from prompt injection, jailbreaks, and unsafe instructions with real-time threat detection.
Browser
Scan prompts before sending to ChatGPT, Claude, or any AI interface. Features popup scanning, right-click quick scan, auto-scan on copy/paste, and file upload scanning with real-time threat detection.
Edge ExtensionComing Soon
Microsoft Edge extension in development. Brings the same powerful prompt injection protection to Edge browser with real-time security scanning.
API
Language-agnostic HTTP endpoint for prompt security. Send POST requests to scan prompts for injection and jailbreaks with sub-100ms latency and structured JSON responses.
Automatic security scanning for LLM API requests. Route requests through LockLLM proxy with minimal code changes to protect against prompt injection before requests reach providers.
Drop-in security layer for existing SDK integrations. Change the base URL in OpenAI, Anthropic, and other official SDKs to route all prompts through security scanning.
AI Providers
Route OpenAI API requests through LockLLM proxy. Works with GPT-5.2, GPT-5, and all OpenAI models for transparent security scanning.
Secure Claude API requests with transparent proxy integration. Works with Claude Opus 4.5, Sonnet 4.5, and all Claude model variants.
Add security layer to Gemini 3 Pro, Gemini 3 Flash, and all Google AI models with proxy routing.
Route Cohere API requests through LockLLM. Compatible with Command, Embed, and all Cohere models.
Protection for GPT models hosted on Azure infrastructure. Secure Azure OpenAI Service deployments with transparent proxy routing.
Unified security across multiple LLM providers. Route OpenRouter requests through LockLLM for consolidated threat detection.
Protection for search-augmented generation. Add security layer to Perplexity AI API calls with transparent proxy routing.
Secure Mistral 3, Mistral Large 3, and all Mistral AI models through proxy routing.
Compatible with Llama, Mixtral, and all Groq-hosted models. Route Groq LPU requests through LockLLM for real-time security scanning.
Add security layer to DeepSeek V3.2 and all DeepSeek models with transparent proxy integration.
Protection for open-source model deployments. Secure Together AI platform requests with transparent proxy routing.
Route xAI Grok 4 API requests through LockLLM proxy for real-time security scanning.
Security for serverless platform. Add transparent proxy routing to Fireworks AI for prompt injection protection.
Protection for Ray-powered LLM serving infrastructure. Secure Anyscale Endpoints with transparent proxy routing.
Security for thousands of open-source models. Route Hugging Face Inference API calls through LockLLM proxy.
Protection for Claude, Llama, and AWS-hosted models. Secure Amazon Bedrock foundation models with transparent proxy routing.
Security for PaLM, Gemini, and custom models on GCP. Protect Google Cloud Vertex AI deployments with transparent proxy routing.
SDKs
Python SDKComing Soon
Native Python library in development. Idiomatic integration for Python applications with zero-configuration prompt security.
JavaScript SDKComing Soon
Native JavaScript/TypeScript SDK in development. First-class support for Node.js and browser environments.
Go SDKComing Soon
Native Go library in development. Zero-dependency prompt security for Go applications with clean API design.
Tools
Web-based interface for manual prompt scanning. Upload files, test prompts, and view detailed threat analysis with confidence scores.
Real-time notifications for security events. Configure custom webhooks for Slack, Discord, or any HTTP endpoint with flexible payload formats.
Integration FAQ
Frequently asked questions
- How do I integrate LockLLM into my application?
- You can integrate LockLLM via our REST API, use the reverse proxy to route requests through supported AI SDKs, install our Chrome extension for browser-based protection, or wait for our upcoming SDK for native language integrations.
- What is the LockLLM reverse proxy?
- The reverse proxy allows you to use LockLLM security with existing AI SDKs like OpenAI and Anthropic by simply changing the base URL. All requests are scanned for prompt injection and jailbreaks before reaching the LLM provider.
- Does the Chrome extension work with ChatGPT and Claude?
- Yes. The LockLLM Chrome extension can scan prompts before you send them to web-based AI interfaces like ChatGPT, Claude, and other LLM applications. The security layer is provider-agnostic and scans prompts before they reach any model.
- Which AI providers does the reverse proxy support?
- The reverse proxy supports OpenAI, Anthropic, Google Gemini, Cohere, Azure OpenAI, OpenRouter, Perplexity, Mistral, Groq, DeepSeek, Together AI, xAI (Grok), Fireworks AI, Anyscale, Hugging Face, AWS Bedrock, Google Vertex AI, and other custom endpoints.
- When will the LockLLM SDK be available?
- The native LockLLM SDK is currently in development. It will provide first-class language support for Python, JavaScript/TypeScript, and other popular languages. Sign up to be notified when it launches.
- Do I need to change my existing code?
- With the reverse proxy, you only need to change your base URL. With the REST API, you add a security check before your LLM call. The Chrome extension requires no code changes at all.
- What is the latency overhead?
- The REST API typically adds less than 100ms. The reverse proxy adds around 50ms overhead. Both are optimized for production use with minimal impact on response times.
- Can I use multiple integrations together?
- Yes. You can use the Chrome extension for everyday browsing protection, the API for production traffic, and the reverse proxy for backend services. All methods work with the same API key and security rules.
- How does authentication work?
- All integrations use your LockLLM API key for authentication. Get your key from the dashboard and include it in the Authorization header for the API or X-LockLLM-Key header for the proxy.
- What happens when a malicious prompt is detected?
- The direct API returns a 200 response with the scan results and threat details. The proxy blocks the request with a 400 error before it reaches the LLM provider. The Chrome extension shows a warning and lets you decide whether to proceed.
- Does the reverse proxy support streaming?
- Yes. The reverse proxy supports streaming responses and async requests. It scans the prompt first, then streams the LLM response normally if the prompt passes security checks.
- Can I test integrations before going live?
- Yes. Make test API calls with sample data and view the results in your dashboard activity logs. The dashboard shows all scan results, blocked threats, webhook deliveries, and request details so you can verify your integration is working correctly before production deployment.
Guardrails for every prompt
Secure your AI with confidence
Scan prompts manually in the dashboard, or protect live traffic with API keys that enforce safety checks in real time.
