LockLLM Changelog

The latest releases, security improvements, and platform updates for LockLLM. Follow along as we ship better protection, faster scanning, and stronger controls.

Policies, PII Detection & Response Caching

Three updates to help you get set up faster, keep sensitive data safe, and save on costs.

Policy templates make it easy to get started with custom content policies. Instead of writing descriptions from scratch, you can now browse pre-built templates and add them with one click. Pick the ones that fit your use case, customize if needed, and you're good to go.

PII detection lets you catch personally identifiable information - names, emails, phone numbers, SSNs, credit card numbers, and more - before it ever reaches your AI provider. You choose what happens when PII is found: get a warning, block the request entirely, or automatically strip it out and replace it with placeholders. Just add the PII action header to your scan or proxy requests to enable it.
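
For example, a scan request that strips detected PII might look like the sketch below. The X-LockLLM-PII-Action header name, the /v1/scan endpoint path, and the response shape are illustrative assumptions; the API reference has the exact names and values.

    # A minimal sketch of PII redaction on a scan request. The header name,
    # endpoint path, and response shape below are assumptions for
    # illustration; see the LockLLM API reference for the exact values.
    import requests

    response = requests.post(
        "https://api.lockllm.com/v1/scan",  # assumed scan endpoint
        headers={
            "Authorization": "Bearer YOUR_LOCKLLM_API_KEY",
            "X-LockLLM-PII-Action": "redact",  # hypothetical header: warn | block | redact
        },
        json={"prompt": "My email is jane@example.com and my SSN is 123-45-6789."},
    )
    print(response.json())  # with redaction, PII is replaced by placeholders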

Response caching is now built into proxy mode. Repeated identical requests are served from cache automatically, which means faster responses and lower API costs. It's enabled by default for non-streaming requests.
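
A quick way to see the cache at work, assuming you call an OpenAI-compatible model through LockLLM's proxy (covered in the Proxy Mode entry below), is to send the same non-streaming request twice and compare latency. Authenticating the proxy with your LockLLM API key is an assumption in this sketch.

    # Sketch: the second identical, non-streaming request should come back
    # from the cache noticeably faster. Using a LockLLM API key for proxy
    # auth is an assumption; see the Proxy Mode docs for the exact setup.
    import time
    from openai import OpenAI

    client = OpenAI(
        api_key="YOUR_LOCKLLM_API_KEY",
        base_url="https://api.lockllm.com/v1/proxy/openai",
    )

    for label in ("first call (uncached)", "second call (cached)"):
        start = time.perf_counter()
        client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": "Say hello."}],
        )
        print(f"{label}: {time.perf_counter() - start:.2f}s")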

What's new:

  • Policy templates - Pre-built content policies you can add and customize in one click
  • PII detection and redaction - Detect sensitive data in prompts with options to warn, block, or strip it automatically
  • Response caching - Automatic caching for proxy requests to reduce latency and save on API costs

Browse templates from the Policies page in your dashboard, and check out the API reference for PII setup details.

Organizations - Team Collaboration

LockLLM now supports Organizations for teams that want to manage AI security together.

Create an organization and invite your team to share a single workspace with its own credit balance, custom policies, routing rules, and BYOK API keys. Everyone on the team can see what's happening through shared activity logs, while admins control who can change what.

Switching between your personal account and an organization is seamless - just pick the context from your profile dropdown, and everything updates to show the right resources.

Key features:

  • Shared workspace - One credit balance, one set of policies, and one set of routing rules for the whole team
  • BYOK key sharing - Add provider API keys once and let the entire organization use them
  • Role-based access - Admins manage resources, members get read-only access
  • Context switching - Seamlessly switch between your personal account and any organization from the profile dropdown
  • Unified activity logs - See all team member actions in one place

Head to Create Organization in your dashboard to get started.

Smart Routing & Tier Rewards

We're rolling out Smart Routing and a tier reward system to help you get more out of every request.

Smart Routing analyzes your prompts and automatically picks the best model based on task type and complexity. Simple tasks get routed to efficient models, complex ones go to the heavy hitters - so you're not overpaying for straightforward work. You can let LockLLM handle it automatically, or define your own routing rules per task type from the dashboard.

On top of that, your usage now earns you rewards. Our 10-tier system tracks your monthly spending and upgrades your tier automatically at the start of each month. Higher tiers unlock free monthly credits and increased rate limits, so the more you use LockLLM, the more you get back.

What's new:

  • Auto routing - LockLLM picks the optimal model for each request based on task type and complexity
  • Custom routing rules - Define your own model preferences per task type and complexity tier
  • 10 reward tiers - Earn free monthly credits and higher rate limits based on your usage
  • Automatic tier evaluation - Tiers update on the 1st of each month based on your previous month's spending

Visit the Pricing page for the full tier breakdown and check out the Proxy documentation for routing setup details.

Chrome Extension - Scan Prompts in Your Browser

Introducing the LockLLM Chrome Extension - scan prompts and documents for prompt injection attacks directly in your browser before pasting into ChatGPT, Claude, Gemini, or any AI assistant.

The extension detects prompt injection, jailbreaks, system prompt leaks, role manipulation, agent abuse, RAG injection, and evasion techniques - all in your browser.

Key features:

  • Multiple scanning methods: Popup, right-click context menu, auto-scan on copy/paste, and file upload
  • Works everywhere: ChatGPT, Claude, Gemini, Copilot, and any other AI tool
  • Privacy-first: Scans only when you actively trigger it; no browsing-history collection or background monitoring
  • File upload support: Scan PDFs, text files, and code files before uploading to AI tools
  • Customizable: Sensitivity levels (high/medium/low) and notification preferences

Install from the Chrome Web Store, add your API key, and start scanning. Check out our Browser Extension documentation for setup guides and features.

Webhooks - Real-Time Security Alerts

We're launching Webhooks - get real-time notifications whenever LockLLM detects a malicious prompt in your application.

Webhooks send instant HTTP POST notifications when prompt injection attacks are detected. Instead of polling, LockLLM proactively alerts you the moment something malicious is found.

Key features:

  • Multiple formats: Raw JSON, Slack, or Discord
  • HTTPS-only with signature verification (see the sketch after this list)
  • Automatic retry logic (3 attempts)
  • Works with both API scans and Proxy Mode
  • Perfect for security incident response, monitoring dashboards, and team notifications
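
As an illustration, here is a minimal receiver for the raw JSON format. The X-LockLLM-Signature header name and the HMAC-SHA256-over-raw-body scheme are assumptions; the Webhooks documentation describes the actual verification steps.

    # Hypothetical raw-JSON webhook receiver. The signature header name and
    # the HMAC-SHA256 scheme are assumptions; confirm both against the
    # Webhooks documentation before relying on this.
    import hashlib
    import hmac

    from flask import Flask, abort, request

    app = Flask(__name__)
    WEBHOOK_SECRET = b"your-signing-secret"  # hypothetical shared secret

    @app.post("/lockllm/alerts")
    def handle_alert():
        signature = request.headers.get("X-LockLLM-Signature", "")  # assumed header
        expected = hmac.new(WEBHOOK_SECRET, request.get_data(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(signature, expected):
            abort(401)  # drop payloads that fail verification
        event = request.get_json()
        print("Injection detected:", event)  # e.g., page on-call or post to a dashboard
        return "", 204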

Check out our Webhooks documentation for setup instructions and integration guides.

Proxy Mode (BYOK) - Zero-Code Security

We're excited to announce Proxy Mode (BYOK) - add prompt injection protection to your AI applications with zero code changes.

Simply add your provider API keys to the LockLLM dashboard, change your SDK base URL to https://api.lockllm.com/v1/proxy/{provider}, and all requests are automatically scanned before being forwarded to your provider.
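
With the official Anthropic Python SDK, for instance, the entire change is the base_url argument. How the proxy authenticates you (shown here with a LockLLM API key) is an assumption; the Proxy Mode documentation covers the per-provider setup.

    # A sketch of routing the official Anthropic SDK through LockLLM's proxy.
    # Only base_url changes; your Anthropic key stays in the LockLLM
    # dashboard (BYOK). Authenticating with a LockLLM API key is assumed.
    from anthropic import Anthropic

    client = Anthropic(
        api_key="YOUR_LOCKLLM_API_KEY",  # assumed auth scheme
        base_url="https://api.lockllm.com/v1/proxy/anthropic",  # {provider} = anthropic
    )

    # Every request below is scanned for prompt injection before being
    # forwarded to Anthropic; the rest of your code is unchanged.
    message = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=256,
        messages=[{"role": "user", "content": "Summarize our security policy."}],
    )
    print(message.content[0].text)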

Key highlights:

  • Zero code changes - just update your base URL
  • Works with official SDKs (OpenAI, Anthropic, and all major providers)
  • Bring Your Own Keys - your API keys stay with you
  • Automatic scanning - every request checked for prompt injection
  • Minimal latency (~150-250ms) with free unlimited usage

Check out our Proxy Mode documentation for setup instructions and provider configurations.

Weekly Update: LockLLM Enhancements

This week we shipped several improvements across LockLLM to make AI request scanning faster, clearer, and more reliable.

Key updates include refinements to the prompt-injection detection pipeline for improved accuracy, performance optimizations that reduce scan latency, and UI tweaks that make it easier to review results and understand why something was flagged.

We’re continuing to listen to developer feedback and will keep shipping updates that help teams deploy safer AI features with less friction and fewer false positives.
