LockLLM Changelog

The latest releases, security improvements, and platform updates for LockLLM. Follow along as we ship better protection, faster scanning, and stronger controls.

Chrome Extension - Scan Prompts in Your Browser

Introducing the LockLLM Chrome Extension - scan prompts and documents for prompt injection attacks directly in your browser before pasting them into ChatGPT, Claude, Gemini, or any other AI assistant.

The extension detects prompt injection, jailbreaks, system prompt leaks, role manipulation, agent abuse, RAG injection, and evasion techniques - all in your browser.

Key features:

  • Multiple scanning methods: Popup, right-click context menu, auto-scan on copy/paste, and file upload
  • Works everywhere: ChatGPT, Claude, Gemini, Copilot, and any other AI tool
  • Privacy-first: Only scans when you actively use features, no browsing history or background monitoring
  • File upload support: Scan PDFs, text files, and code files before uploading to AI tools
  • Customizable: Sensitivity levels (high/medium/low) and notification preferences

Install from the Chrome Web Store, add your API key, and start scanning. Check out our Browser Extension documentation for setup guides and features.

Webhooks - Real-Time Security Alerts

We're launching Webhooks - get real-time notifications whenever LockLLM detects a malicious prompt in your application.

Webhooks send instant HTTP POST notifications when prompt injection attacks are detected. Instead of requiring you to poll for results, LockLLM proactively alerts you the moment something malicious is found.

Key features:

  • Multiple formats: Raw JSON, Slack, or Discord
  • HTTPS-only with signature verification (see the sketch after this list)
  • Automatic retry logic (3 attempts)
  • Works with both API scans and Proxy Mode
  • Perfect for security incident response, monitoring dashboards, and team notifications
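
For teams wiring this up, here is a minimal receiver sketch in Python. It assumes LockLLM signs the raw request body with HMAC-SHA256 and sends the hex digest in an X-LockLLM-Signature header; the header name, signing scheme, secret source, and payload shape are all assumptions for illustration, so confirm the actual contract in the Webhooks documentation.

    import hashlib
    import hmac

    from flask import Flask, abort, request

    app = Flask(__name__)

    # Assumed: a signing secret issued alongside the webhook configuration.
    WEBHOOK_SECRET = b"your-webhook-signing-secret"

    @app.route("/lockllm/alerts", methods=["POST"])
    def handle_alert():
        # Verify the signature over the raw body before trusting the payload.
        expected = hmac.new(WEBHOOK_SECRET, request.get_data(), hashlib.sha256).hexdigest()
        received = request.headers.get("X-LockLLM-Signature", "")
        if not hmac.compare_digest(expected, received):
            abort(401)  # reject anything that fails verification

        event = request.get_json()
        # Route the alert wherever your incident response lives.
        print("Malicious prompt detected:", event)
        return "", 204

Since failed deliveries are retried (3 attempts), the handler should respond quickly and hand heavy processing off to a queue or background worker.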

Check out our Webhooks documentation for setup instructions and integration guides.

Proxy Mode (BYOK) - Zero-Code Security

We're excited to announce Proxy Mode (BYOK) - add prompt injection protection to your AI applications with zero code changes.

Simply add your provider API keys to the LockLLM dashboard, change your SDK base URL to https://api.lockllm.com/v1/proxy/{provider}, and all requests are automatically scanned before being forwarded to your provider.
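
As a concrete illustration, here is what the switch looks like with the official OpenAI Python SDK. The base URL pattern comes from above; the "openai" provider segment, the model name, and the use of your LockLLM API key for proxy authentication are assumptions, so check the Proxy Mode documentation for the exact configuration.

    from openai import OpenAI

    client = OpenAI(
        api_key="YOUR_LOCKLLM_API_KEY",  # assumed: LockLLM key authenticates the proxy;
                                         # your OpenAI key stays in the dashboard (BYOK)
        base_url="https://api.lockllm.com/v1/proxy/openai",  # the only change needed
    )

    # Every request is now scanned for prompt injection before being
    # forwarded to the provider.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": "Summarize this report."}],
    )
    print(response.choices[0].message.content)

The rest of your application code is untouched; only the client construction changes.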

Key highlights:

  • Zero code changes - just update your base URL
  • Works with official SDKs (OpenAI, Anthropic, and 17+ providers)
  • Bring Your Own Keys - your API keys stay with you
  • Automatic scanning - every request checked for prompt injection
  • Minimal added latency (~150-250ms) with free unlimited usage

Check out our Proxy Mode documentation for setup instructions and provider configurations.

Weekly Update: LockLLM Enhancements

This week we shipped several improvements across LockLLM to make AI request scanning faster, clearer, and more reliable.

Key updates include refinements to the prompt-injection detection pipeline that improve accuracy, performance optimizations that reduce scan latency, and UI tweaks that make it easier to review results and understand why something was flagged.

We’re continuing to listen to developer feedback and will keep shipping updates that help teams deploy safer AI features with less friction and fewer false positives.
