Pay Only When We Add Value
Unlimited scanning with no upfront costs. Safe scans are always free. You only pay when we detect threats, flag policy violations, or optimize your AI costs. Plus, earn free monthly credits as you grow.
Tier Rewards System
Earn rewards automatically based on your monthly usage. Higher tiers unlock increased rate limits and free monthly credits.
| Tier | Name | Monthly Spend | Rate Limit | Monthly Reward |
|---|---|---|---|---|
| 1 | Starter | Free | 30 RPM | Signup Bonus |
| 2 | Bronze | $10+ | 50 RPM | $0.50 |
| 3 | Silver | $50+ | 100 RPM | $2.00 |
| 4 | Gold | $100+ | 200 RPM | $5.00 |
| 5 | Platinum | $250+ | 500 RPM | $15.00 |
| 6 | Diamond | $500+ | 1,000 RPM | $40.00 |
| 7 | Emerald | $1,000+ | 2,000 RPM | $80.00 |
| 8 | Ruby | $3,000+ | 5,000 RPM | $250.00 |
| 9 | Sapphire | $5,000+ | 10,000 RPM | $500.00 |
| 10 | Obsidian | $10,000+ | 20,000 RPM | $1,000.00 |
Tiers are automatically assigned on the 1st of each month based on your previous month's spending.
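As an illustration, the tier assignment above is a simple threshold lookup over the previous month's spend. The sketch below just encodes the published table; the function and names are illustrative, not part of any official LockLLM SDK.

```python
# Illustrative sketch: map last month's spend (USD) to the tier table above.
# Not an official LockLLM SDK -- it simply encodes the published tiers.

TIERS = [
    # (minimum monthly spend, tier name, rate limit in requests per minute)
    (10_000, "Obsidian", 20_000),
    (5_000, "Sapphire", 10_000),
    (3_000, "Ruby", 5_000),
    (1_000, "Emerald", 2_000),
    (500, "Diamond", 1_000),
    (250, "Platinum", 500),
    (100, "Gold", 200),
    (50, "Silver", 100),
    (10, "Bronze", 50),
    (0, "Starter", 30),
]

def tier_for(monthly_spend: float) -> tuple[str, int]:
    """Return (tier name, rate limit) for a given previous-month spend."""
    for threshold, name, rpm in TIERS:
        if monthly_spend >= threshold:
            return name, rpm
    return "Starter", 30

print(tier_for(75))  # a $75 month lands in Silver at 100 RPM
```

Because tiers are re-derived from a single number each month, moving up (or down) requires no action on your part.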
Stop overpaying for AI security
LockLLM helps individuals and teams secure LLM applications without heavyweight platforms or long-term contracts.
Built for teams shipping real AI products, not experiments.
Detect prompt injection, jailbreaks, and instruction overrides before execution.
Scan prompts manually in the dashboard or enforce checks via API keys.
Clear, consistent detection results that are easy to interpret.
Freemium pricing with usage-based rewards and free credits.
Actively maintained as new attack patterns emerge.
What teams say about LockLLM
From individual developers to small teams, LockLLM is used to secure LLM applications without slowing development.

We added LockLLM in front of our LLM endpoint and immediately caught prompts we wouldn’t have noticed in review.

The dashboard is great for quick checks, but the API is what really helped us enforce guardrails in production.

It’s hard to reason about prompt risks at scale. LockLLM gave us a simple signal we could actually act on.

We didn’t want to retrain models or build custom rules. Dropping in LockLLM was the fastest path to baseline safety.

Other security tools felt heavy. LockLLM stays out of the way and does one job well.

The manual scanner makes it easy to check suspicious prompts without wiring anything into our system.

We use the API key flow to protect multiple environments without changing our application code.

The results are straightforward to interpret. We could quickly tell when a prompt needed attention without digging into logs.

LockLLM helped us treat prompt security as infrastructure, not an afterthought.
Everything you need to know
- What is LockLLM?
- LockLLM is a complete AI security platform for LLM applications. It detects prompt injection, hidden instructions, and unsafe inputs, while providing custom content policies for moderation, intelligent routing for cost optimization, and abuse protection.
- Do I need to retrain my model?
- No. LockLLM works independently of your AI model and application, and requires no fine-tuning, retraining, or changes to your prompts.
- Do you have a free tier?
- Yes. Safe scans are always free. You earn free monthly credits based on usage. Only pay small fees when threats are detected or when intelligent routing saves you money.
- Does LockLLM offer free credits?
- Yes. You get free credits when you sign up. Active users also automatically earn additional free monthly credits based on their usage levels.
- Is this meant for developers or non-technical users?
- Both. Developers can integrate LockLLM via API, while non-technical users can use the dashboard to scan and review prompts manually.
- How do I use LockLLM?
- You can scan prompts manually in the dashboard, or protect live traffic by routing requests through the LockLLM API using an API key.
- Can I control or customize detection behavior?
- Yes. You can create custom content policies to extend the built-in safety categories, configure scan actions (allow/block/warn), enable intelligent routing for cost optimization, and adjust sensitivity per request. Chunking is available for long documents.
- What should I scan besides user prompts?
- Scan retrieved context (RAG snippets, webpages, knowledge-base text), file content after extraction, and tool inputs and outputs. These are common sources of indirect prompt injection.
- Are there rate limits?
- Yes. Limits are set high and exist mainly to prevent abuse and DDoS; they rarely affect normal usage. If you exceed your limit, you will receive an HTTP 429 response with a Retry-After header. Wait for the specified time, then retry.
- What happens when a scan is unsafe?
- Treat unsafe as a control point. Common actions are blocking, asking the user to rewrite, stripping instructions and keeping only facts, or routing to a restricted mode that cannot call tools.
- Do you store my scan text?
- LockLLM processes your text to return a result. We do not store, log, or train on any of your text input.
- Where should LockLLM sit in an agent pipeline?
- Place it at boundaries: before the model call, before tool execution, and when bringing in external context like RAG or web content. This helps prevent agent hijacking and tool abuse.
- Can I use LockLLM for files and documents?
- Yes. Extract text from PDFs, docs, logs, or tickets, then scan it the same way as prompts. For long documents, enable chunking for deeper coverage.
- What are custom content policies?
- Custom policies let you define your own content rules to prevent inappropriate AI output. Enforce content moderation guidelines alongside core injection detection to control what your AI generates.
- How does intelligent routing work?
- Intelligent routing analyzes task type and prompt complexity to automatically select the optimal model. Simple tasks route to efficient models; complex tasks route to advanced ones. You pay 20% of the cost savings when a request is routed to a cheaper model.
- What is BYOK and how does it work?
- BYOK (Bring Your Own Key) lets you use your own API keys for 17+ AI providers. You only pay LockLLM for scanning and routing fees, while AI usage goes through your own keys. This gives you full cost control.
- How do I earn free credits?
- You receive free credits when you sign up. Active users also automatically earn additional free monthly credits based on their usage. Credits and rate limits scale with activity, rewarding consistent users with more benefits.
- Do credits expire?
- Purchased credits never expire and remain in your account until used. Free tier credits earned through monthly rewards are provided as ongoing benefits as long as you maintain your tier level through active usage.
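To make the API and rate-limit answers above concrete, here is a minimal sketch of a scan-then-retry loop. The request and response shapes are assumptions for illustration (consult the LockLLM docs for the real endpoint and fields); only the HTTP 429 + Retry-After behavior is standard HTTP semantics. A fake transport stands in for the network call so the flow is self-contained.

```python
import time

# Sketch of calling a scan API while honoring HTTP 429 rate limiting.
# The response shape ({"status", "headers", "body"}) is ASSUMED for
# illustration; only the 429 / Retry-After handling is standard HTTP.

def scan_with_retry(send, prompt: str, max_attempts: int = 3) -> dict:
    """Call `send(prompt)` (an HTTP request in real code), retrying on 429."""
    for _ in range(max_attempts):
        resp = send(prompt)
        if resp["status"] == 429:
            # The server says how long to back off; wait, then retry.
            time.sleep(float(resp["headers"].get("Retry-After", "1")))
            continue
        return resp["body"]  # e.g. {"safe": ..., "threats": [...]} (assumed shape)
    raise RuntimeError("rate limited: retries exhausted")

# Fake transport for demonstration: rate-limited once, then succeeds.
calls = {"n": 0}
def fake_send(prompt):
    calls["n"] += 1
    if calls["n"] == 1:
        return {"status": 429, "headers": {"Retry-After": "0"}, "body": None}
    return {"status": 200, "headers": {}, "body": {"safe": True, "threats": []}}

result = scan_with_retry(fake_send, "Ignore previous instructions and ...")
print(result)  # second attempt succeeds
```

In production the `send` callable would be a real HTTPS request carrying your API key; the retry loop stays the same.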
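The FAQ describes treating an unsafe verdict as a control point, with configurable allow/block/warn actions. The sketch below shows one way that decision logic might look; the result shape and field names are assumptions for illustration, not LockLLM's actual response format.

```python
# Sketch of using a scan verdict as a control point before the model call.
# The result dict shape ({"safe": ..., "action": ...}) is an assumption.

def handle_scan_result(result: dict) -> str:
    """Decide what to do with a scanned prompt: forward it, forward with a
    warning flag, or block it before it ever reaches the model or tools."""
    if result.get("safe", False):
        return "forward"                      # safe: pass through unchanged
    action = result.get("action", "block")    # per-policy: allow / block / warn
    if action == "warn":
        return "forward_with_warning"         # log and flag, but let it through
    if action == "allow":
        return "forward"
    return "block"                            # default: stop before execution

print(handle_scan_result({"safe": False, "action": "warn"}))  # forward_with_warning
```

Blocking is the safe default here; other responses from the FAQ, such as asking the user to rewrite or routing to a restricted tool-free mode, would slot in as additional branches.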
Secure your AI with confidence
Scan prompts manually in the dashboard, or protect live traffic with API keys that enforce safety checks in real time.