Dashboard Guide

Comprehensive guide to the LockLLM dashboard - manage API keys, scan prompts, configure policies, set up routing, explore attack templates, monitor activity, and track billing.

Dashboard Overview

The LockLLM dashboard is your central hub for managing every aspect of your AI security setup. Access it at https://www.lockllm.com/dashboard after signing in.

The dashboard is organized into six main sidebar sections, each containing related pages:

Dashboard

  • API Keys - Create and manage LockLLM API keys
  • Playground - Interactively scan prompts for threats

Analytics

  • Overview - Visual summary of API activity, costs, threats, and performance
  • Usage - Track your spend and credit transaction history
  • Logs - View scan history, webhook deliveries, and proxy request logs

Proxy

  • Supported Models - Browse available AI models and proxy pricing
  • Proxy (BYOK) - Add your own provider API keys
  • Custom Routing - Configure smart model routing rules

Red Team

  • Templates - Explore documented LLM attack techniques and test them against your setup

Settings

  • General - Account settings and account deletion
  • Webhooks - Configure real-time alert notifications
  • Custom Policies - Create and manage custom content policies

Other

  • Billing - View your credit balance, tier, and spending progress
  • Roadmap - See completed, in-progress, and planned features

Top Navigation Bar

The top bar of the dashboard provides quick access to several global features:

  • Search - Opens a search dialog to quickly find and navigate to any dashboard page. Pages are organized by category (Core, Security, Settings, Developers) with descriptions.
  • Notifications - View in-app notifications for blocked requests, policy violations, webhook delivery failures, low credit balance warnings, tier progression updates, and system updates.
  • Need help? - Quick access to help resources and support.
  • Theme Toggle - Switch between light and dark mode.
  • Account Menu - Opens a dropdown showing your current workspace context (Personal Account or an organization). From here you can switch between your personal account and any organizations you belong to (with role badges showing Admin or Member), create a new organization, or sign out.

The sidebar can also be collapsed or expanded using the toggle button at the bottom of the sidebar, giving you more screen space when needed.

Managing LockLLM API Keys

The API Keys page is the default dashboard landing page. Here you create and manage the API keys used to authenticate with LockLLM services.

Creating an API Key

  1. Navigate to Dashboard > API Keys
  2. Click Add new key
  3. Enter a descriptive name (e.g., "Production API", "Development")
  4. Click Create
  5. Copy the key immediately - you will not be able to see it again
  6. Store it securely in your environment variables
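Step 6 can look like the following in Node.js. This is a small hypothetical helper for illustration (`buildAuthHeaders` is not a LockLLM SDK function): it reads the key from an environment variable instead of hardcoding it, and builds the Bearer `Authorization` header the API expects.

```javascript
// Hypothetical helper: read the LockLLM key from the environment
// (never hardcode it) and build the Authorization header.
function buildAuthHeaders(env = process.env) {
  const apiKey = env.LOCKLLM_API_KEY
  if (!apiKey) throw new Error('LOCKLLM_API_KEY is not set')
  return { Authorization: `Bearer ${apiKey}` }
}
```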

Viewing API Keys

The API Keys table shows:

  • Name - The descriptive name you assigned
  • Last used - When the key was last used to make a request
  • Actions - Delete the key

Note: Full API keys are never displayed after creation for security. Only the key name and metadata are visible.

Revoking API Keys

Delete API keys you no longer need:

  1. Find the key in your list
  2. Click the Delete action
  3. Confirm deletion
  4. The key stops working immediately

Important: Deleted keys cannot be recovered. Create a new key if needed.

Searching Keys

Use the search bar at the top of the API Keys table to filter keys by name. This is useful when you have many keys across different projects or environments.

Playground (Prompt Scanner)

The Playground lets you interactively test prompts for security threats before deploying them in your application. It has two tabs: Scanner and API Examples.

Scanner Tab

The Scanner tab provides a hands-on interface to scan prompts:

  1. Navigate to Dashboard > Playground
  2. Enter your API key in the API key field (use the eye icon to show/hide it)
  3. In the Prompt to scan area, paste or type the text you want to test
  4. Configure scan options:
    • Sensitivity dropdown - Choose low, medium, or high detection sensitivity
    • Scan Mode dropdown - Choose normal (core security only), policy (custom policies only), or combined (both)
    • Replace / Append toggle - Control whether new text replaces or appends to existing content
  5. Optionally click Upload file to scan text from a file
  6. Click Scan to run the analysis

The scanner returns results showing whether the prompt is safe or malicious, along with confidence scores and any detected threat categories.
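The same scan can be run programmatically. In this sketch the endpoint path (`/v1/scan`) and the response field names (`safe`, `confidence`, `categories`) are assumptions for illustration only; the Playground's API Examples tab shows the exact request and response shapes. The `input`, `mode`, and `sensitivity` parameters match the options described above.

```javascript
// Sketch: scan a prompt via the API. Endpoint path and response fields
// are assumptions -- check the API Examples tab for the real shapes.
async function scanPrompt(apiKey, input) {
  const res = await fetch('https://api.lockllm.com/v1/scan', {  // assumed path
    method: 'POST',
    headers: {
      Authorization: `Bearer ${apiKey}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ input, mode: 'combined', sensitivity: 'medium' }),
  })
  return res.json()
}

// Pure helper: turn a result of the assumed shape into a one-line verdict.
function summarizeScan(result) {
  if (result.safe) return `safe (confidence ${result.confidence})`
  return `malicious: ${result.categories.join(', ')} (confidence ${result.confidence})`
}
```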

API Examples Tab

The API Examples tab helps you configure and test your API integration. It contains two sub-tabs:

Advanced Settings

Configure how LockLLM processes your requests:

  • Scan Configuration

    • Scan Action - Choose Allow with Warning (let requests through with warnings) or Block (reject unsafe requests)
    • Policy Action - Same options for custom policy violations
  • Advanced Features

    • Abuse Detection toggle - Enable detection of bot-generated content, repetition patterns, and resource exhaustion attempts
    • Smart Routing toggle - Enable automatic routing of requests to optimal models based on task complexity
  • Current Settings - Summary of your active configuration (mode, sensitivity, scan action, policy action)

Code Examples

Ready-to-use code snippets in multiple languages for integrating LockLLM into your application:

  • cURL - Command-line examples
  • JavaScript - Node.js integration examples
  • Python - Python integration examples

Each example includes all necessary headers and can be copied with one click.

Analytics Overview

The Analytics Overview page provides a visual summary of your API activity, costs, threats, and performance metrics over a selected time period.

Time Range

Use the time range buttons in the top-right corner to filter all analytics data:

  • 24h - Last 24 hours
  • 7d - Last 7 days
  • 30d - Last 30 days

Cost Summary

The top row of cards shows your spending breakdown:

  • Total Spend - Total credits spent across all operations
  • Scan Fees - Credits spent on scan detections
  • Proxy Usage - Credits spent on LLM proxy requests
  • Total Savings - Money saved through smart routing and compression

Activity Summary

The next row provides key activity metrics:

  • Total Requests - Total number of API requests, with a breakdown by provider (e.g., OpenAI, Anthropic, Perplexity)
  • Total Tokens - Total tokens processed, split into input and output tokens
  • Threats Blocked - Number of threats detected, with percentage rate and breakdown by type (e.g., Injection, Policy Violation)

Savings Charts

Two charts visualize your cost savings over the selected time period:

  • Compression Savings - Token savings from prompt compression
  • Routing Savings - Cost savings from smart routing to cheaper models

Performance Metrics

Additional cards track operational performance:

  • Avg Latency - Average response time for proxy requests
  • Cache Hit Rate - Percentage of requests served from cache
  • BYOK Requests - Number of requests using your own provider API keys

Activity Over Time

A line chart shows request volume and threat detections plotted over time, helping you spot trends and anomalies.

Usage

The Usage page shows your complete credit transaction history, helping you track every spend and credit event.

Transaction History

The page displays a table of all credit transactions with the following columns:

  • Date - When the transaction occurred
  • Type - Transaction category (e.g., purchase, deduction, refund, adjustment)
  • Description - Details about what the transaction was for
  • Amount - Credit amount added or deducted
  • Balance - Your credit balance after the transaction

Filtering Transactions

Use the filter controls at the top of the table to narrow results:

  • Date range - Filter transactions by a specific date range
  • Type - Filter by transaction type (e.g., all types, purchases, deductions)

Viewing Activity Logs

The Activity Logs page shows a complete history of all your scan requests, webhook deliveries, and API key events.

Filtering Logs

Use the controls at the top of the page to narrow down your log entries:

  • Search - Search by request ID, URL, or error message
  • Filters - Click the Filters button to access filtering options including log type (Scan API, Proxy Request, Webhook Delivery), status (Success, Failure, Error), and date range
  • Export - Export your filtered log data for external analysis

Log Information

Each log entry shows:

  • Timestamp - When the event occurred
  • Type - Scan API, Proxy Request, or Webhook Delivery
  • Status - Success, Failure, or Error
  • Scan Result - Safe/malicious with confidence score
  • Policy Violations - Which custom policies were triggered
  • Provider (proxy only) - Which AI provider was used
  • Model (proxy only) - Which model was called or routed to
  • Routing Info (proxy only) - Task type, complexity, and routing decisions
  • Credits Used - Detection fees, routing fees, and LLM usage costs
  • Request ID - Unique identifier for tracking

Privacy: Logs contain metadata only. Your prompts are never stored or logged.

Log Retention

Logs are retained for 30 days, then automatically deleted. This provides a reasonable audit trail while respecting your privacy.

Supported Models

The Supported Models page lets you browse all available AI models when using LockLLM's proxy in non-BYOK mode.

Browsing Models

  1. Navigate to Proxy > Supported Models
  2. Use the search bar to filter models by name or model ID
  3. The table displays:
    • Model Name - The display name of the model
    • Model ID - The identifier to use in API requests
    • Input Pricing - Cost per 1M input tokens
    • Output Pricing - Cost per 1M output tokens

Free Model Pricing

Models with $0 pricing are charged a minimum of $0.01 per 1M input tokens and $0.01 per 1M output tokens to prevent abuse. For free models, it is recommended to configure your own OpenRouter API key via BYOK settings.
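The minimum charge above can be expressed as a one-liner. This helper is illustrative only (not an official LockLLM API) and mirrors the stated rule: free models are billed at the $0.01-per-1M-token floor in non-BYOK mode.

```javascript
// Illustrative: effective per-1M-token price in non-BYOK mode,
// applying the $0.01 minimum that prevents abuse of free models.
function effectivePricePer1M(listedPrice) {
  return Math.max(listedPrice, 0.01)
}
```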

Managing Provider API Keys (BYOK)

The Proxy (BYOK) page lets you add your own provider API keys to use with LockLLM's proxy. BYOK (Bring Your Own Key) users pay their AI provider directly and are not charged for LLM usage through LockLLM credits.

Adding a Provider Key

  1. Navigate to Proxy > Proxy (BYOK)
  2. Click Add API Key
  3. Select your provider (OpenAI, Anthropic, Gemini, Cohere, OpenRouter, Perplexity, Mistral, Groq, Azure OpenAI, and more)
  4. Enter your provider's API key
  5. Add a nickname (optional, helpful for multiple keys per provider)
  6. For Azure OpenAI: Enter endpoint URL, deployment name, and API version
  7. Click Save

Your provider API key is encrypted and securely stored.

How to Use the Proxy

After adding your provider key, change your SDK's base URL to route through LockLLM. The page displays provider-specific base URLs:

  • OpenAI: https://api.lockllm.com/v1/proxy/openai
  • Anthropic: https://api.lockllm.com/v1/proxy/anthropic
  • Gemini: https://api.lockllm.com/v1/proxy/gemini
  • Cohere: https://api.lockllm.com/v1/proxy/cohere
  • OpenRouter: https://api.lockllm.com/v1/proxy/openrouter
  • Perplexity: https://api.lockllm.com/v1/proxy/perplexity
  • Mistral: https://api.lockllm.com/v1/proxy/mistral
  • Groq: https://api.lockllm.com/v1/proxy/groq
  • Azure OpenAI: https://api.lockllm.com/v1/proxy/azure

Additional providers (DeepSeek, Together AI, xAI, Fireworks, Anyscale, Hugging Face, AWS Bedrock, Google Vertex AI) are also supported. Custom endpoints are available for providers not listed here.

Important: When using the proxy (BYOK), authenticate with your LockLLM API key using the Authorization: Bearer header instead of your provider's API key.
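The base-URL swap above can be captured in a small helper. `proxyBaseURL` is illustrative (not a LockLLM SDK function) and covers only the provider paths listed above; additional providers and custom endpoints exist beyond this list.

```javascript
// Illustrative helper: map a provider name to the LockLLM proxy base URLs
// documented above. Other providers may be reachable via custom endpoints.
function proxyBaseURL(provider) {
  const documented = ['openai', 'anthropic', 'gemini', 'cohere', 'openrouter',
                      'perplexity', 'mistral', 'groq', 'azure']
  if (!documented.includes(provider)) {
    throw new Error(`no documented proxy path for "${provider}"`)
  }
  return `https://api.lockllm.com/v1/proxy/${provider}`
}

// Usage with the OpenAI SDK (requires the `openai` package):
// const openai = new OpenAI({
//   apiKey: process.env.LOCKLLM_API_KEY,  // LockLLM key, not the provider key
//   baseURL: proxyBaseURL('openai'),
// })
```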

Viewing Provider Keys

See all configured provider keys with:

  • Provider name
  • Nickname
  • Endpoint URL (for Azure/custom endpoints)
  • Last used timestamp
  • Enable/disable toggle
  • Delete option

Enable/Disable Provider Keys

Temporarily disable keys without deleting:

  1. Find your key in Proxy (BYOK)
  2. Click the toggle switch
  3. Disabled keys will not be used for proxy requests

Deleting Provider Keys

Permanently remove provider keys:

  1. Find the key in the list
  2. Click Delete
  3. Confirm deletion
  4. The key is permanently removed

Need Help?

If you cannot find your provider or need assistance, email [email protected].

Managing Custom Routing Rules

The Custom Routing page lets you configure smart model routing rules. Routing automatically selects the optimal AI model based on the task type and complexity of each request, helping you optimize cost and quality.

How Routing Works

Routing mode is controlled by the x-lockllm-route-action header in your API requests:

  • disabled (default) - No routing, use the original model you specified
  • auto - Automatic routing based on AI-powered task classification and complexity analysis
  • custom - Use your custom routing rules defined in the dashboard

Creating a Routing Rule

  1. Navigate to Proxy > Custom Routing
  2. Click Add Rule
  3. Select Task Type (Code Generation, Summarization, Chatbot, etc.)
  4. Select Complexity Tier (low, medium, high)
  5. Choose Target Model (e.g., claude-sonnet-4-6, gpt-4)
  6. Select Provider Preference
  7. Choose Use BYOK (yes/no):
    • Yes: Uses your provider API key (no LLM usage charge from LockLLM)
    • No: Uses LockLLM credits
  8. Click Save

Supported Task Types

  • Open QA - Open-ended questions
  • Closed QA - Factual questions with specific answers
  • Summarization - Content condensing
  • Text Generation - Creative writing
  • Code Generation - Programming tasks
  • Chatbot - Conversational interactions
  • Classification - Categorization tasks
  • Rewrite - Content editing
  • Brainstorming - Idea generation
  • Extraction - Information extraction
  • Other - Uncategorized tasks

Complexity Tiers

Prompts are analyzed and assigned complexity:

  • Low (0-0.4): Simple, straightforward tasks
  • Medium (0.4-0.7): Moderate complexity
  • High (0.7-1.0): Complex, nuanced tasks
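The tier boundaries above translate directly into a lookup. This is a sketch, and whether a boundary score such as exactly 0.4 falls into the lower or upper tier is an assumption here, not something the guide specifies.

```javascript
// Sketch of the complexity tiers above. Boundary handling (e.g. whether
// exactly 0.4 is low or medium) is an assumption for illustration.
function complexityTier(score) {
  if (score < 0 || score > 1) throw new RangeError('score must be in [0, 1]')
  if (score < 0.4) return 'low'
  if (score < 0.7) return 'medium'
  return 'high'
}
```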

Routing Rule Priority

When multiple rules could apply:

  1. Custom rules (your dashboard rules) take priority
  2. Falls back to auto-routing if no custom rule matches
  3. Falls back to original model if routing fails

Enabling Routing

Use routing in proxy mode with headers:

import OpenAI from 'openai'

const openai = new OpenAI({
  apiKey: process.env.LOCKLLM_API_KEY,
  baseURL: 'https://api.lockllm.com/v1/proxy/openai',
  defaultHeaders: {
    'X-LockLLM-Route-Action': 'custom'  // Use custom routing rules
    // Or 'auto' for automatic routing
    // Or 'disabled' to disable routing
  }
})

Routing Fees

You only pay when routing saves you money:

  • Routing to cheaper model: 5% of cost savings
  • Routing to same/more expensive model: FREE
  • Routing disabled: FREE

Learn more about Smart Routing

Red Team Templates

The Templates page provides a library of documented LLM attack techniques organized by category. This is a security research tool that helps you understand known vulnerabilities and test your defenses.

Browsing Templates

  1. Navigate to Red Team > Templates
  2. Use the search bar to find specific attacks by name
  3. Use the category filter dropdown to filter by attack type

Attack Categories

Templates are organized into the following categories:

  • Jailbreak Attacks - Techniques that attempt to bypass AI safety guardrails (e.g., Persuasive Adversarial Prompting, DAN Evolution, Controlled-Release Prompting)
  • System Prompt Extraction - Methods to leak system prompts and internal instructions
  • Instruction Override - Attacks that attempt to override the AI's original instructions
  • RAG Poisoning - Techniques to inject malicious content through retrieval-augmented generation pipelines
  • Data Exfiltration - Methods to extract sensitive data from AI systems
  • Multi-Turn Attacks - Sophisticated attacks that unfold across multiple conversation turns
  • Context Manipulation - Techniques that exploit context window behavior
  • Obfuscation & Encoding - Attacks that use encoding, unicode tricks, or visual obfuscation to evade detection
  • Tool/Function Abuse - Exploits targeting AI tool-use capabilities and agent frameworks
  • Prompt Injection - Direct and indirect prompt injection techniques

Using Templates

Each template card displays:

  • Attack name - The name of the documented technique
  • Effectiveness score - A percentage indicating the reported effectiveness against unprotected AI models
  • Expand button (+) - Click to view the full template details

You can use these templates to:

  • Test your LockLLM configuration against known attack patterns
  • Understand the threat landscape for LLM applications
  • Validate that your custom policies catch specific attack types
  • Educate your security team on emerging AI threats

Managing Custom Policies

The Custom Policies page lets you create your own content rules that extend LockLLM's built-in AI safety protections. Custom policies allow you to enforce compliance requirements, brand guidelines, or industry-specific restrictions.

Creating a Custom Policy

  1. Navigate to Settings > Custom Policies
  2. Click Create Policy
  3. Enter a policy name (e.g., "No Medical Advice")
  4. Write a detailed description (up to 10,000 characters):
    • Be specific about what should be blocked
    • Include examples of violations
    • Clarify what is allowed vs. blocked
  5. Click Save
  6. Enable the policy (toggle switch)

Example Policy:

Name: Professional Boundaries

Description:
Block requests asking for:
- Medical diagnoses or treatment recommendations
- Prescription medication advice
- Interpretation of lab results or imaging
- Legal case interpretation or advice
- Financial investment recommendations
- Tax preparation guidance

Allow general information about health, law, or finance
without specific personal advice.

Browse Templates

The Custom Policies page includes a Browse Templates button that opens a library of pre-built policy templates. These provide ready-made policies for common use cases that you can adopt directly or customize.

Viewing Custom Policies

Your policies list shows:

  • Policy name
  • Description preview
  • Enabled/disabled status
  • Created date
  • Edit and delete options

Editing Policies

Update existing policies:

  1. Find the policy in your list
  2. Click Edit
  3. Update name or description
  4. Click Save

Changes take effect immediately for new scans.

Enabling/Disabling Policies

Toggle policies without deleting them:

  1. Find the policy
  2. Click the toggle switch
  3. Disabled policies are not checked during scans

Deleting Policies

Permanently remove policies:

  1. Find the policy
  2. Click Delete
  3. Confirm deletion

Note: Deleted policies cannot be recovered. Disable instead if you might need them later.

Using Custom Policies

After creating policies, use them in scans:

Direct API:

{
  "input": "Your text",
  "mode": "combined"  // Checks both security + custom policies
}

Proxy Mode:

import OpenAI from 'openai'

const openai = new OpenAI({
  apiKey: process.env.LOCKLLM_API_KEY,
  baseURL: 'https://api.lockllm.com/v1/proxy/openai',
  defaultHeaders: {
    'X-LockLLM-Policy-Action': 'block'  // Block policy violations
  }
})

Learn more about Custom Policies

Setting Up Webhooks

The Webhooks page lets you configure real-time notifications when security events occur.

Creating a Webhook

  1. Navigate to Settings > Webhooks
  2. Click Add webhook
  3. Enter your webhook URL (must be HTTPS)
  4. Select format:
    • Raw JSON - Complete data for custom processing
    • Slack - Pre-formatted for Slack incoming webhooks
    • Discord - Pre-formatted for Discord webhooks
  5. Optionally add a secret for signature verification
  6. Optionally add a custom message
  7. Click Save

Learn more about Webhooks

Testing Webhooks

Test webhook delivery before using in production:

  1. Find your webhook in the list
  2. Click Test
  3. LockLLM sends a test payload to your URL
  4. Check your endpoint receives the payload
  5. Verify it responds with 200 OK
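A minimal Node.js endpoint satisfying steps 4-5 (accept the payload, answer 200 OK) can be sketched as follows. This uses only the standard `node:http` module; if you configured a webhook secret, signature verification would go where the comment indicates (the exact header name and scheme are covered in the webhook docs, not assumed here).

```javascript
import http from 'node:http'

// Collect the request body, then acknowledge delivery with 200 OK.
function createWebhookServer(onPayload = console.log) {
  return http.createServer((req, res) => {
    let body = ''
    req.on('data', chunk => { body += chunk })
    req.on('end', () => {
      // If you configured a secret, verify the signature here before
      // trusting the payload (see the webhook documentation for the scheme).
      onPayload(body)
      res.writeHead(200, { 'Content-Type': 'text/plain' })
      res.end('OK')  // LockLLM expects a 200 response to record success
    })
  })
}

// createWebhookServer().listen(3000)
```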

Managing Webhooks

View all configured webhooks with:

  • Webhook URL
  • Format (Raw/Slack/Discord)
  • Enabled status
  • Created date

Actions:

  • Edit - Update URL, format, or secret
  • Test - Send test payload
  • Enable/Disable - Toggle without deleting
  • Delete - Permanently remove

General Settings

The General page manages your account settings.

Your Email

Displays your email address associated with your LockLLM account.

Delete Personal Account

Permanently remove your personal account and all of its contents from LockLLM. This action is not reversible and will delete:

  • All API keys
  • All provider keys (BYOK)
  • All custom policies
  • All routing rules
  • All activity logs
  • Your credit balance

Warning: This cannot be undone. Proceed with caution.

Billing and Credits

The Billing page shows your credit balance, current tier, and spending progress. It is organized into two tabs: Overview and Tiers.

Overview Tab

The Overview tab displays your current billing status:

  • Credit balance - Your current balance in USD under the "Pay as you go" heading
  • Add to credit balance button - Purchase additional credits
  • Current Tier card - Shows your tier name and number (e.g., Starter, Tier 1), rate limit, monthly reward, and progress toward the next tier with a progress bar
  • An info note: "All usage spending counts toward tier progress"

Tiers Tab

The Tiers tab shows the full tier table with all 10 tiers, including tier number, name, monthly spending requirement, rate limit, and monthly reward. A note at the bottom explains that tiers are automatically evaluated on the 1st of each month based on your previous month's spending.

Understanding Charges

LockLLM uses pay-per-detection pricing:

Detection Fees (only charged when threats found):

  • Safe prompts: FREE
  • Unsafe core scan: $0.0001
  • Policy violation: $0.0001
  • Both unsafe: $0.0002

Compression Fees (opt-in):

  • TOON (JSON compression): FREE
  • Compact (any text): $0.0001 per use

Routing Fees (only when saving money):

  • Routing to cheaper model: 5% of cost savings
  • Routing to same/more expensive model: FREE
  • Routing disabled: FREE

LLM Usage:

  • BYOK (Bring Your Own Key): FREE (you pay provider directly)
  • Non-BYOK (universal endpoint): Variable via LockLLM credits
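As a worked example of the fee schedule above, the two helpers below mirror the listed prices. They are illustrative only, not part of any official LockLLM SDK.

```javascript
// Illustrative: detection fees from the schedule above (USD per scan).
function detectionFee({ unsafeCore = false, policyViolation = false } = {}) {
  let fee = 0
  if (unsafeCore) fee += 0.0001        // unsafe core scan
  if (policyViolation) fee += 0.0001   // custom policy violation
  return fee                           // 0 for safe prompts
}

// Illustrative: 5% of savings when routing to a cheaper model; free otherwise.
function routingFee(costSavings) {
  return costSavings > 0 ? 0.05 * costSavings : 0
}

// e.g. 10,000 scans where 2% trip the core scan only:
// 200 * detectionFee({ unsafeCore: true }) = $0.02 in detection fees
```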

Tier System

LockLLM features a 10-tier progressive system based on monthly spending:

Tier | Monthly Spending | Free Credits  | Max RPM
-----|------------------|---------------|--------
1    | $0               | $0/month      | 300
2    | $10              | $0.50/month   | 500
3    | $50              | $2/month      | 1,000
4    | $100             | $5/month      | 2,000
5    | $250             | $15/month     | 5,000
6    | $500             | $40/month     | 10,000
7    | $1,000           | $80/month     | 20,000
8    | $3,000           | $250/month    | 50,000
9    | $5,000           | $500/month    | 100,000
10   | $10,000          | $1,000/month  | 200,000

View full tier benefits

Monthly Tier Evaluation

Tiers are automatically evaluated on the 1st of each month:

  • Spending calculation - Based on actual credits deducted (not purchases)
  • Tier adjustment - Upgrade, maintain, or downgrade based on spending
  • Credit distribution - Free tier credits awarded to eligible users
  • Reset - Monthly spending resets to $0 for new month

Cost Optimization Tips

1. Use the Universal Endpoint (Non-BYOK):

  • Same LLM costs as BYOK (no surcharge)
  • Free tier credits offset your total spending
  • Access 200+ models with a single LockLLM API key

2. Enable Smart Routing:

  • Automatically routes simple tasks to cheaper models
  • Only pay 5% fee on actual cost savings
  • Save 60-80% on routine operations

3. Block Malicious Requests Early:

  • Each blocked request saves LLM API costs
  • Pay $0.0001-$0.0002 detection fee instead of full LLM cost

4. Leverage Free Tier Credits:

  • Higher tiers unlock more free monthly credits
  • Regular usage automatically unlocks higher tiers

5. Use Prompt Compression:

  • TOON compression is free for JSON data (30-60% token savings)
  • Compact compression costs $0.0001 per use for any text (30-70% token savings)
  • Savings increase with longer prompts and more expensive models

6. Monitor Your Dashboard:

  • Review the Analytics Overview regularly to track spend, threats, and savings
  • Check the Usage page to audit every credit transaction

Organizations

LockLLM supports organizations for team collaboration. Users can create organizations to share resources with team members.

Creating an Organization

  1. Open the Search dialog or navigate to Create Organization
  2. Enter your organization name (3-100 characters)
  3. Confirm creation
  4. You will be switched to the organization context

Shared Resources

Organizations share:

  • Credit balances (separate from personal balance)
  • Custom content policies
  • Routing rules
  • BYOK API keys
  • Activity logs

Switching Contexts

Switch between personal and organization contexts via:

  • The Account dropdown in the top navigation bar
  • Select your personal account or an organization

All dashboard pages automatically adjust to show data for the active context.

Organization Roles

  • Admin - Full access to create, edit, and delete organization resources
  • Member - Read-only access to organization resources

Roadmap

The Roadmap page shows what features have been completed, what is currently being worked on, and what is planned for future quarters. Each item includes a title and description.

Roadmap items are grouped by quarter with status labels:

  • Completed - Features that have been shipped and are available now
  • Working on - Features currently in active development
  • Planned - Features scheduled for future development

Have a feature request or need to prioritize a feature? Reach out to [email protected].

Tips & Tricks

Organizing API Keys

Use descriptive names for easy identification:

  • "Production - Main App"
  • "Development - Local Testing"
  • "CI/CD Pipeline"
  • Avoid: "Key 1", "Test", "New Key"

Monitoring Security Events

Check your activity logs regularly for:

  • Unusual patterns in blocked requests
  • Sudden increases in detected threats
  • Failed webhook deliveries
  • Error trends

Using Request IDs

Request IDs help you:

  • Trace requests from logs to your application
  • Debug issues with specific scans
  • Correlate webhook events with API calls
  • Provide support with specific examples

Provider Key Management

Best practices for provider keys:

  • Use different keys for different environments
  • Add nicknames for easy identification
  • Rotate keys regularly
  • Disable unused keys instead of deleting
  • Test new keys before using in production

Red Team Best Practices

Use the Templates page to:

  • Regularly test your setup against new attack techniques
  • Verify custom policies catch the threats relevant to your use case
  • Keep up with the evolving AI threat landscape
  • Train your team on common attack patterns

Troubleshooting

Can't Create API Key

Problem: Create button does not work or returns an error.

Solution:

  1. Refresh the page
  2. Try a different browser
  3. Clear browser cache
  4. Contact support if issue persists

Provider Key Not Working

Problem: Proxy requests fail after adding provider key.

Solution:

  1. Verify the key is enabled (not disabled)
  2. Check the key is valid in your provider dashboard
  3. For Azure: Verify endpoint URL and deployment name
  4. Test the key directly with the provider first
  5. Check activity logs for specific error messages

Logs Not Showing

Problem: Activity logs appear empty or incomplete.

Solution:

  1. Check filter settings (remove all filters)
  2. Adjust date range to include recent activity
  3. Verify you are making API requests with your API key
  4. Logs may take a few seconds to appear

Webhook Not Receiving Events

Problem: No webhooks being delivered.

Solution:

  1. Verify webhook is enabled (not disabled)
  2. Test the webhook from the dashboard
  3. Check your endpoint is publicly accessible (HTTPS)
  4. Verify your endpoint returns 200 OK
  5. Check activity logs for webhook delivery attempts

Scanner Returns No Results

Problem: Scanning a prompt returns no output.

Solution:

  1. Verify your API key is entered correctly in the scanner
  2. Check that the prompt field is not empty
  3. Try a different scan mode (normal, policy, combined)
  4. Check your network connection

FAQ

How do I create an API key?

Navigate to Dashboard > API Keys and click Add new key. Enter a name, click Create, and copy the key immediately - you will not see it again.

How much does LockLLM cost?

LockLLM uses pay-per-detection pricing:

  • Safe prompts: FREE (no charge)
  • Detected threats: $0.0001-$0.0002 per detection
  • Routing fees: 5% of cost savings (only when routing saves money)
  • BYOK LLM usage: FREE (you pay provider directly)

All users receive free monthly credits based on their tier. View your balance and tier in the Billing section.

Where can I see my credit balance?

Your credit balance and current tier are displayed on the Billing page, including a progress bar showing how close you are to the next tier.

Link to section: How do I add credits?How do I add credits?

Navigate to Billing and click Add to credit balance. Credits are added immediately after purchase.

Link to section: What is the tier system?What is the tier system?

LockLLM has 10 tiers based on monthly spending. Higher tiers unlock more free monthly credits and higher rate limits. Tiers are evaluated on the 1st of each month and automatically adjust based on your actual spending.

Link to section: Where can I see my scan history?Where can I see my scan history?

Go to Analytics > Logs to view your full activity history, including direct API scans, proxy requests, webhook deliveries, scan results, policy violations, routing decisions, and credits used.

Link to section: What is the Analytics Overview page?What is the Analytics Overview page?

The Analytics Overview page provides a visual dashboard of your API activity, costs, threats blocked, performance metrics, and savings from routing and compression over selectable time periods (24h, 7d, 30d).

Link to section: Where can I see my transaction history?Where can I see my transaction history?

Go to Analytics > Usage to see a complete history of all credit transactions including purchases, deductions, refunds, and adjustments with running balance.

Link to section: How long are logs retained?How long are logs retained?

Logs are retained for 30 days, then automatically deleted.

Link to section: What information is logged?What information is logged?

Only metadata is logged: request IDs, timestamps, scan results (scores only), policy violations (names, not content), provider and model info, credits used, and status. Your prompts are never stored or logged.

Link to section: How do I create a custom policy?How do I create a custom policy?

Navigate to Settings > Custom Policies and click Create Policy. Enter a name and description (up to 10,000 characters), then save and enable it. You can also use Browse Templates for pre-built policy starting points.

Link to section: How do I set up smart routing?How do I set up smart routing?

Navigate to Proxy > Custom Routing and click Add Rule. Select task type, complexity tier, target model, and provider. Then use the X-LockLLM-Route-Action: custom header in proxy mode to activate your rules.
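The request below sketches how that header might be attached to a proxy call, using only the standard library. The proxy base URL and the request body shape are assumptions for illustration (the real endpoint is documented in the proxy setup pages); only the `X-LockLLM-Route-Action: custom` header comes from the answer above. The request is constructed but not sent, since sending requires a valid key.

```python
import json
import urllib.request

# Hypothetical proxy endpoint and payload, shown only to illustrate
# where the routing header goes; substitute the documented values.
req = urllib.request.Request(
    "https://proxy.lockllm.com/v1/chat/completions",
    data=json.dumps({
        "model": "auto",
        "messages": [{"role": "user", "content": "Summarize this report."}],
    }).encode(),
    headers={
        "Authorization": "Bearer YOUR_LOCKLLM_API_KEY",
        "Content-Type": "application/json",
        # Activates your custom routing rules for this request.
        "X-LockLLM-Route-Action": "custom",
    },
)
# urllib.request.urlopen(req)  # not executed here; needs a valid key
```

Requests sent without the header fall back to the proxy's default routing behavior.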

Link to section: How do I add a provider API key for proxy mode (BYOK)?How do I add a provider API key for proxy mode (BYOK)?

Go to Proxy > Proxy (BYOK) and click Add API Key. Select provider, enter your API key, optionally add a nickname, and save.

Link to section: Can I have multiple keys for the same provider?Can I have multiple keys for the same provider?

Yes. Add multiple keys with different nicknames (e.g., "Production" and "Development"). Useful for testing, key rotation, and multi-environment setups.

Link to section: What are the Red Team Templates?What are the Red Team Templates?

The Templates page is a library of documented LLM attack techniques organized by category (Jailbreak, Prompt Injection, Data Exfiltration, etc.). Each template shows the attack name and an effectiveness score. Use them to test your security configuration and understand the threat landscape.

Link to section: How do organizations work?How do organizations work?

Create an organization to share resources with your team. Organizations have separate credit balances, policies, routing rules, and BYOK keys from your personal account. Admins can manage all resources while members have read-only access. Switch between personal and organization contexts via the account dropdown.

Link to section: How do I switch between personal and organization contexts?How do I switch between personal and organization contexts?

Use the Account dropdown in the top navigation bar to switch. All dashboard pages automatically update to show data for the active context.

Link to section: How do tier credits work?How do tier credits work?

Each tier provides free monthly credits awarded on the 1st of each month. For example, Tier 3 users receive $2 in free credits every month, while Tier 10 users receive $1,000 per month.

Link to section: How do I upgrade my tier?How do I upgrade my tier?

Tiers upgrade automatically based on your actual monthly spending (credits deducted, not purchases). Spend more during the month and you advance to a higher tier the next month. No manual action needed.
