Security & Privacy

Learn how LockLLM protects your data and maintains your privacy while securing your AI applications.

Data Privacy Guarantee

Your privacy is our top priority. LockLLM is designed with privacy-first principles:

  • Prompts are NOT stored or logged
  • Your data is NOT used to train models
  • Only metadata is logged (scores, IDs, timestamps)
  • Logs are automatically deleted after 30 days
  • Provider API keys are securely stored

What We Don't Store

Your Prompts

We never store the actual content of your prompts. When you scan text:

  • The prompt is sent to our model for analysis
  • The model returns a score
  • The prompt is immediately discarded
  • Only the score and metadata are logged

Example of what we DON'T store:

NOT STORED: "Ignore all previous instructions and..."

Example of what we DO log:

STORED (metadata only):
{
  "request_id": "req_abc123",
  "safe": false,
  "confidence": 0.95,
  "injection": 0.95,
  "prompt_length": 68,
  "timestamp": "2024-01-15T12:00:00Z"
}
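
To make the flow concrete, a scan call from your application looks roughly like the sketch below. The endpoint path, the Bearer auth scheme, and the request field names are illustrative assumptions for this example rather than an exact API reference; the point is that only scores come back, and only metadata is kept.

// Illustrative sketch: endpoint path, auth scheme, and request fields are assumptions
async function scanPrompt(userInput) {
  const response = await fetch('https://api.lockllm.com/v1/scan', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${process.env.LOCKLLM_API_KEY}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({ prompt: userInput })
  })

  // Only scores and metadata are returned; the prompt itself is discarded server-side
  const { safe, confidence, injection } = await response.json()
  return { safe, confidence, injection }
}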

Training Data

Your prompts are never used to train or improve models. Our detection models are trained exclusively on public datasets and research data. Your scans remain completely private.

Personal Information

We don't collect or store:

  • Prompt content
  • User input text
  • IP addresses (except for rate limiting)
  • Browser fingerprints
  • Device information
  • Location data

What We Do Log

Metadata Only

We log minimal metadata for your audit trail and debugging:

Scan Events:

  • Request ID (unique identifier)
  • Timestamp
  • Scan results (scores only)
  • Prompt length (character count)
  • Sensitivity level used
  • Status (success/error)

Proxy Events:

  • Request ID
  • Timestamp
  • Provider name (e.g., "openai")
  • Model name (e.g., "gpt-4")
  • Scan results (scores only)
  • Status

Webhook Events:

  • Request ID
  • Timestamp
  • Webhook URL
  • HTTP status code
  • Delivery status
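
Put together, a proxy log entry contains only fields like the ones listed above. The exact field names and values below are illustrative:

{
  "request_id": "req_def456",
  "timestamp": "2024-01-15T12:00:00Z",
  "provider": "openai",
  "model": "gpt-4",
  "safe": true,
  "confidence": 0.99,
  "injection": 0.01,
  "status": "success"
}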

Log Retention

Logs are automatically deleted after 30 days. This provides a reasonable audit trail while respecting your privacy.

You can view your logs anytime in the dashboard during the 30-day retention period.

API Key Security

LockLLM API Keys

Your LockLLM API keys:

  • Stored securely in our database
  • Hashed before storage (not reversible)
  • Never exposed in logs or responses
  • Only the last 4 characters shown in dashboard
  • Can be revoked instantly

Best practices:

  • Store keys in environment variables
  • Never commit keys to source control
  • Rotate keys every 90 days
  • Use different keys per environment
  • Revoke compromised keys immediately

Provider API Keys (Proxy Mode)

Your provider API keys (OpenAI, Anthropic, etc.):

  • Encrypted at rest with industry-standard encryption
  • Never exposed in API responses
  • Only accessible to your account
  • Can be deleted anytime
  • Stored separately from scan data

How it works:

  1. You add your OpenAI/Anthropic key to dashboard
  2. Key is encrypted before storage
  3. Key is decrypted only when making proxy requests
  4. Key never leaves our secure environment
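
As a rough illustration of this encrypt-before-storage, decrypt-at-use pattern (not LockLLM's actual implementation, which is not exposed), authenticated encryption such as AES-256-GCM works like this:

const crypto = require('crypto')

// Illustrative only: masterKey is a 32-byte secret held outside the database
function encryptProviderKey(plainKey, masterKey) {
  const iv = crypto.randomBytes(12)
  const cipher = crypto.createCipheriv('aes-256-gcm', masterKey, iv)
  const ciphertext = Buffer.concat([cipher.update(plainKey, 'utf8'), cipher.final()])
  return { iv, ciphertext, tag: cipher.getAuthTag() }
}

// Decrypted only at the moment a proxy request is made
function decryptProviderKey({ iv, ciphertext, tag }, masterKey) {
  const decipher = crypto.createDecipheriv('aes-256-gcm', masterKey, iv)
  decipher.setAuthTag(tag)
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString('utf8')
}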

You maintain full control:

  • View which keys you've added
  • Enable/disable keys anytime
  • Delete keys permanently
  • Keys are tied only to your account

HTTPS Everywhere

All LockLLM services use HTTPS:

  • API endpoints (api.lockllm.com)
  • Dashboard (www.lockllm.com)
  • Webhooks (HTTPS required)
  • Browser extension communication

HTTP is not supported for security reasons.

Webhook Security

HTTPS Required

Webhook URLs must use HTTPS. We reject HTTP webhooks to prevent:

  • Man-in-the-middle attacks
  • Data interception
  • Credential theft

Private URLs Blocked

Webhooks to private/localhost URLs are blocked to prevent SSRF (Server-Side Request Forgery) attacks:

  • localhost, 127.0.0.1
  • 10.0.0.0/8 (private networks)
  • 192.168.0.0/16 (private networks)
  • 169.254.0.0/16 (link-local)

Use publicly accessible HTTPS endpoints only.
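
The validation is roughly equivalent to the sketch below (the actual server-side check may differ, for example by resolving DNS before comparing addresses):

// Illustrative sketch of the kind of check applied to webhook URLs
function isAllowedWebhookUrl(rawUrl) {
  const url = new URL(rawUrl)

  // HTTPS is required
  if (url.protocol !== 'https:') return false

  // Block localhost and the private / link-local ranges listed above
  const host = url.hostname
  if (host === 'localhost' || host.startsWith('127.')) return false
  if (host.startsWith('10.')) return false
  if (host.startsWith('192.168.')) return false
  if (host.startsWith('169.254.')) return false

  return true
}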

Signature Verification

Add a secret when creating webhooks to verify authenticity:

const express = require('express')

const app = express()
app.use(express.json())

// The secret you set when creating the webhook (keep it out of source control)
const WEBHOOK_SECRET = process.env.LOCKLLM_WEBHOOK_SECRET

// Your webhook endpoint
app.post('/webhooks/lockllm', (req, res) => {
  const signature = req.headers['x-lockllm-signature']

  // Reject deliveries that don't carry your configured secret
  if (signature !== WEBHOOK_SECRET) {
    return res.status(401).send('Invalid signature')
  }

  // Process webhook
  res.status(200).send('OK')
})

This ensures webhooks are actually from LockLLM.

Browser Extension Security

Minimal Permissions

The extension requests only necessary permissions:

  • Clipboard: For auto-scan feature (optional)
  • Active tab: For right-click context menu
  • Storage: To save API key securely
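
In manifest terms, the permission set maps roughly to the following (a simplified sketch; the published extension's manifest is authoritative):

{
  "permissions": [
    "clipboardRead",
    "activeTab",
    "storage"
  ]
}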

Secure Storage

Your API key is stored using Chrome's secure storage:

  • Encrypted by Chrome
  • Isolated per extension
  • Synced across devices (optional)
  • Removed when extension is uninstalled
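
For reference, extensions keep small secrets in chrome.storage along these lines (a simplified sketch of the pattern, not the extension's exact code; the storage key name is illustrative):

// Stored in Chrome's extension storage, isolated per extension and synced if enabled
function saveApiKey(apiKey) {
  chrome.storage.sync.set({ lockllmApiKey: apiKey })
}

function loadApiKey(callback) {
  chrome.storage.sync.get('lockllmApiKey', ({ lockllmApiKey }) => callback(lockllmApiKey))
}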

No Tracking

The extension does not:

  • Track your browsing history
  • Monitor background activity
  • Collect analytics
  • Share data with third parties

Authentication

How Authentication Works

LockLLM uses API key authentication:

  1. You create an API key in the dashboard
  2. Include it in the Authorization header
  3. We validate the key for each request
  4. Requests without valid keys are rejected

No password exposure: Your account password is never used for API authentication.

Session Security

Dashboard sessions are secured with:

  • Secure session cookies
  • HTTPS-only cookies
  • Session expiration
  • CSRF protection
  • XSS protection

Compliance

Data Processing

Under GDPR, LockLLM acts as a data processor:

  • You control your data
  • We process only as instructed
  • We don't use data for other purposes
  • You can delete your data anytime

Data Location

LockLLM services are hosted on:

  • Cloudflare network (global edge)
  • Data processed in nearest edge location
  • No data residency in specific countries

Data Deletion

You can delete your data anytime:

  • Revoke API keys (stops all access)
  • Delete provider API keys
  • Logs auto-delete after 30 days
  • Account deletion removes all data

Security Best Practices

For API Users

  1. Store keys securely

    // Bad
    const apiKey = 'sk_live_...'
    
    // Good
    const apiKey = process.env.LOCKLLM_API_KEY
    
  2. Rotate keys regularly

    • Every 90 days minimum
    • After team member changes
    • If key may be compromised
  3. Use different keys per environment

    • Development key
    • Staging key
    • Production key
  4. Monitor key usage

    • Check "Last Used" in dashboard
    • Review activity logs
    • Alert on unusual patterns

For Proxy Mode Users

  1. Rotate provider keys

    • Rotate OpenAI/Anthropic keys regularly
    • Use dashboard to update keys
    • Test after rotation
  2. Enable/disable keys for testing

    • Disable instead of deleting
    • Test new keys safely
    • Quick rollback if issues
  3. Use nicknames

    • Identify keys easily
    • Track which key is which
    • Organize by purpose

For Webhook Users

  1. Use signature verification

    • Add a secret to webhooks
    • Verify in your endpoint
    • Reject invalid signatures
  2. Implement idempotency

    // Track delivered request IDs (in-memory here for illustration;
    // use a persistent store such as Redis or a database in production)
    const processed = new Set()
    
    app.post('/webhook', (req, res) => {
      // Deliveries may be retried, so the same request_id can arrive more than once
      if (processed.has(req.body.request_id)) {
        return res.status(200).send('Already processed')
      }
    
      // Process webhook, then record the ID so retries are ignored
      processed.add(req.body.request_id)
      res.status(200).send('OK')
    })
    
  3. Rate limit your endpoint (sketch below)

    • Prevent abuse
    • Handle retry attempts
    • Protect your infrastructure
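
For the rate-limiting step, here is a minimal sketch using the express-rate-limit package (one option among many), assuming the same Express app as in the earlier examples:

const rateLimit = require('express-rate-limit')

// Cap deliveries to the webhook route: generous enough for retries, tight enough to block abuse
app.use('/webhooks/lockllm', rateLimit({ windowMs: 60 * 1000, max: 60 }))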

Incident Response

If Your API Key Is Compromised

  1. Revoke immediately in dashboard
  2. Generate new key
  3. Update applications with new key
  4. Review activity logs for unauthorized use
  5. Contact support if suspicious activity detected

If You Detect Security Issues

We take security seriously. If you find a vulnerability:

  1. Do not exploit or test it
  2. Email [email protected]
  3. Include detailed description
  4. Provide steps to reproduce
  5. Wait for our response before disclosure

We'll investigate and respond within 24 hours.

Transparency

Open About Security

We're transparent about our security practices:

  • This documentation explains how we protect data
  • Privacy policy details data handling
  • Terms of service outline responsibilities
  • Security page lists current practices

Regular Updates

We continuously improve security:

  • Regular security audits
  • Dependency updates
  • Industry best practices
  • Community feedback

No Hidden Tracking

We don't do:

  • Analytics on your prompts
  • Behavioral tracking
  • Third-party data sharing
  • Advertising

FAQ

Is my data secure?

Yes! Your prompts are never stored, only metadata is logged, and logs are deleted after 30 days. All communications use HTTPS and provider API keys are encrypted.

Do you store my prompts?

No. Prompts are analyzed in real-time and immediately discarded. Only scan results (scores) are logged - never the actual prompt content.

Are my prompts used to train models?

No. Your prompts are never used for model training. Our models are trained exclusively on public datasets and research data.

How are API keys protected?

LockLLM API keys are hashed before storage. Provider API keys (for proxy mode) are encrypted at rest. Both are never exposed in logs or responses.

How long are logs kept?

Logs are automatically deleted after 30 days. This provides audit trails while respecting your privacy.

Can I delete my data?

Yes! Revoke your API keys, delete provider keys, and logs auto-delete after 30 days. You can also request account deletion which removes all associated data.

Is LockLLM GDPR compliant?

Yes. We process data as instructed, don't use it for other purposes, provide data deletion, and maintain security measures required by GDPR.

What happens if there's a breach?

We have incident response procedures and will notify affected users immediately. We maintain security monitoring and regular audits to prevent breaches.
