Reverse Proxy Integration

Set up a reverse proxy that routes your LLM traffic through LockLLM, so every request is scanned transparently before it reaches the provider.

Recommended: Use the built-in Proxy Mode instead of setting up your own reverse proxy. It's easier, more secure, and requires zero infrastructure management. This guide is for advanced users who need custom proxy setups.

Overview

A reverse proxy integration allows you to scan all LLM requests automatically without modifying your application code. LockLLM sits between your application and the LLM provider, inspecting and filtering requests in real time.

Why use Proxy Mode instead?

  • No infrastructure to manage
  • Works with official SDKs
  • Automatic updates
  • Better security
  • Zero configuration

Learn about Proxy Mode

When to Use Custom Reverse Proxy

Use a custom reverse proxy only if you need:

  • Complex custom routing logic
  • On-premise deployment requirements
  • Integration with existing proxy infrastructure
  • Advanced caching or rate limiting beyond LockLLM
  • Custom modifications to requests/responses

For most users: Proxy Mode is the better choice.

Supported AI Providers

The reverse proxy integration works with all major LLM providers:

  • OpenAI
  • Anthropic
  • Google Gemini
  • Cohere
  • Mistral AI
  • Azure OpenAI
  • AWS Bedrock
  • And 10+ more

The proxy works by intercepting the standard chat completion format used by most providers.
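
For reference, this is the shape of an OpenAI-style chat completion request that the proxy inspects; the examples below pull out the user-role message content and send it to LockLLM for scanning:

{
  "model": "gpt-4o",
  "messages": [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize this document for me."}
  ]
}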

Architecture

Your App → Reverse Proxy (LockLLM) → LLM Provider (OpenAI, Claude, etc.)

The reverse proxy:

  1. Receives requests from your application
  2. Scans the prompt with LockLLM
  3. Blocks malicious requests or forwards safe ones
  4. Returns the LLM response to your application

Nginx Configuration

Basic Setup

Create an Nginx configuration file. This example uses access_by_lua_block, so it requires OpenResty (or Nginx built with lua-nginx-module) plus the lua-resty-http library:

upstream llm_backend {
    server api.openai.com:443;
}

upstream lockllm_api {
    server api.lockllm.com:443;
}

server {
    listen 443 ssl;
    server_name your-proxy.example.com;

    # Required so the Lua HTTP client can resolve api.lockllm.com
    resolver 1.1.1.1 8.8.8.8;

    # CA bundle used to verify the LockLLM API's TLS certificate from Lua
    # (adjust the path for your distribution)
    lua_ssl_trusted_certificate /etc/ssl/certs/ca-certificates.crt;
    lua_ssl_verify_depth 2;

    ssl_certificate /path/to/cert.pem;
    ssl_certificate_key /path/to/key.pem;

    location /v1/chat/completions {
        # First, scan with LockLLM
        access_by_lua_block {
            local http = require "resty.http"
            local cjson = require "cjson"

            -- Read the request body (nil if nginx buffered it to disk;
            -- raise client_body_buffer_size if you expect large prompts)
            ngx.req.read_body()
            local body = ngx.req.get_body_data()
            if not body then
                ngx.status = 400
                ngx.say(cjson.encode({error = "Unable to read request body"}))
                return ngx.exit(400)
            end
            local data = cjson.decode(body)

            -- Extract user message
            local user_message = ""
            for _, msg in ipairs(data.messages or {}) do
                if msg.role == "user" then
                    user_message = msg.content
                    break
                end
            end

            -- Scan with LockLLM
            local httpc = http.new()
            local res, err = httpc:request_uri("https://api.lockllm.com/v1/scan", {
                method = "POST",
                body = cjson.encode({input = user_message}),
                headers = {
                    ["Content-Type"] = "application/json",
                    ["Authorization"] = "Bearer YOUR_LOCKLLM_API_KEY"
                }
            })

            if res and res.body then
                local scan_result = cjson.decode(res.body)
                if not scan_result.safe then
                    ngx.status = 400
                    ngx.say(cjson.encode({
                        error = "Request blocked by security filter",
                        reason = scan_result.reason
                    }))
                    return ngx.exit(400)
                end
            end
        }

        # Forward to LLM provider
        proxy_pass https://llm_backend;
        proxy_ssl_server_name on;
        proxy_set_header Host api.openai.com;
        proxy_set_header Authorization $http_authorization;
    }
}
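
Once the proxy is running, point your application at it instead of the provider. A minimal sketch using the official openai Node.js SDK (the host name matches the server_name above; any OpenAI-compatible client that lets you override the base URL works the same way):

const OpenAI = require('openai')

// Requests go to your proxy, which scans them and forwards safe ones to OpenAI
const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: 'https://your-proxy.example.com/v1'
})

async function main() {
  const completion = await client.chat.completions.create({
    model: 'gpt-4o',
    messages: [{ role: 'user', content: 'Hello!' }]
  })
  console.log(completion.choices[0].message.content)
}

main()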

Apache Configuration

Using mod_lua

<VirtualHost *:443>
    ServerName your-proxy.example.com

    SSLEngine on
    SSLCertificateFile /path/to/cert.pem
    SSLCertificateKeyFile /path/to/key.pem

    # Required to proxy to an https:// backend
    SSLProxyEngine on

    <Location /v1/chat/completions>
        LuaHookAccessChecker /path/to/lockllm_checker.lua lockllm_check

        ProxyPass https://api.openai.com/v1/chat/completions
        ProxyPassReverse https://api.openai.com/v1/chat/completions
    </Location>
</VirtualHost>

lockllm_checker.lua:

local https = require("ssl.https") -- LuaSec; plain socket.http cannot make HTTPS requests
local ltn12 = require("ltn12")
local json = require("cjson")

function lockllm_check(r)
    -- Read the raw JSON request body (parsebody() only handles form-encoded data)
    local body = r:requestbody()
    if not body or body == "" then
        return apache2.OK
    end
    local data = json.decode(body)

    -- Extract user message
    local user_message = ""
    for _, msg in ipairs(data.messages or {}) do
        if msg.role == "user" then
            user_message = msg.content
            break
        end
    end

    -- Scan with LockLLM
    local payload = json.encode({input = user_message})
    local response_body = {}
    local res, code = https.request{
        url = "https://api.lockllm.com/v1/scan",
        method = "POST",
        headers = {
            ["Content-Type"] = "application/json",
            ["Content-Length"] = tostring(#payload),
            ["Authorization"] = "Bearer YOUR_LOCKLLM_API_KEY"
        },
        source = ltn12.source.string(payload),
        sink = ltn12.sink.table(response_body)
    }

    if res and code == 200 then
        local scan_result = json.decode(table.concat(response_body))
        if not scan_result.safe then
            r.status = 400
            r:puts(json.encode({
                error = "Request blocked by security filter",
                reason = scan_result.reason
            }))
            return apache2.DONE
        end
    end

    return apache2.OK
end

Node.js Proxy

For a lightweight Node.js solution:

const express = require('express')
const { createProxyMiddleware } = require('http-proxy-middleware')
const fetch = require('node-fetch')

const app = express()
app.use(express.json())

// LockLLM scanning middleware
async function lockllmScan(req, res, next) {
  try {
    // Extract user message from request
    const userMessage = req.body.messages?.find(m => m.role === 'user')?.content || ''

    // Scan with LockLLM
    const scanResponse = await fetch('https://api.lockllm.com/v1/scan', {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${process.env.LOCKLLM_API_KEY}`,
        'Content-Type': 'application/json'
      },
      body: JSON.stringify({ input: userMessage })
    })

    const scanResult = await scanResponse.json()

    // Block if malicious
    if (!scanResult.safe) {
      return res.status(400).json({
        error: 'Request blocked by security filter',
        reason: scanResult.reason,
        score: scanResult.score
      })
    }

    next()
  } catch (error) {
    console.error('LockLLM scan error:', error)
    next() // Fail open in case of errors
  }
}

// Apply scanning middleware
app.use('/v1/*', lockllmScan)

// Proxy to OpenAI
app.use('/v1', createProxyMiddleware({
  target: 'https://api.openai.com',
  changeOrigin: true,
  onProxyReq: (proxyReq, req) => {
    // Forward the caller's OpenAI API key
    if (req.headers.authorization) {
      proxyReq.setHeader('Authorization', req.headers.authorization)
    }
    // Re-send the JSON body consumed by express.json(), otherwise the proxied request hangs
    if (req.body && Object.keys(req.body).length > 0) {
      const bodyData = JSON.stringify(req.body)
      proxyReq.setHeader('Content-Length', Buffer.byteLength(bodyData))
      proxyReq.write(bodyData)
    }
  }
}))

app.listen(3000, () => {
  console.log('LockLLM proxy running on port 3000')
})

Docker Compose Setup

Create a complete containerized setup:

version: '3.8'

services:
  nginx-proxy:
    image: openresty/openresty:alpine
    ports:
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
      - ./certs:/etc/nginx/certs:ro
    environment:
      - LOCKLLM_API_KEY=${LOCKLLM_API_KEY}
    depends_on:
      - cache

  cache:
    image: redis:alpine
    ports:
      - "6379:6379"

Caching Scan Results

Improve performance by caching scan results:

const redis = require('redis')
const crypto = require('crypto')

const cache = redis.createClient()
cache.connect() // node-redis v4+ requires an explicit connect before use

async function getCachedScan(text) {
  const hash = crypto.createHash('sha256').update(text).digest('hex')
  const cached = await cache.get(`scan:${hash}`)
  return cached ? JSON.parse(cached) : null
}

async function cacheScanResult(text, result) {
  const hash = crypto.createHash('sha256').update(text).digest('hex')
  await cache.setEx(`scan:${hash}`, 3600, JSON.stringify(result)) // Cache for 1 hour (setEx in node-redis v4+)
}

async function lockllmScanWithCache(text) {
  // Check cache first
  const cached = await getCachedScan(text)
  if (cached) return cached

  // Scan with LockLLM
  const result = await scanWithLockLLM(text)

  // Cache result
  await cacheScanResult(text, result)

  return result
}
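
The snippet above assumes a scanWithLockLLM helper. A minimal sketch, using the same /v1/scan call as the middleware earlier (node-fetch or the global fetch in Node 18+):

async function scanWithLockLLM(text) {
  const response = await fetch('https://api.lockllm.com/v1/scan', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${process.env.LOCKLLM_API_KEY}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({ input: text })
  })
  return response.json()
}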

Configuration Options

Fail-Safe Mode

Choose how to handle LockLLM service errors:

  • Fail Closed: Block all requests if LockLLM is unavailable (more secure)
  • Fail Open: Allow requests if LockLLM is unavailable (better availability)

const FAIL_MODE = 'open' // or 'closed'

try {
  const scanResult = await lockllmScan(text)
  if (scanResult.flagged) {
    return blockRequest()
  }
} catch (error) {
  if (FAIL_MODE === 'closed') {
    return blockRequest('Security service unavailable')
  }
  // Fail open: allow request
  console.warn('LockLLM unavailable, allowing request')
}

Custom Thresholds

Set different thresholds based on request type:

const thresholds = {
  'public': 0.5,    // Lower threshold for public endpoints
  'internal': 0.7,  // Medium threshold for internal users
  'admin': 0.9      // Higher threshold for admin users
}

const threshold = thresholds[req.user?.role] || 0.7
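
How the threshold is applied depends on your blocking logic. A minimal sketch, assuming the scan returns a risk score between 0 and 1 (the score field shown in earlier examples) and that requests at or above the threshold are blocked; scanWithLockLLM is the helper from the caching section:

const userMessage = req.body.messages?.find(m => m.role === 'user')?.content || ''
const scanResult = await scanWithLockLLM(userMessage)

// Block only when the risk score meets or exceeds the per-role threshold
if (scanResult.score >= threshold) {
  return res.status(400).json({ error: 'Request blocked', score: scanResult.score })
}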

Monitoring and Logging

Structured Logging

async function lockllmScan(req, res, next) {
  const startTime = Date.now()

  try {
    const userMessage = req.body.messages?.find(m => m.role === 'user')?.content || ''
    const scanResult = await scan(userMessage)

    console.log(JSON.stringify({
      timestamp: new Date().toISOString(),
      requestId: req.id,
      userId: req.user?.id,
      flagged: scanResult.flagged,
      score: scanResult.score,
      categories: scanResult.categories,
      latency: Date.now() - startTime
    }))

    if (scanResult.flagged) {
      return res.status(400).json({
        error: 'Request blocked',
        reason: scanResult.reason
      })
    }

    next()
  } catch (error) {
    console.error('Scan error:', error)
    next()
  }
}

Metrics

Track important metrics:

  • Scan latency
  • Block rate
  • False positive rate
  • Cache hit rate
  • Error rate
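
One way to expose these from the Node.js proxy is the prom-client package; a sketch with illustrative metric names:

const client = require('prom-client')

const scanLatency = new client.Histogram({
  name: 'lockllm_scan_latency_seconds',
  help: 'Latency of LockLLM scan calls in seconds',
  buckets: [0.05, 0.1, 0.25, 0.5, 1, 2, 5]
})
const blockedRequests = new client.Counter({
  name: 'lockllm_blocked_requests_total',
  help: 'Requests blocked by LockLLM'
})
const scanErrors = new client.Counter({
  name: 'lockllm_scan_errors_total',
  help: 'Failed LockLLM scan calls'
})

// Inside the scanning middleware:
//   scanLatency.observe((Date.now() - startTime) / 1000)
//   if (scanResult.flagged) blockedRequests.inc()
//   on errors: scanErrors.inc()

// Expose metrics for Prometheus to scrape
app.get('/metrics', async (req, res) => {
  res.set('Content-Type', client.register.contentType)
  res.end(await client.register.metrics())
})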

Best Practices

  1. Use HTTPS: Always use SSL/TLS for proxy connections
  2. Enable Caching: Cache scan results to reduce latency and API calls
  3. Implement Retries: Retry failed scans with exponential backoff (see the sketch after this list)
  4. Monitor Performance: Track scan latency and adjust timeouts
  5. Log Blocked Requests: Keep audit logs of blocked requests
  6. Set Appropriate Timeouts: Configure reasonable timeouts (e.g., 5 seconds)
  7. Rate Limiting: Implement client-side rate limiting
  8. Fail-Safe Strategy: Choose appropriate fail-safe behavior for your use case
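
For retries and timeouts (items 3 and 6), a minimal sketch using the global fetch and AbortController available in Node 18+; the retry count and backoff delays are illustrative:

async function scanWithRetry(text, { retries = 3, timeoutMs = 5000 } = {}) {
  for (let attempt = 0; attempt <= retries; attempt++) {
    const controller = new AbortController()
    const timer = setTimeout(() => controller.abort(), timeoutMs)
    try {
      const response = await fetch('https://api.lockllm.com/v1/scan', {
        method: 'POST',
        headers: {
          'Authorization': `Bearer ${process.env.LOCKLLM_API_KEY}`,
          'Content-Type': 'application/json'
        },
        body: JSON.stringify({ input: text }),
        signal: controller.signal
      })
      if (!response.ok) throw new Error(`Scan failed with status ${response.status}`)
      return await response.json()
    } catch (error) {
      if (attempt === retries) throw error
      // Exponential backoff: 500ms, 1s, 2s, ...
      await new Promise(resolve => setTimeout(resolve, 500 * 2 ** attempt))
    } finally {
      clearTimeout(timer)
    }
  }
}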

Troubleshooting

High Latency

  • Enable caching for frequently scanned texts
  • Use connection pooling
  • Deploy the proxy geographically close to the LockLLM API and your LLM provider
  • Increase timeout values

False Positives

  • Adjust threshold values
  • Review blocked requests in logs
  • Contact support with examples

Proxy Not Blocking

  • Verify API key is correct
  • Check proxy logs for errors
  • Test LockLLM API directly
  • Verify request body parsing