JavaScript/TypeScript SDK
Native JavaScript/TypeScript SDK for prompt security with drop-in replacements for OpenAI, Anthropic, and 15 other providers. Supports scan modes, custom policy enforcement, abuse detection, smart routing, PII detection and redaction, response caching, and prompt compression.
Introduction
The LockLLM JavaScript/TypeScript SDK is a production-ready library that provides comprehensive AI security for your LLM applications. Built with TypeScript and designed for modern JavaScript environments, it offers drop-in replacements for popular AI provider SDKs with automatic prompt injection detection and jailbreak prevention.
Key features:
- Real-time security scanning with minimal latency (<250ms)
- Drop-in replacements for 17+ AI providers (custom endpoint support for each)
- Full TypeScript support with comprehensive type definitions
- Zero external dependencies
- Works in Node.js, browsers, and Edge runtimes
- ESM and CommonJS support
- Streaming-compatible with all providers
- Configurable scan modes with custom policy enforcement
- AI abuse detection (opt-in) for bot content, repetition, and resource exhaustion
- PII detection and redaction (opt-in) for names, emails, phone numbers, and more
- Smart routing for automatic model selection by task and complexity
- Response caching for cost optimization
- Universal proxy mode for 200+ models without provider keys
- Completely free with unlimited usage
Use cases:
- Production LLM applications requiring security
- AI agents and autonomous systems
- Chatbots and conversational interfaces
- RAG (Retrieval Augmented Generation) systems
- Multi-tenant AI applications
- Enterprise AI deployments
Installation
Install the SDK using your preferred package manager:
# npm
npm install @lockllm/sdk
# yarn
yarn add @lockllm/sdk
# pnpm (recommended - faster, saves disk space)
pnpm add @lockllm/sdk
# bun
bun add @lockllm/sdk
Requirements:
- Node.js 14.0 or higher
- TypeScript 4.5 or higher (for TypeScript projects)
Peer Dependencies
For provider wrapper functions, install the relevant official SDKs:
# For OpenAI and OpenAI-compatible providers
npm install openai
# For Anthropic Claude
npm install @anthropic-ai/sdk
# For Cohere
npm install cohere-ai
Provider SDK mapping:
- openai - OpenAI, Groq, DeepSeek, Mistral, Perplexity, OpenRouter, Together AI, xAI, Fireworks, Anyscale, Hugging Face, Gemini, Azure, Bedrock, Vertex AI
- @anthropic-ai/sdk - Anthropic Claude
- cohere-ai - Cohere (optional)
Peer dependencies are only required if you use the wrapper functions for those providers.
Quick Start
Step 1: Get Your API Keys
- Visit lockllm.com and create a free account
- Navigate to API Keys section and copy your LockLLM API key
- Go to Proxy Settings and add your provider API keys (OpenAI, Anthropic, etc.)
Your provider keys are encrypted and stored securely. You'll only need your LockLLM API key in your code.
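Because the LockLLM key is the only secret your code needs, it helps to fail fast at startup when it is missing rather than at the first request. A minimal sketch - the requireEnv helper is our own illustration, not part of @lockllm/sdk:

```typescript
// Hypothetical helper - not part of @lockllm/sdk.
// Reads a required environment variable and throws if it is absent or empty.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage at startup, before constructing any clients:
//   const lockllmApiKey = requireEnv('LOCKLLM_API_KEY');
```

Validating once at startup keeps the failure mode obvious instead of surfacing as an authentication error deep inside a request handler.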
Step 2: Basic Usage
Choose from four integration methods: wrapper functions (easiest), direct scan API, official SDKs with custom baseURL, or universal proxy.
Wrapper Functions (Recommended)
The simplest way to add security is to replace your SDK initialization:
import { createOpenAI } from '@lockllm/sdk/wrappers';
// Before:
// const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
// After (one line change, with optional security configuration):
const openai = createOpenAI({
apiKey: process.env.LOCKLLM_API_KEY,
proxyOptions: {
scanAction: 'block', // Block injection attacks
policyAction: 'block' // Block custom policy violations
}
});
// Everything else works exactly the same
const response = await openai.chat.completions.create({
model: "gpt-4",
messages: [{ role: "user", content: userInput }]
});
console.log(response.choices[0].message.content);
Direct Scan API
For manual control and custom workflows:
import { LockLLM } from '@lockllm/sdk';
const lockllm = new LockLLM({
apiKey: process.env.LOCKLLM_API_KEY
});
// Scan user input before processing
const result = await lockllm.scan({
input: userPrompt,
sensitivity: "medium", // "low" | "medium" | "high"
mode: "combined" // Check both core security and custom policies
}, {
scanAction: "block", // Block prompts with injection attacks
policyAction: "allow_with_warning" // Warn on policy violations but don't block
});
if (!result.safe) {
console.log("Security issue detected!");
console.log("Request ID:", result.request_id);
// Check for injection details
if (result.injection) {
console.log("Injection score:", result.injection);
console.log("Confidence:", result.confidence);
}
// Check for custom policy violations
if (result.policy_warnings) {
for (const violation of result.policy_warnings) {
console.log("Policy:", violation.policy_name);
console.log("Categories:", violation.violated_categories.map(c => c.name));
}
}
return { error: "Invalid input detected" };
}
// Safe to proceed
const response = await yourLLMCall(userPrompt);
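For request pipelines (for example, HTTP middleware), the manual check above can be factored into a reusable guard. A sketch under our own naming - ScanFn and guardInput are illustrative, not SDK exports:

```typescript
// Illustrative types - ScanFn and guardInput are not SDK exports.
// ScanFn models the shape of a call like lockllm.scan: it resolves to a
// result carrying at least a `safe` flag and an optional request id.
type ScanFn = (input: string) => Promise<{ safe: boolean; request_id?: string }>;

// Rejects unsafe input before it ever reaches your LLM call.
async function guardInput(scan: ScanFn, input: string): Promise<string> {
  const result = await scan(input);
  if (!result.safe) {
    throw new Error(`Unsafe input rejected (request ${result.request_id ?? 'n/a'})`);
  }
  return input;
}

// Usage sketch:
//   const safePrompt = await guardInput(
//     (text) => lockllm.scan({ input: text, mode: 'combined' }),
//     userPrompt
//   );
```

Centralizing the check this way keeps the rejection logic in one place when several routes accept free-form user input.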
Official SDKs with Proxy
Use official SDKs with LockLLM's proxy:
import OpenAI from 'openai';
import { getProxyURL } from '@lockllm/sdk';
const client = new OpenAI({
apiKey: process.env.LOCKLLM_API_KEY,
baseURL: getProxyURL('openai')
});
// Works exactly like the official SDK
const response = await client.chat.completions.create({
model: "gpt-4",
messages: [{ role: "user", content: "Hello!" }]
});
Universal Proxy
Access 200+ models using LockLLM credits - no provider API keys needed. You can browse all supported models and their IDs in the Model List page in your dashboard. When making requests, you must use the exact model ID shown there (e.g., openai/gpt-4).
import OpenAI from 'openai';
import { getUniversalProxyURL } from '@lockllm/sdk';
const client = new OpenAI({
apiKey: process.env.LOCKLLM_API_KEY,
baseURL: getUniversalProxyURL()
});
const response = await client.chat.completions.create({
model: "openai/gpt-4", // Use the model ID from the Model List page
messages: [{ role: "user", content: "Hello!" }]
});
Provider Wrappers
LockLLM provides drop-in replacements for 17+ AI providers with custom endpoint support. All wrappers work identically to the official SDKs with automatic security scanning.
OpenAI
import { createOpenAI } from '@lockllm/sdk/wrappers';
const openai = createOpenAI({
apiKey: process.env.LOCKLLM_API_KEY
});
// Chat completions
const response = await openai.chat.completions.create({
model: "gpt-4",
messages: [
{ role: "system", content: "You are a helpful assistant." },
{ role: "user", content: userInput }
],
temperature: 0.7,
max_tokens: 1000
});
// Streaming
const stream = await openai.chat.completions.create({
model: "gpt-4",
messages: [{ role: "user", content: "Count from 1 to 10" }],
stream: true
});
for await (const chunk of stream) {
process.stdout.write(chunk.choices[0]?.delta?.content || '');
}
// Function calling
const functionResponse = await openai.chat.completions.create({
model: "gpt-4",
messages: [{ role: "user", content: "What's the weather in Boston?" }],
functions: [{
name: "get_weather",
description: "Get the current weather in a location",
parameters: {
type: "object",
properties: {
location: { type: "string", description: "City name" },
unit: { type: "string", enum: ["celsius", "fahrenheit"] }
},
required: ["location"]
}
}]
});
Configuring Security Behavior
All wrapper functions accept a proxyOptions parameter to control scanning, policy enforcement, abuse detection, routing, and caching at initialization:
import { createOpenAI } from '@lockllm/sdk/wrappers';
const openai = createOpenAI({
apiKey: process.env.LOCKLLM_API_KEY,
proxyOptions: {
// Scan mode: 'normal' (core only), 'policy_only', or 'combined' (default)
scanMode: 'combined',
// Core injection action: 'block' or 'allow_with_warning' (default)
scanAction: 'block',
// Custom policy action: 'block' or 'allow_with_warning' (default)
policyAction: 'block',
// Abuse detection: null (disabled, default), 'block', or 'allow_with_warning'
abuseAction: 'block',
// Smart routing: 'disabled' (default), 'auto', or 'custom'
routeAction: 'auto',
// PII detection: null (disabled, default), 'strip', 'block', or 'allow_with_warning'
piiAction: 'strip',
// Detection sensitivity: 'low', 'medium' (default), or 'high'
sensitivity: 'medium',
// Response caching (default: enabled)
cacheResponse: true,
// Cache TTL in seconds (default: 3600)
cacheTTL: 3600
}
});
// All requests through this client use the configured security settings
const response = await openai.chat.completions.create({
model: "gpt-4",
messages: [{ role: "user", content: userInput }]
});
The proxyOptions parameter works with every wrapper: createOpenAI, createAnthropic, createGroq, createDeepSeek, and all other providers.
Default behavior (when no proxyOptions are provided):
- Scan Mode: combined - checks both core security and custom policies
- Scan Action: allow_with_warning - threats are detected but requests are allowed
- Policy Action: allow_with_warning - violations are detected but requests are allowed
- Abuse Detection: Disabled (opt-in only)
- PII Detection: Disabled (opt-in only)
- Routing: Disabled
- Sensitivity: medium
- Response Caching: Enabled with 1-hour TTL
Anthropic Claude
import { createAnthropic } from '@lockllm/sdk/wrappers';
const anthropic = createAnthropic({
apiKey: process.env.LOCKLLM_API_KEY
});
// Messages API
const message = await anthropic.messages.create({
model: "claude-sonnet-4-5-20250514",
max_tokens: 1024,
messages: [
{ role: "user", content: userInput }
]
});
console.log(message.content[0].text);
// Streaming
const stream = await anthropic.messages.create({
model: "claude-sonnet-4-5-20250514",
max_tokens: 1024,
messages: [{ role: "user", content: "Write a poem" }],
stream: true
});
for await (const event of stream) {
if (event.type === 'content_block_delta' &&
event.delta.type === 'text_delta') {
process.stdout.write(event.delta.text);
}
}
Groq (Fast Inference)
import { createGroq } from '@lockllm/sdk/wrappers';
const groq = createGroq({
apiKey: process.env.LOCKLLM_API_KEY
});
const response = await groq.chat.completions.create({
model: 'llama-3.1-70b-versatile',
messages: [{ role: 'user', content: userInput }]
});
DeepSeek
import { createDeepSeek } from '@lockllm/sdk/wrappers';
const deepseek = createDeepSeek({
apiKey: process.env.LOCKLLM_API_KEY
});
const response = await deepseek.chat.completions.create({
model: 'deepseek-chat',
messages: [{ role: 'user', content: userInput }]
});
Perplexity
import { createPerplexity } from '@lockllm/sdk/wrappers';
const perplexity = createPerplexity({
apiKey: process.env.LOCKLLM_API_KEY
});
const response = await perplexity.chat.completions.create({
model: 'llama-3.1-sonar-huge-128k-online',
messages: [{ role: 'user', content: userInput }]
});
All Supported Providers
Import any wrapper function from @lockllm/sdk/wrappers:
import {
createOpenAI, // OpenAI GPT models
createAnthropic, // Anthropic Claude
createGroq, // Groq LPU inference
createDeepSeek, // DeepSeek models
createPerplexity, // Perplexity (with internet)
createMistral, // Mistral AI
createOpenRouter, // OpenRouter (multi-provider)
createTogether, // Together AI
createXAI, // xAI Grok
createFireworks, // Fireworks AI
createAnyscale, // Anyscale Endpoints
createHuggingFace, // Hugging Face Inference
createGemini, // Google Gemini
createCohere, // Cohere
createAzure, // Azure OpenAI
createBedrock, // AWS Bedrock
createVertexAI // Google Vertex AI
} from '@lockllm/sdk/wrappers';
Provider compatibility:
- 15 providers use the OpenAI-compatible API (require the openai package)
- Anthropic uses its own SDK (requires @anthropic-ai/sdk)
- Cohere uses its own SDK (requires cohere-ai, optional)
- All providers support custom endpoint URLs via the dashboard
Import paths: Wrapper functions can be imported from either path:
- import { createOpenAI } from '@lockllm/sdk/wrappers' - direct import (recommended)
- import { createOpenAI } from '@lockllm/sdk' - re-exported from the main entry point
Generic Factory Functions
For advanced use cases, the SDK provides generic factory functions:
createOpenAICompatible
Create a client for any OpenAI-compatible provider:
import { createOpenAICompatible } from '@lockllm/sdk/wrappers';
// Use with any OpenAI-compatible provider
const client = createOpenAICompatible('groq', {
apiKey: process.env.LOCKLLM_API_KEY,
proxyOptions: {
scanAction: 'block'
}
});
const response = await client.chat.completions.create({
model: 'llama-3.1-70b-versatile',
messages: [{ role: 'user', content: 'Hello!' }]
});
createClient
Create a client for any provider using their native SDK constructor:
import { createClient } from '@lockllm/sdk/wrappers';
import { CohereClient } from 'cohere-ai';
const cohere = createClient('cohere', CohereClient, {
apiKey: process.env.LOCKLLM_API_KEY,
proxyOptions: {
scanMode: 'combined',
policyAction: 'block'
}
});
These are useful when you need full control over the SDK class used or are integrating with a provider that has a unique SDK interface.
Configuration
LockLLM Client Configuration
import { LockLLM, LockLLMConfig } from '@lockllm/sdk';
const config: LockLLMConfig = {
apiKey: process.env.LOCKLLM_API_KEY, // Required
baseURL: "https://api.lockllm.com", // Optional: custom endpoint
timeout: 60000, // Optional: request timeout (ms)
maxRetries: 3 // Optional: max retry attempts
};
const lockllm = new LockLLM(config);
Sensitivity Levels
Control detection strictness with the sensitivity parameter:
// Low sensitivity - fewer false positives
// Use for: creative applications, exploratory use cases
const lowResult = await lockllm.scan({
input: userPrompt,
sensitivity: "low",
mode: "combined"
});
// Medium sensitivity - balanced detection - DEFAULT
// Use for: general user inputs, standard applications
const mediumResult = await lockllm.scan({
input: userPrompt,
sensitivity: "medium",
mode: "combined"
});
// High sensitivity - maximum protection
// Use for: sensitive operations, admin panels, data exports
const highResult = await lockllm.scan({
input: userPrompt,
sensitivity: "high",
mode: "combined"
});
Choosing sensitivity:
- High: Critical systems (admin, payments, sensitive data)
- Medium: General applications (default, recommended)
- Low: Creative tools (writing assistants, brainstorming)
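One way to apply this guidance is to derive the sensitivity from the criticality of the endpoint being served. A sketch - routeSensitivity and the route prefixes are our own illustration, not SDK concepts:

```typescript
type Sensitivity = 'low' | 'medium' | 'high';

// Illustrative mapping - adapt the route prefixes to your application.
function routeSensitivity(path: string): Sensitivity {
  // Critical surfaces: admin panels, payments, data exports
  if (path.startsWith('/admin') || path.startsWith('/billing')) return 'high';
  // Creative surfaces: writing assistants, brainstorming
  if (path.startsWith('/draft') || path.startsWith('/brainstorm')) return 'low';
  // Everything else uses the balanced default
  return 'medium';
}

// Usage sketch:
//   lockllm.scan({
//     input: userPrompt,
//     sensitivity: routeSensitivity(req.path),
//     mode: 'combined'
//   });
```

This keeps the high/medium/low policy in one function instead of scattering literal strings across handlers.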
Custom Endpoints
All providers support custom endpoint URLs for:
- Self-hosted LLM deployments (OpenAI-compatible APIs)
- Azure OpenAI resources with custom endpoints
- Alternative API gateways and reverse proxies
- Private cloud or air-gapped deployments
- Development and staging environments
How it works: Configure custom endpoints in the LockLLM dashboard when adding any provider API key. The SDK wrappers automatically use your custom endpoint URL.
// The wrapper automatically uses your custom endpoint
const azure = createAzure({
apiKey: process.env.LOCKLLM_API_KEY
});
// Your custom Azure endpoint is configured in the dashboard:
// - Endpoint: https://your-resource.openai.azure.com
// - Deployment: gpt-4
// - API Version: 2024-10-21
Example - Self-hosted model: If you have a self-hosted model with an OpenAI-compatible API, configure it in the dashboard using one of the OpenAI-compatible provider wrappers (e.g., OpenAI, Groq) with your custom endpoint URL.
// Use OpenAI wrapper with custom endpoint configured in dashboard
const openai = createOpenAI({
apiKey: process.env.LOCKLLM_API_KEY
});
// Dashboard configuration:
// - Provider: OpenAI
// - Custom Endpoint: https://your-self-hosted-llm.com/v1
// - API Key: your-model-api-key
Request Options
Override configuration per-request using ScanOptions:
// Per-request scan options
const result = await lockllm.scan(
{
input: userPrompt,
sensitivity: "high",
mode: "combined"
},
{
scanAction: "block", // Block core injection attacks
policyAction: "allow_with_warning", // Warn on policy violations
abuseAction: "block", // Opt-in: block abusive content
timeout: 30000, // 30 second timeout for this request
signal: controller.signal // AbortSignal for cancellation
}
);
// Wrapper functions support all official SDK options
const response = await openai.chat.completions.create({
model: "gpt-4",
messages: [{ role: "user", content: userInput }],
// ... any OpenAI options
}, {
timeout: 30000 // Custom timeout
});
Advanced Features
Streaming Responses
All provider wrappers support streaming:
// OpenAI streaming
const stream = await openai.chat.completions.create({
model: "gpt-4",
messages: [{ role: "user", content: "Write a story" }],
stream: true
});
for await (const chunk of stream) {
const content = chunk.choices[0]?.delta?.content;
if (content) {
process.stdout.write(content);
}
}
// Anthropic streaming
const anthropicStream = await anthropic.messages.create({
model: "claude-sonnet-4-5-20250514",
max_tokens: 1024,
messages: [{ role: "user", content: "Write a story" }],
stream: true
});
for await (const event of anthropicStream) {
if (event.type === 'content_block_delta' &&
event.delta.type === 'text_delta') {
process.stdout.write(event.delta.text);
}
}
Scan Modes
Control which security checks are performed:
import { LockLLM } from '@lockllm/sdk';
const lockllm = new LockLLM({
apiKey: process.env.LOCKLLM_API_KEY
});
// Normal mode - core injection detection only
const normalResult = await lockllm.scan({
input: userPrompt,
mode: "normal"
});
// Policy-only mode - check custom policies only
const policyResult = await lockllm.scan({
input: userPrompt,
mode: "policy_only"
});
// Combined mode (default) - check both core security and custom policies
const combinedResult = await lockllm.scan({
input: userPrompt,
mode: "combined"
}, {
scanAction: "block",
policyAction: "allow_with_warning"
});
Scan mode reference:
- normal - core injection detection only. Checks for prompt injection, jailbreaks, instruction override, and similar attacks.
- policy_only - custom policy validation only. Checks content against the custom policies configured in your dashboard.
- combined (default) - both core security and custom policy checks. The most comprehensive protection.
Custom Policy Enforcement
Define custom content policies in the LockLLM dashboard, and the SDK enforces them automatically. When a policy is violated with policyAction: 'block', a PolicyViolationError is thrown. With allow_with_warning, violations appear in the response's policy_warnings field.
import { createOpenAI } from '@lockllm/sdk/wrappers';
import { PolicyViolationError } from '@lockllm/sdk';
const openai = createOpenAI({
apiKey: process.env.LOCKLLM_API_KEY,
proxyOptions: {
scanMode: 'combined',
policyAction: 'block' // Block requests that violate your custom policies
}
});
try {
const response = await openai.chat.completions.create({
model: "gpt-4",
messages: [{ role: "user", content: userInput }]
});
} catch (error) {
if (error instanceof PolicyViolationError) {
console.log("Custom policy violated!");
console.log("Request ID:", error.requestId);
for (const policy of error.violated_policies) {
console.log("Policy:", policy.policy_name);
for (const category of policy.violated_categories) {
console.log(" Category:", category.name);
}
if (policy.violation_details) {
console.log(" Details:", policy.violation_details);
}
}
}
}
Using allow_with_warning mode:
const result = await lockllm.scan(
{ input: userPrompt, mode: "combined" },
{ policyAction: "allow_with_warning" }
);
if (result.policy_warnings && result.policy_warnings.length > 0) {
for (const violation of result.policy_warnings) {
console.log("Warning - Policy:", violation.policy_name);
console.log("Categories:", violation.violated_categories.map(c => c.name));
}
}
AI Abuse Detection
Protect your endpoints from automated misuse. Abuse detection is opt-in - enable it by setting abuseAction to 'block' or 'allow_with_warning'.
When enabled, LockLLM analyzes prompts for bot-generated content, repetitive patterns, and resource exhaustion attempts.
import { createOpenAI } from '@lockllm/sdk/wrappers';
import { AbuseDetectedError } from '@lockllm/sdk';
const openai = createOpenAI({
apiKey: process.env.LOCKLLM_API_KEY,
proxyOptions: {
abuseAction: 'block' // Opt-in: block detected abuse
}
});
try {
const response = await openai.chat.completions.create({
model: "gpt-4",
messages: [{ role: "user", content: userInput }]
});
} catch (error) {
if (error instanceof AbuseDetectedError) {
console.log("Abuse detected!");
console.log("Confidence:", error.abuse_details.confidence);
console.log("Types:", error.abuse_details.abuse_types);
console.log("Recommendation:", error.abuse_details.recommendation);
}
}
Using allow_with_warning mode with the scan API:
const result = await lockllm.scan(
{ input: userPrompt },
{ abuseAction: "allow_with_warning" }
);
if (result.abuse_warnings) {
console.log("Abuse warning - confidence:", result.abuse_warnings.confidence);
console.log("Types:", result.abuse_warnings.abuse_types);
console.log("Indicators:", result.abuse_warnings.indicators);
}
PII Detection and Redaction
Protect sensitive personal information from being sent to AI providers. PII detection is opt-in - enable it by setting piiAction to 'strip', 'block', or 'allow_with_warning'.
When enabled, LockLLM scans prompts for personally identifiable information including names, email addresses, phone numbers, social security numbers, credit card numbers, dates of birth, street addresses, and more.
Using strip mode (recommended for most use cases):
Strip mode automatically replaces detected PII with [TYPE] placeholders before the prompt is forwarded to the AI provider:
import { createOpenAI } from '@lockllm/sdk/wrappers';
const openai = createOpenAI({
apiKey: process.env.LOCKLLM_API_KEY,
proxyOptions: {
piiAction: 'strip' // Opt-in: replace detected PII with [TYPE] placeholders
}
});
// Input: "Contact John Smith at [email protected] or 555-123-4567"
// The AI provider receives: "Contact [First Name] [Last Name] at [Email] or [Phone Number]"
const response = await openai.chat.completions.create({
model: "gpt-4",
messages: [{ role: "user", content: userInput }]
});
Using block mode:
Block mode rejects requests that contain PII, throwing a PIIDetectedError:
import { createOpenAI } from '@lockllm/sdk/wrappers';
import { PIIDetectedError } from '@lockllm/sdk';
const openai = createOpenAI({
apiKey: process.env.LOCKLLM_API_KEY,
proxyOptions: {
piiAction: 'block' // Opt-in: block requests containing PII
}
});
try {
const response = await openai.chat.completions.create({
model: "gpt-4",
messages: [{ role: "user", content: userInput }]
});
} catch (error) {
if (error instanceof PIIDetectedError) {
console.log("PII detected in input!");
console.log("Entity types:", error.pii_details.entity_types);
console.log("Entity count:", error.pii_details.entity_count);
}
}
Using allow_with_warning mode with the scan API:
const result = await lockllm.scan(
{ input: userPrompt, mode: "combined" },
{ piiAction: "allow_with_warning" }
);
if (result.pii_result && result.pii_result.detected) {
console.log("PII found - types:", result.pii_result.entity_types);
console.log("Entity count:", result.pii_result.entity_count);
}
// With strip mode, the scan response includes the redacted text
const stripResult = await lockllm.scan(
{ input: "My email is [email protected]", mode: "combined" },
{ piiAction: "strip" }
);
if (stripResult.pii_result?.redacted_input) {
console.log("Redacted text:", stripResult.pii_result.redacted_input);
// Output: "My email is [Email]"
}
PII action modes:
- Not set / null (default) - PII detection is disabled entirely
- 'strip' - detect PII and replace it with [TYPE] placeholders before forwarding to the AI provider. In scan mode, returns the redacted text in the response.
- 'block' - reject the request if PII is detected. Throws PIIDetectedError in proxy mode.
- 'allow_with_warning' - detect PII and include results in the response, but allow the request through.
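Because 'strip' changes what the provider receives while 'allow_with_warning' only reports, one pragmatic pattern is to warn during development and strip in production. A sketch - the piiActionFor helper is our own; only the action string values come from the SDK:

```typescript
type PiiAction = 'strip' | 'block' | 'allow_with_warning';

// Illustrative helper - choose a stricter PII posture outside development.
function piiActionFor(nodeEnv: string | undefined): PiiAction {
  return nodeEnv === 'production' ? 'strip' : 'allow_with_warning';
}

// Usage sketch:
//   createOpenAI({
//     apiKey: process.env.LOCKLLM_API_KEY,
//     proxyOptions: { piiAction: piiActionFor(process.env.NODE_ENV) }
//   });
```

Warn-only mode in development makes it easy to review what would be redacted before enabling strip mode for real traffic.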
Smart Routing
Let LockLLM automatically select the best model for each request based on detected task type and prompt complexity. This can reduce costs by routing simpler requests to more efficient models.
import { createOpenAI } from '@lockllm/sdk/wrappers';
// Auto routing - LockLLM selects the optimal model
const openai = createOpenAI({
apiKey: process.env.LOCKLLM_API_KEY,
proxyOptions: {
routeAction: 'auto' // Enable automatic model selection
}
});
// Or use custom routing rules configured in the dashboard
const openaiCustom = createOpenAI({
apiKey: process.env.LOCKLLM_API_KEY,
proxyOptions: {
routeAction: 'custom' // Use your dashboard routing rules
}
});
Routing metadata is available in scan responses and proxy response headers:
const result = await lockllm.scan(
{ input: userPrompt, mode: "combined" },
{ scanAction: "allow_with_warning" }
);
if (result.routing) {
console.log("Task type:", result.routing.task_type);
console.log("Complexity:", result.routing.complexity);
console.log("Selected model:", result.routing.selected_model);
console.log("Reasoning:", result.routing.reasoning);
}
Routing modes:
- disabled (default) - no routing; uses the model you specified
- auto - LockLLM analyzes each prompt and selects the optimal model based on task type and complexity
- custom - uses the routing rules you define in the dashboard based on task type and complexity tier
Configure custom routing rules in the dashboard to map specific task types (Code Generation, Summarization, etc.) and complexity levels (low, medium, high) to your preferred models and providers.
Prompt Compression
Opt-in prompt compression reduces token count before sending prompts to AI providers. Three compression methods are available: TOON for structured JSON data, Compact for general text, and Combined for maximum compression.
Scan API usage:
import { LockLLM } from '@lockllm/sdk';
const lockllm = new LockLLM({ apiKey: process.env.LOCKLLM_API_KEY });
// TOON - converts JSON to compact notation (free)
const result = await lockllm.scan(
{ input: '{"users": [{"id": 1, "name": "Alice"}, {"id": 2, "name": "Bob"}]}' },
{ compressionAction: 'toon' }
);
if (result.compression_result) {
console.log(`Method: ${result.compression_result.method}`);
console.log(`Original: ${result.compression_result.original_length} chars`);
console.log(`Compressed: ${result.compression_result.compressed_length} chars`);
console.log(`Ratio: ${result.compression_result.compression_ratio}`);
console.log(`Compressed text: ${result.compression_result.compressed_input}`);
}
// Compact - ML-based compression for any text ($0.0001/use)
const compactResult = await lockllm.scan(
{ input: 'A long prompt with detailed instructions that could be compressed...' },
{ compressionAction: 'compact', compressionRate: 0.5 }
);
if (compactResult.compression_result) {
console.log(`Compressed to ${(compactResult.compression_result.compression_ratio * 100).toFixed(0)}% of original`);
console.log(`Compressed text: ${compactResult.compression_result.compressed_input}`);
}
// Combined - TOON first, then Compact for maximum compression ($0.0001/use)
const combinedResult = await lockllm.scan(
{ input: '{"users": [{"id": 1, "name": "Alice", "email": "[email protected]"}, {"id": 2, "name": "Bob", "email": "[email protected]"}]}' },
{ compressionAction: 'combined', compressionRate: 0.5 }
);
if (combinedResult.compression_result) {
console.log(`Combined compressed to ${(combinedResult.compression_result.compression_ratio * 100).toFixed(0)}% of original`);
}
Proxy mode usage:
import { createOpenAI } from '@lockllm/sdk/wrappers';
// TOON - compress JSON prompts before sending to provider
const openai = createOpenAI({
apiKey: process.env.LOCKLLM_API_KEY,
proxyOptions: {
scanAction: 'block',
compressionAction: 'toon'
}
});
const response = await openai.chat.completions.create({
model: 'gpt-4',
messages: [{ role: 'user', content: JSON.stringify(retrievedDocuments) }]
});
// Compact - ML-based compression with custom rate
const openaiCompact = createOpenAI({
apiKey: process.env.LOCKLLM_API_KEY,
proxyOptions: {
compressionAction: 'compact',
compressionRate: 0.4 // More aggressive compression
}
});
Available compression methods:
- 'toon' - JSON-to-compact notation (free). Converts structured JSON to a token-efficient format with 30-60% token savings. Non-JSON input is returned unchanged; it will not error on free text.
- 'compact' - ML-based compression ($0.0001/use). Works on any text type. Uses token-level classification to remove non-essential tokens while preserving meaning. Configurable compression rate (0.3-0.7, default 0.5).
- 'combined' - TOON + Compact ($0.0001/use). Applies TOON first, then runs Compact on the result for maximum compression. For non-JSON input, only Compact runs. Best when you want maximum token reduction.
- Not set / null (default) - compression is disabled
Compression rate (compact and combined methods):
- 0.3 - most aggressive compression (removes more tokens)
- 0.5 - balanced compression (default)
- 0.7 - conservative compression (preserves more tokens)
Compression is disabled by default. Set compressionAction to enable it. Security scanning always runs on the original uncompressed text. In proxy mode, compression is applied after all security checks pass.
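If the rate comes from user or per-tenant configuration, it is worth clamping it to the documented 0.3-0.7 range before passing it along. A sketch - clampCompressionRate is our own helper, not an SDK export:

```typescript
// Illustrative helper - keeps a configured rate inside the documented 0.3-0.7 range.
function clampCompressionRate(rate: number): number {
  if (Number.isNaN(rate)) return 0.5; // fall back to the balanced default
  return Math.min(0.7, Math.max(0.3, rate));
}

// Usage sketch:
//   lockllm.scan(
//     { input: prompt },
//     { compressionAction: 'compact', compressionRate: clampCompressionRate(tenantRate) }
//   );
```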
See the Prompt Compression page for detailed documentation.
Response Caching
Reduce costs by caching identical LLM responses. Response caching is enabled by default in proxy mode.
const openai = createOpenAI({
apiKey: process.env.LOCKLLM_API_KEY,
proxyOptions: {
cacheResponse: true, // Enabled by default
cacheTTL: 3600 // Cache for 1 hour (default)
}
});
// Disable caching for specific use cases
const openaiNoCache = createOpenAI({
apiKey: process.env.LOCKLLM_API_KEY,
proxyOptions: {
cacheResponse: false // Disable caching
}
});
Caching is automatically disabled for streaming requests.
Proxy Response Metadata
When using proxy mode, parse detailed metadata from response headers including scan results, routing decisions, cache status, and credit usage:
import { parseProxyMetadata, decodeDetailField } from '@lockllm/sdk';
// After making a proxy request with fetch or another HTTP client
const response = await fetch('https://api.lockllm.com/v1/proxy/openai/chat/completions', {
method: 'POST',
headers: {
'Authorization': `Bearer ${process.env.LOCKLLM_API_KEY}`,
'Content-Type': 'application/json'
},
body: JSON.stringify({
model: 'gpt-4',
messages: [{ role: 'user', content: 'Hello!' }]
})
});
const metadata = parseProxyMetadata(response.headers);
console.log("Request ID:", metadata.request_id);
console.log("Safe:", metadata.safe);
console.log("Scan mode:", metadata.scan_mode);
console.log("Provider:", metadata.provider);
console.log("Model:", metadata.model);
// Cache information
if (metadata.cache_status === 'HIT') {
console.log("Cache hit! Age:", metadata.cache_age, "seconds");
console.log("Tokens saved:", metadata.tokens_saved);
console.log("Cost saved:", metadata.cost_saved);
}
// Routing information
if (metadata.routing?.enabled) {
console.log("Task type:", metadata.routing.task_type);
console.log("Complexity:", metadata.routing.complexity);
console.log("Selected model:", metadata.routing.selected_model);
}
// PII detection information
if (metadata.pii_detected?.detected) {
console.log("PII entity types:", metadata.pii_detected.entity_types);
console.log("PII entity count:", metadata.pii_detected.entity_count);
console.log("PII action applied:", metadata.pii_detected.action);
}
// Credit tracking
console.log("Credits deducted:", metadata.credits_deducted);
console.log("Balance after:", metadata.balance_after);
// Decode base64 detail fields if present
if (metadata.scan_warning?.detail) {
const details = decodeDetailField(metadata.scan_warning.detail);
console.log("Scan details:", details);
}
Link to section: Building Custom IntegrationsBuilding Custom Integrations
Use the utility functions to build custom integrations with LockLLM's proxy:
Link to section: getAllProxyURLsgetAllProxyURLs
Get proxy URLs for every supported provider:
import { getAllProxyURLs } from '@lockllm/sdk';
const urls = getAllProxyURLs();
// {
// openai: 'https://api.lockllm.com/v1/proxy/openai',
// anthropic: 'https://api.lockllm.com/v1/proxy/anthropic',
// gemini: 'https://api.lockllm.com/v1/proxy/gemini',
// groq: 'https://api.lockllm.com/v1/proxy/groq',
// ... all 17 providers
// }
// Use with any HTTP client
const response = await fetch(`${urls.openai}/chat/completions`, {
method: 'POST',
headers: {
'Authorization': `Bearer ${process.env.LOCKLLM_API_KEY}`,
'Content-Type': 'application/json'
},
body: JSON.stringify({
model: 'gpt-4',
messages: [{ role: 'user', content: 'Hello!' }]
})
});
Link to section: buildLockLLMHeadersbuildLockLLMHeaders
Build LockLLM security headers for custom HTTP requests:
import { buildLockLLMHeaders } from '@lockllm/sdk';
const headers = buildLockLLMHeaders({
scanMode: 'combined',
scanAction: 'block',
policyAction: 'block',
abuseAction: 'allow_with_warning',
piiAction: 'strip',
routeAction: 'auto',
sensitivity: 'high',
cacheResponse: false, // Disable response caching (enabled by default)
cacheTTL: 1800 // Only applies when caching is enabled
});
// Use with fetch, axios, or any HTTP client
const response = await fetch('https://api.lockllm.com/v1/proxy/openai/chat/completions', {
method: 'POST',
headers: {
'Authorization': `Bearer ${process.env.LOCKLLM_API_KEY}`,
'Content-Type': 'application/json',
...headers // Spread LockLLM security headers
},
body: JSON.stringify({
model: 'gpt-4',
messages: [{ role: 'user', content: userInput }]
})
});
This is useful when integrating with HTTP clients like axios, got, or node-fetch instead of using the wrapper functions.
Link to section: Function CallingFunction Calling
OpenAI function calling works seamlessly:
const response = await openai.chat.completions.create({
model: "gpt-4",
messages: [{ role: "user", content: "What's the weather in Boston?" }],
functions: [{
name: "get_weather",
description: "Get the current weather in a location",
parameters: {
type: "object",
properties: {
location: { type: "string", description: "City name" },
unit: { type: "string", enum: ["celsius", "fahrenheit"] }
},
required: ["location"]
}
}],
function_call: "auto"
});
if (response.choices[0].message.function_call) {
const functionCall = response.choices[0].message.function_call;
const args = JSON.parse(functionCall.arguments);
// Call your function with the parsed arguments
const weather = await getWeather(args.location, args.unit);
// Send function result back to LLM
const finalResponse = await openai.chat.completions.create({
model: "gpt-4",
messages: [
{ role: "user", content: "What's the weather in Boston?" },
response.choices[0].message,
{ role: "function", name: "get_weather", content: JSON.stringify(weather) }
]
});
}
Link to section: Multi-Turn ConversationsMulti-Turn Conversations
Maintain conversation context with message history:
const messages = [
{ role: "system", content: "You are a helpful assistant." },
{ role: "user", content: "What is the capital of France?" }
];
let response = await openai.chat.completions.create({
model: "gpt-4",
messages
});
// Add assistant response to history
messages.push({
role: "assistant",
content: response.choices[0].message.content
});
// Continue conversation
messages.push({
role: "user",
content: "What is its population?"
});
response = await openai.chat.completions.create({
model: "gpt-4",
messages
});
Link to section: Express.js MiddlewareExpress.js Middleware
Integrate with Express.js for automatic request scanning:
import express from 'express';
import { LockLLM, PolicyViolationError } from '@lockllm/sdk';
import { createOpenAI } from '@lockllm/sdk/wrappers';
const app = express();
const lockllm = new LockLLM({
apiKey: process.env.LOCKLLM_API_KEY
});
const openai = createOpenAI({
apiKey: process.env.LOCKLLM_API_KEY
});
app.use(express.json());
// Middleware to scan all request bodies
app.use(async (req, res, next) => {
if (req.body && req.body.prompt) {
try {
const result = await lockllm.scan(
{
input: req.body.prompt,
sensitivity: "medium",
mode: "combined"
},
{
scanAction: "block",
policyAction: "block"
}
);
if (!result.safe) {
return res.status(400).json({
error: "Security issue detected",
request_id: result.request_id
});
}
// Attach scan result to request for logging
req.scanResult = result;
next();
} catch (error) {
if (error instanceof PolicyViolationError) {
return res.status(403).json({
error: "Content policy violated"
});
}
next(error);
}
} else {
next();
}
});
// Your routes
app.post('/api/chat', async (req, res) => {
// Request body already scanned by middleware
const response = await openai.chat.completions.create({
model: "gpt-4",
messages: [{ role: "user", content: req.body.prompt }]
});
res.json(response);
});
app.listen(3000);
Link to section: Next.js IntegrationNext.js Integration
Link to section: App Router (Server Actions)App Router (Server Actions)
Server actions serialize errors across the network boundary, so instanceof checks are not available on the client side. Catch typed errors inside the action (where instanceof still works) and return plain, serializable objects; if you need to distinguish errors on the client, compare error.code strings instead (see the Error Codes Reference):
// app/actions.ts
'use server';
import { createOpenAI } from '@lockllm/sdk/wrappers';
import {
PromptInjectionError,
PolicyViolationError,
PIIDetectedError,
InsufficientCreditsError,
} from '@lockllm/sdk';
const openai = createOpenAI({
apiKey: process.env.LOCKLLM_API_KEY!
});
export async function chatAction(userMessage: string) {
try {
const response = await openai.chat.completions.create({
model: "gpt-4",
messages: [{ role: "user", content: userMessage }]
});
return {
success: true,
message: response.choices[0].message.content
};
} catch (error) {
if (error instanceof PromptInjectionError) {
return { success: false, error: "Invalid input detected" };
}
if (error instanceof PolicyViolationError) {
return { success: false, error: "Content policy violated" };
}
if (error instanceof InsufficientCreditsError) {
return { success: false, error: "Insufficient credits" };
}
if (error instanceof PIIDetectedError) {
return { success: false, error: "Personal information detected in input" };
}
throw error;
}
}
Link to section: API RoutesAPI Routes
// app/api/chat/route.ts
import { NextRequest, NextResponse } from 'next/server';
import { createOpenAI } from '@lockllm/sdk/wrappers';
import {
PromptInjectionError,
PolicyViolationError,
PIIDetectedError,
InsufficientCreditsError,
} from '@lockllm/sdk';
const openai = createOpenAI({
apiKey: process.env.LOCKLLM_API_KEY!
});
export async function POST(request: NextRequest) {
const { message } = await request.json();
try {
const response = await openai.chat.completions.create({
model: "gpt-4",
messages: [{ role: "user", content: message }]
});
return NextResponse.json({
message: response.choices[0].message.content
});
} catch (error) {
if (error instanceof PromptInjectionError) {
return NextResponse.json(
{ error: "Invalid input detected" },
{ status: 400 }
);
}
if (error instanceof PolicyViolationError) {
return NextResponse.json(
{ error: "Content policy violated" },
{ status: 403 }
);
}
if (error instanceof InsufficientCreditsError) {
return NextResponse.json(
{ error: "Insufficient credits" },
{ status: 402 }
);
}
if (error instanceof PIIDetectedError) {
return NextResponse.json(
{ error: "Personal information detected in input" },
{ status: 403 }
);
}
throw error;
}
}
Link to section: Browser UsageBrowser Usage
The SDK works in browsers with proper CORS configuration:
<!DOCTYPE html>
<html>
<head>
<title>LockLLM Browser Example</title>
</head>
<body>
<input type="text" id="prompt" placeholder="Enter your prompt">
<button onclick="sendMessage()">Send</button>
<div id="response"></div>
<script type="module">
import { LockLLM } from 'https://esm.sh/@lockllm/sdk';
const lockllm = new LockLLM({
apiKey: 'your-lockllm-api-key'
});
window.sendMessage = async function() {
const prompt = document.getElementById('prompt').value;
try {
const result = await lockllm.scan({
input: prompt,
sensitivity: "medium"
});
if (result.safe) {
document.getElementById('response').textContent = "Safe to proceed!";
} else {
document.getElementById('response').textContent = "Malicious input detected!";
}
} catch (error) {
console.error('Error:', error);
}
};
</script>
</body>
</html>
CORS Note: The LockLLM API supports CORS for browser requests. Keep your API key secure: for production, proxy requests through your backend rather than shipping the key in client-side code.
Link to section: Error HandlingError Handling
LockLLM provides typed errors for comprehensive error handling:
Link to section: Error TypesError Types
import {
LockLLMError, // Base error class
AuthenticationError, // 401 - Invalid API key
RateLimitError, // 429 - Rate limit exceeded
PromptInjectionError, // 400 - Malicious input detected
PolicyViolationError, // 403 - Custom policy violated
AbuseDetectedError, // 400 - AI abuse detected
PIIDetectedError, // 403 - PII detected in input
InsufficientCreditsError, // 402 - Balance too low
UpstreamError, // 502 - Provider API error
ConfigurationError, // 400 - Invalid configuration
NetworkError // 0 - Network/connection error
} from '@lockllm/sdk';
Link to section: Complete Error HandlingComplete Error Handling
import { createOpenAI } from '@lockllm/sdk/wrappers';
import {
PromptInjectionError,
PolicyViolationError,
AbuseDetectedError,
PIIDetectedError,
InsufficientCreditsError,
AuthenticationError,
RateLimitError,
UpstreamError,
ConfigurationError,
NetworkError
} from '@lockllm/sdk';
const openai = createOpenAI({
apiKey: process.env.LOCKLLM_API_KEY
});
try {
const response = await openai.chat.completions.create({
model: "gpt-4",
messages: [{ role: "user", content: userInput }]
});
console.log(response.choices[0].message.content);
} catch (error) {
if (error instanceof PromptInjectionError) {
// Security threat detected
console.error("Malicious input detected!");
console.log("Injection score:", error.scanResult.injection);
console.log("Confidence:", error.scanResult.confidence);
console.log("Request ID:", error.requestId);
// Log security incident
await logSecurityEvent({
type: 'prompt_injection',
confidence: error.scanResult.confidence,
requestId: error.requestId,
timestamp: new Date(),
userInput: "[REDACTED]" // Never log actual malicious content
});
// Return user-friendly error
return {
error: "Your input could not be processed for security reasons."
};
} else if (error instanceof PolicyViolationError) {
// Custom policy violation
console.error("Content policy violated!");
console.log("Request ID:", error.requestId);
for (const policy of error.violated_policies) {
console.log("Policy:", policy.policy_name);
for (const category of policy.violated_categories) {
console.log(" Category:", category.name);
}
}
return {
error: "Your input violates our content policies."
};
} else if (error instanceof AbuseDetectedError) {
// AI abuse detected
console.error("Abuse detected!");
console.log("Confidence:", error.abuse_details.confidence);
console.log("Types:", error.abuse_details.abuse_types);
console.log("Recommendation:", error.abuse_details.recommendation);
return {
error: "This request has been flagged. Please try again later."
};
} else if (error instanceof PIIDetectedError) {
// Personal information detected in input
console.error("PII detected in input!");
console.log("Entity types:", error.pii_details.entity_types);
console.log("Entity count:", error.pii_details.entity_count);
return {
error: "Personal information detected. Please remove sensitive data and try again."
};
} else if (error instanceof InsufficientCreditsError) {
// Balance too low
console.error("Insufficient credits");
console.log("Current balance:", error.current_balance);
console.log("Estimated cost:", error.estimated_cost);
return {
error: "Insufficient credits. Please top up your balance."
};
} else if (error instanceof AuthenticationError) {
console.error("Invalid API key");
// Check your LOCKLLM_API_KEY environment variable
} else if (error instanceof RateLimitError) {
console.error("Rate limit exceeded");
console.log("Retry after (ms):", error.retryAfter);
// Wait and retry
if (error.retryAfter) {
await new Promise(resolve => setTimeout(resolve, error.retryAfter));
// Retry request...
}
} else if (error instanceof UpstreamError) {
console.error("Provider API error");
console.log("Provider:", error.provider);
console.log("Status:", error.upstreamStatus);
console.log("Message:", error.message);
// Handle provider-specific errors
if (error.provider === 'openai' && error.upstreamStatus === 429) {
// OpenAI rate limit
}
} else if (error instanceof ConfigurationError) {
console.error("Configuration error:", error.message);
// Check provider key is added in dashboard
} else if (error instanceof NetworkError) {
console.error("Network error:", error.message);
// Check internet connection, firewall, etc.
}
}
Link to section: Exponential BackoffExponential Backoff
Implement exponential backoff for transient errors:
import { RateLimitError, NetworkError } from '@lockllm/sdk';
async function callWithBackoff<T>(fn: () => Promise<T>, maxRetries = 5): Promise<T> {
for (let attempt = 0; attempt < maxRetries; attempt++) {
try {
return await fn();
} catch (error) {
if (error instanceof RateLimitError ||
error instanceof NetworkError) {
// Prefer the server-provided retry delay, otherwise back off exponentially (capped at 30s)
const delay = error instanceof RateLimitError && error.retryAfter
? error.retryAfter
: Math.min(1000 * Math.pow(2, attempt), 30000);
console.log(`Retry attempt ${attempt + 1} after ${delay}ms`);
await new Promise(resolve => setTimeout(resolve, delay));
} else {
throw error;
}
}
}
throw new Error('Max retries exceeded');
}
// Usage
const response = await callWithBackoff(() =>
openai.chat.completions.create({
model: "gpt-4",
messages: [{ role: "user", content: userInput }]
})
);
Link to section: TypeScript SupportTypeScript Support
Full TypeScript support with comprehensive type definitions:
Link to section: Type DefinitionsType Definitions
All types can be imported from the main entry point:
import type {
// Core types
LockLLMConfig,
RequestOptions,
Provider,
ProviderName,
// Scan types
ScanRequest,
ScanResponse,
ScanOptions,
ScanResult,
Sensitivity,
ScanMode,
ScanAction,
RouteAction,
PIIAction,
PolicyViolation,
ScanWarning,
AbuseWarning,
PIIResult,
// Proxy types
ProxyRequestOptions,
ProxyResponseMetadata,
// Error data types (for advanced error handling)
LockLLMErrorData,
PromptInjectionErrorData,
PolicyViolationErrorData,
AbuseDetectedErrorData,
PIIDetectedErrorData,
InsufficientCreditsErrorData,
} from '@lockllm/sdk';
Wrapper configuration types are also available:
import type {
CreateOpenAIConfig,
CreateAnthropicConfig,
GenericClientConfig,
} from '@lockllm/sdk/wrappers';
Quick reference for key type shapes:
// Configuration
interface LockLLMConfig {
apiKey: string
baseURL?: string
timeout?: number
maxRetries?: number
}
// Scan request
interface ScanRequest {
input: string
sensitivity?: 'low' | 'medium' | 'high'
mode?: 'normal' | 'policy_only' | 'combined'
chunk?: boolean
}
// Scan response
interface ScanResponse {
safe: boolean
label: 0 | 1
confidence?: number
injection?: number
policy_confidence?: number
sensitivity: 'low' | 'medium' | 'high'
request_id: string
usage: {
requests: number
input_chars: number
}
debug?: {
duration_ms: number
inference_ms: number
mode: 'single' | 'chunked'
}
policy_warnings?: PolicyViolation[]
scan_warning?: ScanWarning
abuse_warnings?: AbuseWarning
routing?: {
enabled: boolean
task_type: string
complexity: number
selected_model?: string
reasoning?: string
estimated_cost?: number
}
pii_result?: PIIResult
}
See the API Reference section below for complete interface definitions.
Link to section: Type InferenceType Inference
TypeScript automatically infers types:
const lockllm = new LockLLM({ apiKey: '...' });
// Return type automatically inferred as Promise<ScanResponse>
const result = await lockllm.scan({
input: 'test',
sensitivity: 'medium'
});
// TypeScript knows these properties exist
console.log(result.safe); // boolean
console.log(result.injection); // number | undefined (not present in policy_only mode)
console.log(result.request_id); // string
Link to section: Generic TypesGeneric Types
Provider wrappers create real SDK instances. For the best IDE experience with full autocomplete and type checking, add explicit type annotations:
import { createOpenAI, createAnthropic } from '@lockllm/sdk/wrappers';
import type OpenAI from 'openai';
import type Anthropic from '@anthropic-ai/sdk';
// Add type annotations for full IDE autocomplete and type safety
const openai: OpenAI = createOpenAI({ apiKey: '...' });
const anthropic: Anthropic = createAnthropic({ apiKey: '...' });
// Full autocomplete for all methods
const response = await openai.chat.completions.create({
model: "gpt-4", // TypeScript validates model names
messages: [ // TypeScript validates message structure
{ role: "user", content: "Hello" }
]
});
Tip: Adding the type annotation (e.g., const openai: OpenAI = ...) enables full autocomplete and type checking from the provider's SDK types. Without the annotation, the client still works correctly but TypeScript treats it as untyped.
Link to section: IDE AutocompleteIDE Autocomplete
TypeScript provides full IDE autocomplete:
const lockllm = new LockLLM({ apiKey: '...' });
// IDE suggests: scan(...)
await lockllm.s // <-- autocomplete suggestions appear
// IDE suggests: input, sensitivity, mode, chunk
await lockllm.scan({ i // <-- autocomplete for properties
// IDE suggests: "low", "medium", "high"
await lockllm.scan({
input: "test",
sensitivity: "m // <-- autocomplete for sensitivity values
});
Link to section: API ReferenceAPI Reference
Link to section: ClassesClasses
Link to section: LockLLMLockLLM
Main client class for scanning prompts.
class LockLLM {
constructor(config: LockLLMConfig)
async scan(request: ScanRequest, options?: ScanOptions): Promise<ScanResponse>
getConfig(): Readonly<Required<LockLLMConfig>>
}
Link to section: InterfacesInterfaces
Link to section: LockLLMConfigLockLLMConfig
interface LockLLMConfig {
apiKey: string // Your LockLLM API key (required)
baseURL?: string // Custom API endpoint (optional)
timeout?: number // Request timeout in ms (default: 60000)
maxRetries?: number // Max retry attempts (default: 3)
}
Link to section: RequestOptionsRequestOptions
Base options for HTTP requests. Extended by ProxyRequestOptions.
interface RequestOptions {
headers?: Record<string, string> // Custom headers to include
timeout?: number // Request timeout in ms
signal?: AbortSignal // Abort signal for cancelling requests
}
Link to section: ScanRequestScanRequest
interface ScanRequest {
input: string // Text to scan (required)
sensitivity?: 'low' | 'medium' | 'high' // Detection level (default: 'medium')
mode?: 'normal' | 'policy_only' | 'combined' // Scan mode (default: 'combined')
chunk?: boolean // Force chunked analysis for large inputs
}
chunk parameter: When set to true, forces the input to be split into smaller segments for analysis. This is useful for very large prompts where single-pass analysis may be less effective. The system automatically determines appropriate chunk sizes. In most cases you do not need to set this; the API handles large inputs automatically.
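Chunking happens server-side, but the idea can be illustrated with a simple sketch — chunkInput is illustrative only, not an SDK function, and the 1000-character segment size is arbitrary (the service chooses its own sizes):

```typescript
// Illustrative only: split a long prompt into fixed-size segments, roughly
// what chunk: true asks the API to do server-side.
function chunkInput(text: string, maxLen = 1000): string[] {
  const chunks: string[] = [];
  for (let i = 0; i < text.length; i += maxLen) {
    chunks.push(text.slice(i, i + maxLen));
  }
  return chunks;
}

console.log(chunkInput('a'.repeat(2500)).length); // 3 segments
```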
Link to section: ScanOptionsScanOptions
interface ScanOptions {
scanAction?: 'block' | 'allow_with_warning' // Core injection behavior (default: 'allow_with_warning')
policyAction?: 'block' | 'allow_with_warning' // Custom policy behavior (default: 'allow_with_warning')
abuseAction?: 'block' | 'allow_with_warning' | null // Abuse detection (default: null - disabled)
piiAction?: 'strip' | 'block' | 'allow_with_warning' | null // PII detection (default: null - disabled)
compressionAction?: 'toon' | 'compact' | 'combined' | null // Prompt compression (default: null - disabled)
compressionRate?: number // Compression rate for compact/combined (0.3-0.7, default: 0.5)
headers?: Record<string, string> // Custom headers
timeout?: number // Request timeout in ms
signal?: AbortSignal // Abort signal for cancellation
}
Link to section: ScanResponseScanResponse
interface ScanResponse {
request_id: string // Unique request identifier
safe: boolean // true if safe, false if malicious
label: 0 | 1 // 0=safe, 1=malicious
sensitivity: Sensitivity // Sensitivity level used
confidence?: number // Core injection confidence (not in policy_only mode)
injection?: number // Injection risk score (not in policy_only mode)
policy_confidence?: number // Policy check confidence (in policy_only/combined modes)
usage: {
requests: number // Inference requests used
input_chars: number // Characters processed
}
debug?: {
duration_ms: number // Total processing time
inference_ms: number // ML inference time
mode: 'single' | 'chunked'
}
policy_warnings?: PolicyViolation[] // Custom policy violations
scan_warning?: ScanWarning // Core injection warning details
abuse_warnings?: AbuseWarning // Abuse detection results
routing?: { // Routing metadata (when routing enabled)
enabled: boolean
task_type: string
complexity: number
selected_model?: string
reasoning?: string
estimated_cost?: number
}
pii_result?: PIIResult // PII detection results (when PII detection enabled)
compression_result?: CompressionResult // Compression results (when compression enabled)
}
Link to section: PolicyViolationPolicyViolation
interface PolicyViolation {
policy_name: string // Policy name
violated_categories: Array<{
name: string // Category name
}>
violation_details?: string // Specific violation details
}
Link to section: ScanWarningScanWarning
interface ScanWarning {
message: string // Warning message
injection_score: number // Injection score (0-100)
confidence: number // Confidence score (0-100)
label: 0 | 1 // Safety label
}
Link to section: AbuseWarningAbuseWarning
interface AbuseWarning {
detected: true // Always true when present
confidence: number // Overall confidence (0-100)
abuse_types: string[] // Types of abuse detected
indicators: {
bot_score: number // Bot-generated content score (0-100)
repetition_score: number // Repetition detection score (0-100)
resource_score: number // Resource exhaustion score (0-100)
pattern_score: number // Pattern analysis score (0-100)
}
recommendation?: string // Recommended mitigation action
}
Link to section: PIIResultPIIResult
interface PIIResult {
detected: boolean // Whether PII was detected
entity_types: string[] // Types of PII entities found (e.g., "Email", "Phone Number")
entity_count: number // Total number of PII entities found
redacted_input?: string // Redacted input text (only when piiAction is "strip")
}
Link to section: CompressionResultCompressionResult
interface CompressionResult {
method: 'toon' | 'compact' | 'combined' // Compression method used
compressed_input: string // The compressed text
original_length: number // Original text length in characters
compressed_length: number // Compressed text length in characters
compression_ratio: number // Ratio of compressed/original (0-1, lower = better)
}
Link to section: ScanResultScanResult
Detailed scan result data included in PromptInjectionError.scanResult when a prompt injection is blocked.
interface ScanResult {
safe: boolean // Whether the prompt is safe
label: 0 | 1 // 0=safe, 1=malicious
confidence: number // Detection confidence (0-100)
injection: number // Injection risk score (0-100)
sensitivity: 'low' | 'medium' | 'high' // Sensitivity level used
}
Link to section: ProxyRequestOptionsProxyRequestOptions
interface ProxyRequestOptions {
scanMode?: 'normal' | 'policy_only' | 'combined' // Scan mode (default: 'combined')
scanAction?: 'block' | 'allow_with_warning' // Core injection action
policyAction?: 'block' | 'allow_with_warning' // Policy violation action
abuseAction?: 'block' | 'allow_with_warning' | null // Abuse detection (null = disabled)
routeAction?: 'disabled' | 'auto' | 'custom' // Routing mode (default: 'disabled')
piiAction?: 'strip' | 'block' | 'allow_with_warning' | null // PII detection (null = disabled)
compressionAction?: 'toon' | 'compact' | 'combined' | null // Prompt compression (null = disabled)
compressionRate?: number // Compression rate for compact/combined (0.3-0.7, default: 0.5)
sensitivity?: 'low' | 'medium' | 'high' // Detection sensitivity (default: 'medium')
cacheResponse?: boolean // Response caching (default: true)
cacheTTL?: number // Cache TTL in seconds (default: 3600)
headers?: Record<string, string> // Custom headers
timeout?: number // Request timeout in ms
signal?: AbortSignal // Abort signal
}
Link to section: ProxyResponseMetadataProxyResponseMetadata
interface ProxyResponseMetadata {
request_id: string
scanned: boolean
safe: boolean
scan_mode: 'normal' | 'policy_only' | 'combined'
credits_mode: 'lockllm_credits' | 'byok'
provider: string
model?: string
sensitivity?: string
label?: number
blocked?: boolean
scan_warning?: {
injection_score: number
confidence: number
detail: string
}
policy_warnings?: {
count: number
confidence: number
detail: string
}
abuse_detected?: {
confidence: number
types: string
detail: string
}
pii_detected?: {
detected: boolean
entity_types: string
entity_count: number
action: string
}
compression?: {
method: string // "toon", "compact", or "combined"
applied: boolean // Whether compression was applied
ratio: number // Compression ratio (compressed/original)
}
routing?: {
enabled: boolean
task_type: string
complexity: number
selected_model: string
routing_reason: string
original_provider: string
original_model: string
estimated_savings: number
estimated_original_cost: number
estimated_routed_cost: number
estimated_input_tokens: number
estimated_output_tokens: number
routing_fee_reason: string
}
credits_reserved?: number
routing_fee_reserved?: number
credits_deducted?: number
balance_after?: number
cache_status?: 'HIT' | 'MISS'
cache_age?: number
tokens_saved?: number
cost_saved?: number
scan_detail?: any
policy_detail?: any
abuse_detail?: any
}
Link to section: GenericClientConfigGenericClientConfig
Configuration for all OpenAI-compatible provider wrappers (createGroq, createDeepSeek, createMistral, etc.) and the generic factory functions.
interface GenericClientConfig {
apiKey: string // Your LockLLM API key (required)
baseURL?: string // Override proxy URL for the provider
proxyOptions?: ProxyRequestOptions // Security and routing configuration
[key: string]: any // Provider-specific options passed through to the SDK
}
Link to section: CreateOpenAIConfigCreateOpenAIConfig
Configuration for the createOpenAI wrapper.
interface CreateOpenAIConfig {
apiKey: string // Your LockLLM API key (required)
baseURL?: string // Custom proxy URL (default: https://api.lockllm.com/v1/proxy/openai)
proxyOptions?: ProxyRequestOptions // Security and routing configuration
[key: string]: any // Other OpenAI client options passed through
}
Link to section: CreateAnthropicConfigCreateAnthropicConfig
Configuration for the createAnthropic wrapper.
interface CreateAnthropicConfig {
apiKey: string // Your LockLLM API key (required)
baseURL?: string // Custom proxy URL (default: https://api.lockllm.com/v1/proxy/anthropic)
proxyOptions?: ProxyRequestOptions // Security and routing configuration
[key: string]: any // Other Anthropic client options passed through
}
Link to section: FunctionsFunctions
Link to section: Wrapper FunctionsWrapper Functions
// Provider-specific wrappers (all accept GenericClientConfig with proxyOptions)
function createOpenAI(config: CreateOpenAIConfig): OpenAI
function createAnthropic(config: CreateAnthropicConfig): Anthropic
function createGroq(config: GenericClientConfig): OpenAI
function createDeepSeek(config: GenericClientConfig): OpenAI
function createPerplexity(config: GenericClientConfig): OpenAI
function createMistral(config: GenericClientConfig): OpenAI
function createOpenRouter(config: GenericClientConfig): OpenAI
function createTogether(config: GenericClientConfig): OpenAI
function createXAI(config: GenericClientConfig): OpenAI
function createFireworks(config: GenericClientConfig): OpenAI
function createAnyscale(config: GenericClientConfig): OpenAI
function createHuggingFace(config: GenericClientConfig): OpenAI
function createGemini(config: GenericClientConfig): OpenAI
function createCohere(config: GenericClientConfig): Cohere
function createAzure(config: GenericClientConfig): OpenAI
function createBedrock(config: GenericClientConfig): OpenAI
function createVertexAI(config: GenericClientConfig): OpenAI
// Generic factory functions
function createOpenAICompatible(provider: ProviderName, config: GenericClientConfig): OpenAI
function createClient<T = any>(provider: ProviderName, ClientConstructor: new (config: any) => T, config: GenericClientConfig): T
Note on return types: Wrapper functions return the actual provider SDK instance at runtime (e.g., an OpenAI client, an Anthropic client), but TypeScript cannot infer the provider's types automatically. For full IDE autocomplete and type checking, add an explicit type annotation when creating the client:
import type OpenAI from 'openai';
const openai: OpenAI = createOpenAI({ apiKey: '...' });
See the Generic Types section for more details.
Link to section: Utility FunctionsUtility Functions
// URL utilities
function getProxyURL(provider: ProviderName): string
function getAllProxyURLs(): Record<ProviderName, string>
function getUniversalProxyURL(): string
// Header and metadata utilities
function buildLockLLMHeaders(options?: ProxyRequestOptions): Record<string, string>
function parseProxyMetadata(headers: Headers | Record<string, string>): ProxyResponseMetadata
function decodeDetailField(detail: string): any
Link to section: Error ClassesError Classes
class LockLLMError extends Error {
type: string
code?: string
status?: number
requestId?: string
}
class AuthenticationError extends LockLLMError {} // 401
class RateLimitError extends LockLLMError { // 429
retryAfter?: number
}
class PromptInjectionError extends LockLLMError { // 400
scanResult: ScanResult
}
class PolicyViolationError extends LockLLMError { // 403
violated_policies: Array<{
policy_name: string
violated_categories: Array<{ name: string }>
violation_details?: string
}>
}
class AbuseDetectedError extends LockLLMError { // 400
abuse_details: {
confidence: number
abuse_types: string[]
indicators: {
bot_score: number
repetition_score: number
resource_score: number
pattern_score: number
}
recommendation?: string
details?: {
recommendation?: string
bot_indicators?: string[]
repetition_indicators?: string[]
resource_indicators?: string[]
pattern_indicators?: string[]
}
}
}
class PIIDetectedError extends LockLLMError { // 403
pii_details: {
entity_types: string[]
entity_count: number
}
}
class InsufficientCreditsError extends LockLLMError { // 402
current_balance: number
estimated_cost: number
}
class UpstreamError extends LockLLMError { // 502
provider?: string
upstreamStatus?: number
}
class ConfigurationError extends LockLLMError {} // 400
class NetworkError extends LockLLMError { // 0
cause?: Error
}
Link to section: Error Codes ReferenceError Codes Reference
When handling errors, you can check the error.code property for fine-grained error identification. This is especially useful in contexts where instanceof checks are not available (e.g., serialized server action responses).
| Error Class | error.code | HTTP Status | Description |
|---|---|---|---|
| PromptInjectionError | prompt_injection_detected | 400 | Malicious prompt injection detected |
| PolicyViolationError | policy_violation | 403 | Custom content policy violated |
| PIIDetectedError | pii_detected | 403 | Personal information detected in input |
| AbuseDetectedError | abuse_detected | 400 | AI abuse pattern detected |
| InsufficientCreditsError | insufficient_credits | 402 | Not enough credits for the request |
| InsufficientCreditsError | insufficient_routing_credits | 402 | Not enough credits for routing fee |
| InsufficientCreditsError | no_balance | 402 | No balance record found |
| AuthenticationError | unauthorized | 401 | Invalid or missing API key |
| RateLimitError | rate_limited | 429 | Request rate limit exceeded |
| UpstreamError | provider_error | 502 | AI provider returned an error |
| ConfigurationError | invalid_config | 400 | Invalid SDK or client configuration |
| ConfigurationError | no_upstream_key | 400 | No provider API key configured for this provider |
| ConfigurationError | no_byok_key | 400 | BYOK key required but not configured |
| NetworkError | connection_failed | 0 | Network request failed |
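For UIs that surface these failures, the codes in the table map cleanly onto user-facing copy. A sketch of a code-to-message lookup (the message wording is illustrative, not part of the SDK; adapt it to your product's voice):

```typescript
// User-facing messages keyed by the error codes in the table above.
const ERROR_MESSAGES: Record<string, string> = {
  prompt_injection_detected: 'Your message was flagged by security screening.',
  policy_violation: 'Your message violates the content policy.',
  pii_detected: 'Please remove personal information and try again.',
  abuse_detected: 'This request looks automated and was blocked.',
  insufficient_credits: 'Not enough credits to complete this request.',
  insufficient_routing_credits: 'Not enough credits to cover the routing fee.',
  no_balance: 'No billing balance found for this account.',
  unauthorized: 'Authentication failed. Check your API key.',
  rate_limited: 'Too many requests. Please slow down.',
  provider_error: 'The AI provider returned an error. Try again shortly.',
  invalid_config: 'The client is misconfigured.',
  no_upstream_key: 'No provider API key is configured.',
  no_byok_key: 'A BYOK provider key is required but not configured.',
  connection_failed: 'Network error. Check your connection.',
};

// Fall back to a generic message for unknown or missing codes,
// so a new server-side code never crashes the UI.
function userMessage(code: string | undefined): string {
  return (code && ERROR_MESSAGES[code]) || 'Something went wrong. Please try again.';
}

console.log(userMessage('rate_limited'));
console.log(userMessage(undefined));
```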
Error Data Types
These types are used for constructing error data objects and are useful for advanced error handling patterns:
interface LockLLMErrorData {
message: string // Error message
type: string // Error type identifier
code?: string // Specific error code
status?: number // HTTP status code
requestId?: string // Request ID for tracking
[key: string]: any // Additional properties from the API response
}
interface PromptInjectionErrorData extends LockLLMErrorData {
scanResult: ScanResult // Detailed scan results (safe, label, confidence, injection, sensitivity)
}
interface PolicyViolationErrorData extends LockLLMErrorData {
violated_policies: Array<{
policy_name: string
violated_categories: Array<{ name: string }>
violation_details?: string
}>
}
interface AbuseDetectedErrorData extends LockLLMErrorData {
abuse_details: {
confidence: number
abuse_types: string[]
indicators: {
bot_score: number
repetition_score: number
resource_score: number
pattern_score: number
}
recommendation?: string
details?: {
recommendation?: string
bot_indicators?: string[]
repetition_indicators?: string[]
resource_indicators?: string[]
pattern_indicators?: string[]
}
}
}
interface PIIDetectedErrorData extends LockLLMErrorData {
pii_details: {
entity_types: string[] // Types of PII entities found
entity_count: number // Number of PII entities found
}
}
interface InsufficientCreditsErrorData extends LockLLMErrorData {
current_balance: number // Current credit balance
estimated_cost: number // Estimated cost of the request
}
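Because class identity does not survive JSON serialization (the server-action case noted above), these data shapes can be narrowed with type guards keyed on the code field instead of instanceof. A self-contained sketch, with the relevant interfaces restated locally:

```typescript
// Restated locally for a runnable sketch; in application code these
// interfaces come from @lockllm/sdk.
interface LockLLMErrorData {
  message: string;
  type: string;
  code?: string;
  status?: number;
  [key: string]: any;
}

interface PIIDetectedErrorData extends LockLLMErrorData {
  pii_details: { entity_types: string[]; entity_count: number };
}

// Narrow serialized error data by its code, since `instanceof`
// is useless after a JSON round-trip.
function isPIIDetectedErrorData(data: LockLLMErrorData): data is PIIDetectedErrorData {
  return data.code === 'pii_detected' && typeof data.pii_details === 'object';
}

const payload: LockLLMErrorData = {
  message: 'PII detected in input',
  type: 'invalid_request_error',
  code: 'pii_detected',
  status: 403,
  pii_details: { entity_types: ['email', 'phone'], entity_count: 2 },
};

if (isPIIDetectedErrorData(payload)) {
  // Inside this branch, TypeScript knows pii_details exists and is typed.
  console.log(`Found ${payload.pii_details.entity_count} PII entities:`,
    payload.pii_details.entity_types.join(', '));
}
```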
Best Practices
Security
- Never hardcode API keys - Use environment variables or secure vaults
- Log security incidents - Track blocked requests with request IDs
- Set appropriate sensitivity - Balance security needs with false positives
- Handle errors gracefully - Provide user-friendly error messages
- Monitor request patterns - Watch for attack trends in dashboard
- Rotate keys regularly - Update API keys periodically
- Use HTTPS only - Never send API keys over unencrypted connections
- Enable abuse detection - Opt in with abuseAction: 'block' for endpoints facing public users
- Use custom policies - Define content rules specific to your application in the dashboard
- Handle policy violations - Catch PolicyViolationError separately for user-friendly messaging
Performance
- Use wrapper functions - Most efficient integration method
- Enable response caching - Cache is on by default; adjust TTL based on content freshness needs
- Set reasonable timeouts - Balance user experience with reliability
- Monitor latency - Track P50, P95, P99 in production
- Use connection pooling - SDK handles this automatically
- Use smart routing - Set routeAction: 'auto' to optimize cost vs. quality per request
- Monitor credit usage - Check ProxyResponseMetadata for credits deducted and balance
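For the latency-monitoring item above, a small nearest-rank percentile helper goes a long way before reaching for a full metrics library. A self-contained sketch (the sample values are made up):

```typescript
// Nearest-rank percentile over recorded request latencies, in milliseconds.
// Valid for p in (0, 100].
function percentile(samples: number[], p: number): number {
  if (samples.length === 0) throw new Error('no samples');
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.min(rank, sorted.length) - 1];
}

// Record one latency per request, e.g. around each LLM call:
//   const start = Date.now();
//   await openai.chat.completions.create(...);
//   latencies.push(Date.now() - start);
const latencies = [120, 95, 210, 180, 110, 450, 130, 105, 99, 240];

console.log('P50:', percentile(latencies, 50));
console.log('P95:', percentile(latencies, 95));
console.log('P99:', percentile(latencies, 99));
```

In production you would keep a bounded rolling window of samples rather than an unbounded array.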
Production Deployment
- Test sensitivity levels - Validate with real user data
- Implement monitoring - Track blocked requests and false positives
- Set up alerting - Get notified of security incidents
- Review logs regularly - Analyze attack patterns
- Keep SDK updated - Benefit from latest improvements
- Document incidents - Maintain security incident log
- Load test - Verify performance under expected load
Migration Guides
From Direct API Integration
If you're currently calling LLM APIs directly:
// Before: Direct OpenAI API call
import OpenAI from 'openai';
const openai = new OpenAI({
apiKey: process.env.OPENAI_API_KEY
});
const response = await openai.chat.completions.create({
model: "gpt-4",
messages: [{ role: "user", content: userInput }]
});
// After: With LockLLM security (one line change)
import { createOpenAI } from '@lockllm/sdk/wrappers';
const openai = createOpenAI({
apiKey: process.env.LOCKLLM_API_KEY // Only change
});
// Everything else stays the same
const response = await openai.chat.completions.create({
model: "gpt-4",
messages: [{ role: "user", content: userInput }]
});
From OpenAI SDK
Minimal changes required:
// Before
import OpenAI from 'openai';
const openai = new OpenAI({
apiKey: process.env.OPENAI_API_KEY
});
// After
import { createOpenAI } from '@lockllm/sdk/wrappers';
const openai = createOpenAI({
apiKey: process.env.LOCKLLM_API_KEY // Use LockLLM key
});
// All other code remains unchanged
From Anthropic SDK
Minimal changes required:
// Before
import Anthropic from '@anthropic-ai/sdk';
const anthropic = new Anthropic({
apiKey: process.env.ANTHROPIC_API_KEY
});
// After
import { createAnthropic } from '@lockllm/sdk/wrappers';
const anthropic = createAnthropic({
apiKey: process.env.LOCKLLM_API_KEY // Use LockLLM key
});
// All other code remains unchanged
Troubleshooting
Common Issues
"Invalid API key" error (401)
- Verify your LockLLM API key is correct
- Check the key hasn't been revoked in the dashboard
- Ensure you're using your LockLLM key, not your provider key
"No provider API key configured" error (400)
- Add your provider API key (OpenAI, Anthropic, etc.) in the dashboard
- Navigate to Proxy Settings and configure provider keys
- Ensure the provider key is enabled (toggle switch on)
"Could not extract prompt from request" error (400)
- Verify request body format matches provider API spec
- Check you're using the correct SDK version
- Ensure messages array is properly formatted
"Policy violation" error (403)
- Review which custom policies are enabled in the dashboard
- Check if the input triggers any of your custom content policies
- Use policyAction: 'allow_with_warning' instead of 'block' to get warnings without blocking
- Inspect the violated_policies array on PolicyViolationError for details
"Insufficient credits" error (402)
- Check your credit balance in the dashboard billing page
- The InsufficientCreditsError includes current_balance and estimated_cost
- Top up your balance or switch to BYOK mode to use your own provider keys
"Abuse detected" error (400)
- This only occurs when abuseAction is set to 'block'
- Review the abuse_details on AbuseDetectedError for specific indicators
- Adjust your abuse detection settings if you see false positives
- Set abuseAction: 'allow_with_warning' to get warnings instead of blocks
"PII detected" error (403)
- This only occurs when piiAction is set to 'block'
- The PIIDetectedError includes pii_details.entity_types and pii_details.entity_count
- Switch to piiAction: 'strip' to automatically redact PII instead of blocking
- Switch to piiAction: 'allow_with_warning' to detect and report PII without blocking
- Review the detected entity types to determine if they are expected for your use case
High latency
- Check your network connection
- Verify LockLLM API status at status.lockllm.com
- Consider adjusting timeout settings
- Review provider API latency
TypeScript errors
- Ensure TypeScript 4.5+ is installed
- Check peer dependencies are installed (openai, @anthropic-ai/sdk)
- Run npm install --save-dev @types/node
- Verify tsconfig.json includes proper settings
Debugging Tips
Enable verbose logging:
const lockllm = new LockLLM({
apiKey: process.env.LOCKLLM_API_KEY
});
try {
const result = await lockllm.scan({
input: userPrompt
});
console.log('Scan result:', {
safe: result.safe,
injection: result.injection,
confidence: result.confidence,
request_id: result.request_id
});
} catch (error) {
console.error('Error details:', {
type: error.type,
code: error.code,
message: error.message,
requestId: error.requestId,
status: error.status
});
}
Getting Help
- Documentation: https://www.lockllm.com/docs
- GitHub Issues: https://github.com/lockllm/lockllm-npm/issues
- Email Support: [email protected]
- Status Page: status.lockllm.com
FAQ
How do I install the SDK?
Install using npm, yarn, pnpm, or bun:
npm install @lockllm/sdk
yarn add @lockllm/sdk
pnpm add @lockllm/sdk
bun add @lockllm/sdk
The SDK requires Node.js 14+ and works in Node.js, browsers, and Edge runtimes.
Does the SDK work as a drop-in replacement for OpenAI and Anthropic?
Yes. Use createOpenAI() or createAnthropic() to get wrapped clients that work exactly like the official SDKs. All methods, streaming, and function calling are supported. Prompts are automatically scanned before being sent to the provider.
What TypeScript support is available?
The SDK is TypeScript-first with comprehensive type definitions. It includes full type safety, generics, type inference, and IDE autocomplete for all APIs. TypeScript 4.5+ is supported.
Which AI providers are supported?
17+ providers are supported: OpenAI, Anthropic, Groq, DeepSeek, Perplexity, Mistral, OpenRouter, Together AI, xAI (Grok), Fireworks AI, Anyscale, Hugging Face, Google Gemini, Cohere, Azure OpenAI, AWS Bedrock, and Google Vertex AI. All providers support custom endpoint URLs for self-hosted and private deployments.
How do I handle errors?
The SDK provides 11 typed error classes: AuthenticationError, RateLimitError, PromptInjectionError, PolicyViolationError, AbuseDetectedError, PIIDetectedError, InsufficientCreditsError, UpstreamError, ConfigurationError, NetworkError, and base LockLLMError. Use try-catch blocks with instanceof checks for proper error handling. See the Error Handling section for comprehensive examples.
Does the SDK support streaming?
Yes. All provider wrappers fully support streaming responses. Use stream: true in your requests and iterate with for await...of loops.
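The consumption loop is identical across providers. A self-contained sketch of the pattern, using a simulated async generator in place of a real stream: true response from a wrapped client:

```typescript
// Simulated stream; a real call would be:
//   const stream = await openai.chat.completions.create({ ..., stream: true });
type Chunk = { choices: Array<{ delta: { content?: string } }> };

async function* fakeStream(): AsyncGenerator<Chunk> {
  for (const piece of ['Hello', ', ', 'world', '!']) {
    yield { choices: [{ delta: { content: piece } }] };
  }
}

// Accumulate deltas while (in a real app) also rendering them incrementally.
async function consume(stream: AsyncIterable<Chunk>): Promise<string> {
  let full = '';
  for await (const chunk of stream) {
    full += chunk.choices[0]?.delta.content ?? '';
  }
  return full;
}

consume(fakeStream()).then((text) => console.log(text)); // "Hello, world!"
```

Swapping fakeStream() for the real stream object requires no change to the loop itself.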
Is it really free?
Scanning is free with unlimited usage. For proxy mode, you bring your own provider API keys (BYOK model) and pay your provider directly. Optional paid features include smart routing (5% of cost savings) and detection fees ($0.0001 per detected threat). See the pricing page for full details.
How do I enforce custom content policies?
Create custom policies in the LockLLM dashboard. Once defined, they are automatically checked when using combined or policy_only scan mode. Set policyAction: 'block' to block violating content, or 'allow_with_warning' to get warnings without blocking.
How does abuse detection work?
Abuse detection is opt-in and disabled by default. Enable it by setting abuseAction: 'block' or 'allow_with_warning' in either your proxyOptions or ScanOptions. It detects bot-generated content, repetitive prompts, and resource exhaustion attempts. See the AI Abuse Detection section for details.
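As a rough sketch of opting in via a wrapper (assumptions: proxyOptions is accepted at client creation and AbuseDetectedError is exported from the package root; verify both against the SDK's type definitions):

```typescript
import { createOpenAI } from '@lockllm/sdk/wrappers';
import { AbuseDetectedError } from '@lockllm/sdk';

const openai = createOpenAI({
  apiKey: process.env.LOCKLLM_API_KEY,
  proxyOptions: { abuseAction: 'block' }, // opt-in; detection is off by default
});

async function ask(userInput: string) {
  try {
    return await openai.chat.completions.create({
      model: 'gpt-4',
      messages: [{ role: 'user', content: userInput }],
    });
  } catch (error) {
    if (error instanceof AbuseDetectedError) {
      // Inspect per-signal scores before deciding how to respond.
      console.warn('Abuse indicators:', error.abuse_details.indicators);
      return null;
    }
    throw error;
  }
}
```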
What is smart routing?
Smart routing automatically selects the best model for each request based on task type and complexity. Enable it with routeAction: 'auto' in proxyOptions, or use routeAction: 'custom' with your own rules from the dashboard. Routing can reduce costs by matching simpler tasks to more efficient models.
How does response caching work?
Response caching is enabled by default in proxy mode with a 1-hour TTL. Identical requests return cached responses to save costs and reduce latency. Disable it with cacheResponse: false or customize the TTL with cacheTTL. Streaming requests are never cached.
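A sketch of tuning the cache (assumptions: the cache options live in proxyOptions and cacheTTL is expressed in seconds; verify both against the SDK's type definitions):

```typescript
import { createOpenAI } from '@lockllm/sdk/wrappers';

const openai = createOpenAI({
  apiKey: process.env.LOCKLLM_API_KEY,
  proxyOptions: {
    cacheResponse: true, // default; set false to disable caching entirely
    cacheTTL: 300,       // shorten from the 1-hour default for fresher content
  },
});
```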
What is the universal proxy?
The universal proxy (getUniversalProxyURL()) lets you access 200+ models from all providers without configuring individual provider API keys. It uses LockLLM credits instead of BYOK. Useful for rapid prototyping or when you want a single billing point.
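A sketch of pointing a standard OpenAI client at the universal proxy (the baseURL pairing with the OpenAI SDK is an assumption; check the universal proxy documentation for the supported client setup):

```typescript
import OpenAI from 'openai';
import { getUniversalProxyURL } from '@lockllm/sdk';

// Use your LockLLM key; no per-provider keys are needed in this mode.
const client = new OpenAI({
  apiKey: process.env.LOCKLLM_API_KEY,
  baseURL: getUniversalProxyURL(),
});
```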
What is the difference between scan modes?
- normal checks for core injection attacks only
- policy_only checks against your custom content policies only
- combined (default) checks both core security and custom policies
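A sketch of selecting a mode on a scan call (the field name mode on ScanOptions is an assumption; check the SDK's type definitions for the exact name):

```typescript
// Fragment: assumes a `lockllm` client and `userPrompt` from earlier examples.
const result = await lockllm.scan({
  input: userPrompt,
  mode: 'policy_only', // 'normal' | 'policy_only' | 'combined' (default)
});
```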
What is PII detection and redaction?
PII (Personally Identifiable Information) detection is an opt-in feature that scans prompts for sensitive personal data like names, email addresses, phone numbers, social security numbers, and credit card numbers. Enable it with piiAction: 'strip' to automatically redact PII before it reaches the AI provider, 'block' to reject requests containing PII, or 'allow_with_warning' to detect and report PII without blocking.
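To build intuition for what 'strip' produces, here is a toy regex-based sketch covering just two entity types; the SDK's detector covers many more types with far higher accuracy, so this is illustration only:

```typescript
// Toy placeholder-style redaction for emails and phone numbers.
const PATTERNS: Array<[RegExp, string]> = [
  [/[\w.+-]+@[\w-]+\.[\w.]+/g, '[EMAIL]'],
  [/\+?\d[\d\s().-]{7,}\d/g, '[PHONE]'],
];

function stripPII(text: string): string {
  // Apply each pattern in turn, replacing matches with a type placeholder.
  return PATTERNS.reduce((acc, [re, tag]) => acc.replace(re, tag), text);
}

console.log(stripPII('Reach me at jane@example.com or +1 555-123-4567.'));
// → "Reach me at [EMAIL] or [PHONE]."
```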
How do I enable PII detection?
Set piiAction in your proxyOptions (for wrappers) or ScanOptions (for the scan API). For example: proxyOptions: { piiAction: 'strip' }. PII detection is disabled by default and must be explicitly opted into. Three modes are available: 'strip' replaces PII with placeholders, 'block' rejects the request, and 'allow_with_warning' reports detected PII without blocking.
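A sketch of the wrapper form (assuming proxyOptions is passed at client creation; check the SDK's type definitions):

```typescript
import { createOpenAI } from '@lockllm/sdk/wrappers';

const openai = createOpenAI({
  apiKey: process.env.LOCKLLM_API_KEY,
  proxyOptions: { piiAction: 'strip' }, // redact PII before the provider sees it
});
```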
GitHub Repository
View the source code, report issues, and contribute:
Repository: https://github.com/lockllm/lockllm-npm
NPM Package: https://www.npmjs.com/package/@lockllm/sdk
Next Steps
- View JavaScript/TypeScript SDK integration page for more examples
- Read API Reference for REST API details
- Explore Best Practices for production deployments
- Check out Proxy Mode for alternative integration
- Configure Webhooks for security alerts
- Browse Dashboard documentation