JavaScript/TypeScript SDK
Native JavaScript/TypeScript SDK for prompt security with drop-in replacements for OpenAI, Anthropic, and 15 other providers.
Introduction
The LockLLM JavaScript/TypeScript SDK is a production-ready library that provides comprehensive AI security for your LLM applications. Built with TypeScript and designed for modern JavaScript environments, it offers drop-in replacements for popular AI provider SDKs with automatic prompt injection detection and jailbreak prevention.
Key features:
- Real-time security scanning with minimal latency (<250ms)
- Drop-in replacements for 17+ AI providers (custom endpoint support for each)
- Full TypeScript support with comprehensive type definitions
- Zero external dependencies
- Works in Node.js, browsers, and Edge runtimes
- ESM and CommonJS support
- Streaming-compatible with all providers
- Completely free with unlimited usage
Use cases:
- Production LLM applications requiring security
- AI agents and autonomous systems
- Chatbots and conversational interfaces
- RAG (Retrieval Augmented Generation) systems
- Multi-tenant AI applications
- Enterprise AI deployments
Installation
Install the SDK using your preferred package manager:
# npm
npm install @lockllm/sdk
# yarn
yarn add @lockllm/sdk
# pnpm (recommended - faster, saves disk space)
pnpm add @lockllm/sdk
# bun
bun add @lockllm/sdk
Requirements:
- Node.js 14.0 or higher
- TypeScript 4.5 or higher (for TypeScript projects)
Peer Dependencies
For provider wrapper functions, install the relevant official SDKs:
# For OpenAI and OpenAI-compatible providers
npm install openai
# For Anthropic Claude
npm install @anthropic-ai/sdk
# For Cohere
npm install cohere-ai
Provider SDK mapping:
- openai - OpenAI, Groq, DeepSeek, Mistral, Perplexity, OpenRouter, Together AI, xAI, Fireworks, Anyscale, Hugging Face, Gemini, Azure, Bedrock, Vertex AI
- @anthropic-ai/sdk - Anthropic Claude
- cohere-ai - Cohere (optional)
Peer dependencies are only required if you use the wrapper functions for those providers.
Quick Start
Step 1: Get Your API Keys
- Visit lockllm.com and create a free account
- Navigate to API Keys section and copy your LockLLM API key
- Go to Proxy Settings and add your provider API keys (OpenAI, Anthropic, etc.)
Your provider keys are encrypted and stored securely. You'll only need your LockLLM API key in your code.
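For local development, one common approach is to keep the key in a .env file and load it through your framework or a dotenv loader (the value below is a placeholder):
# .env
LOCKLLM_API_KEY=your-lockllm-api-key
Add .env to your .gitignore so the key is never committed.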
Step 2: Basic Usage
Choose from three integration methods: wrapper functions (easiest), direct scan API, or official SDKs with custom baseURL.
Wrapper Functions (Recommended)
The simplest way to add security is to replace your SDK initialization:
import { createOpenAI } from '@lockllm/sdk/wrappers';
// Before:
// const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
// After (one line change):
const openai = createOpenAI({
apiKey: process.env.LOCKLLM_API_KEY
});
// Everything else works exactly the same
const response = await openai.chat.completions.create({
model: "gpt-4",
messages: [{ role: "user", content: userInput }]
});
console.log(response.choices[0].message.content);
Direct Scan API
For manual control and custom workflows:
import { LockLLM } from '@lockllm/sdk';
const lockllm = new LockLLM({
apiKey: process.env.LOCKLLM_API_KEY
});
// Scan user input before processing
const result = await lockllm.scan({
input: userPrompt,
sensitivity: "medium" // "low" | "medium" | "high"
});
if (!result.safe) {
console.log("Malicious input detected!");
console.log("Injection score:", result.injection);
console.log("Confidence:", result.confidence);
console.log("Request ID:", result.request_id);
// Handle security incident
return { error: "Invalid input detected" };
}
// Safe to proceed
const response = await yourLLMCall(userPrompt);
Official SDKs with Proxy
Use official SDKs with LockLLM's proxy:
import OpenAI from 'openai';
import { getProxyURL } from '@lockllm/sdk';
const client = new OpenAI({
apiKey: process.env.LOCKLLM_API_KEY,
baseURL: getProxyURL('openai')
});
// Works exactly like the official SDK
const response = await client.chat.completions.create({
model: "gpt-4",
messages: [{ role: "user", content: "Hello!" }]
});
Provider Wrappers
LockLLM provides drop-in replacements for 17+ AI providers with custom endpoint support. All wrappers work identically to the official SDKs with automatic security scanning.
OpenAI
import { createOpenAI } from '@lockllm/sdk/wrappers';
const openai = createOpenAI({
apiKey: process.env.LOCKLLM_API_KEY
});
// Chat completions
const response = await openai.chat.completions.create({
model: "gpt-4",
messages: [
{ role: "system", content: "You are a helpful assistant." },
{ role: "user", content: userInput }
],
temperature: 0.7,
max_tokens: 1000
});
// Streaming
const stream = await openai.chat.completions.create({
model: "gpt-4",
messages: [{ role: "user", content: "Count from 1 to 10" }],
stream: true
});
for await (const chunk of stream) {
process.stdout.write(chunk.choices[0]?.delta?.content || '');
}
// Function calling
const functionResponse = await openai.chat.completions.create({
model: "gpt-4",
messages: [{ role: "user", content: "What's the weather in Boston?" }],
functions: [{
name: "get_weather",
description: "Get the current weather in a location",
parameters: {
type: "object",
properties: {
location: { type: "string", description: "City name" },
unit: { type: "string", enum: ["celsius", "fahrenheit"] }
},
required: ["location"]
}
}]
});
Anthropic Claude
import { createAnthropic } from '@lockllm/sdk/wrappers';
const anthropic = createAnthropic({
apiKey: process.env.LOCKLLM_API_KEY
});
// Messages API
const message = await anthropic.messages.create({
model: "claude-3-5-sonnet-20241022",
max_tokens: 1024,
messages: [
{ role: "user", content: userInput }
]
});
console.log(message.content[0].text);
// Streaming
const stream = await anthropic.messages.create({
model: "claude-3-5-sonnet-20241022",
max_tokens: 1024,
messages: [{ role: "user", content: "Write a poem" }],
stream: true
});
for await (const event of stream) {
if (event.type === 'content_block_delta' &&
event.delta.type === 'text_delta') {
process.stdout.write(event.delta.text);
}
}
Groq (Fast Inference)
import { createGroq } from '@lockllm/sdk/wrappers';
const groq = createGroq({
apiKey: process.env.LOCKLLM_API_KEY
});
const response = await groq.chat.completions.create({
model: 'llama-3.1-70b-versatile',
messages: [{ role: 'user', content: userInput }]
});
DeepSeek
import { createDeepSeek } from '@lockllm/sdk/wrappers';
const deepseek = createDeepSeek({
apiKey: process.env.LOCKLLM_API_KEY
});
const response = await deepseek.chat.completions.create({
model: 'deepseek-chat',
messages: [{ role: 'user', content: userInput }]
});
Perplexity
import { createPerplexity } from '@lockllm/sdk/wrappers';
const perplexity = createPerplexity({
apiKey: process.env.LOCKLLM_API_KEY
});
const response = await perplexity.chat.completions.create({
model: 'llama-3.1-sonar-huge-128k-online',
messages: [{ role: 'user', content: userInput }]
});
All Supported Providers
Import any wrapper function from @lockllm/sdk/wrappers:
import {
createOpenAI, // OpenAI GPT models
createAnthropic, // Anthropic Claude
createGroq, // Groq LPU inference
createDeepSeek, // DeepSeek models
createPerplexity, // Perplexity (with internet)
createMistral, // Mistral AI
createOpenRouter, // OpenRouter (multi-provider)
createTogether, // Together AI
createXAI, // xAI Grok
createFireworks, // Fireworks AI
createAnyscale, // Anyscale Endpoints
createHuggingFace, // Hugging Face Inference
createGemini, // Google Gemini
createCohere, // Cohere
createAzure, // Azure OpenAI
createBedrock, // AWS Bedrock
createVertexAI // Google Vertex AI
} from '@lockllm/sdk/wrappers';
Provider compatibility:
- 15 providers use the OpenAI-compatible API (require the openai package) - see the sketch after this list for the shared pattern
- Anthropic uses its own SDK (requires @anthropic-ai/sdk)
- Cohere uses its own SDK (requires cohere-ai, optional)
- All providers support custom endpoint URLs via the dashboard
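Every OpenAI-compatible wrapper follows the same pattern shown for Groq and DeepSeek above. As a hedged sketch, here is Mistral (the model name is illustrative):
import { createMistral } from '@lockllm/sdk/wrappers';
const mistral = createMistral({
apiKey: process.env.LOCKLLM_API_KEY
});
const response = await mistral.chat.completions.create({
model: 'mistral-large-latest', // illustrative model name
messages: [{ role: 'user', content: userInput }]
});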
Configuration
LockLLM Client Configuration
import { LockLLM, LockLLMConfig } from '@lockllm/sdk';
const config: LockLLMConfig = {
apiKey: process.env.LOCKLLM_API_KEY, // Required
baseURL: "https://api.lockllm.com", // Optional: custom endpoint
timeout: 60000, // Optional: request timeout (ms)
maxRetries: 3 // Optional: max retry attempts
};
const lockllm = new LockLLM(config);
Sensitivity Levels
Control detection strictness with the sensitivity parameter:
// Low sensitivity - fewer false positives (threshold: 40)
// Use for: creative applications, exploratory use cases
const lowResult = await lockllm.scan({
input: userPrompt,
sensitivity: "low"
});
// Medium sensitivity - balanced detection (threshold: 25) - DEFAULT
// Use for: general user inputs, standard applications
const mediumResult = await lockllm.scan({
input: userPrompt,
sensitivity: "medium"
});
// High sensitivity - maximum protection (threshold: 10)
// Use for: sensitive operations, admin panels, data exports
const highResult = await lockllm.scan({
input: userPrompt,
sensitivity: "high"
});
Choosing sensitivity (a helper sketch follows this list):
- High: Critical systems (admin, payments, sensitive data)
- Medium: General applications (default, recommended)
- Low: Creative tools (writing assistants, brainstorming)
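One way to apply this guidance is a small helper that maps an application context to a sensitivity level. This is a minimal sketch; the context names are illustrative and not part of the SDK:
import { LockLLM, Sensitivity } from '@lockllm/sdk';
const lockllm = new LockLLM({ apiKey: process.env.LOCKLLM_API_KEY });
// Illustrative mapping from application context to detection strictness
function sensitivityFor(context: 'admin' | 'chat' | 'creative'): Sensitivity {
if (context === 'admin') return 'high'; // sensitive operations
if (context === 'creative') return 'low'; // fewer false positives
return 'medium'; // default for general inputs
}
const result = await lockllm.scan({
input: userPrompt,
sensitivity: sensitivityFor('chat')
});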
Custom Endpoints
All providers support custom endpoint URLs for:
- Self-hosted LLM deployments (OpenAI-compatible APIs)
- Azure OpenAI resources with custom endpoints
- Alternative API gateways and reverse proxies
- Private cloud or air-gapped deployments
- Development and staging environments
How it works: Configure custom endpoints in the LockLLM dashboard when adding any provider API key. The SDK wrappers automatically use your custom endpoint URL.
// The wrapper automatically uses your custom endpoint
const azure = createAzure({
apiKey: process.env.LOCKLLM_API_KEY
});
// Your custom Azure endpoint is configured in the dashboard:
// - Endpoint: https://your-resource.openai.azure.com
// - Deployment: gpt-4
// - API Version: 2024-10-21
Example - Self-hosted model: If you have a self-hosted model with an OpenAI-compatible API, configure it in the dashboard using one of the OpenAI-compatible provider wrappers (e.g., OpenAI, Groq) with your custom endpoint URL.
// Use OpenAI wrapper with custom endpoint configured in dashboard
const openai = createOpenAI({
apiKey: process.env.LOCKLLM_API_KEY
});
// Dashboard configuration:
// - Provider: OpenAI
// - Custom Endpoint: https://your-self-hosted-llm.com/v1
// - API Key: your-model-api-key
Request Options
Override configuration per-request:
// Per-request timeout
const result = await lockllm.scan({
input: userPrompt,
sensitivity: "high"
}, {
timeout: 30000 // 30 second timeout for this request
});
// Wrapper functions support all official SDK options
const response = await openai.chat.completions.create({
model: "gpt-4",
messages: [{ role: "user", content: userInput }],
// ... any OpenAI options
}, {
timeout: 30000 // Custom timeout
});
Advanced Features
Streaming Responses
All provider wrappers support streaming:
// OpenAI streaming
const stream = await openai.chat.completions.create({
model: "gpt-4",
messages: [{ role: "user", content: "Write a story" }],
stream: true
});
for await (const chunk of stream) {
const content = chunk.choices[0]?.delta?.content;
if (content) {
process.stdout.write(content);
}
}
// Anthropic streaming
const anthropicStream = await anthropic.messages.create({
model: "claude-3-5-sonnet-20241022",
max_tokens: 1024,
messages: [{ role: "user", content: "Write a story" }],
stream: true
});
for await (const event of anthropicStream) {
if (event.type === 'content_block_delta' &&
event.delta.type === 'text_delta') {
process.stdout.write(event.delta.text);
}
}
Function Calling
OpenAI function calling works seamlessly:
const response = await openai.chat.completions.create({
model: "gpt-4",
messages: [{ role: "user", content: "What's the weather in Boston?" }],
functions: [{
name: "get_weather",
description: "Get the current weather in a location",
parameters: {
type: "object",
properties: {
location: { type: "string", description: "City name" },
unit: { type: "string", enum: ["celsius", "fahrenheit"] }
},
required: ["location"]
}
}],
function_call: "auto"
});
if (response.choices[0].message.function_call) {
const functionCall = response.choices[0].message.function_call;
const args = JSON.parse(functionCall.arguments);
// Call your function with the parsed arguments
const weather = await getWeather(args.location, args.unit);
// Send function result back to LLM
const finalResponse = await openai.chat.completions.create({
model: "gpt-4",
messages: [
{ role: "user", content: "What's the weather in Boston?" },
response.choices[0].message,
{ role: "function", name: "get_weather", content: JSON.stringify(weather) }
]
});
}
Multi-Turn Conversations
Maintain conversation context with message history:
const messages = [
{ role: "system", content: "You are a helpful assistant." },
{ role: "user", content: "What is the capital of France?" }
];
let response = await openai.chat.completions.create({
model: "gpt-4",
messages
});
// Add assistant response to history
messages.push({
role: "assistant",
content: response.choices[0].message.content
});
// Continue conversation
messages.push({
role: "user",
content: "What is its population?"
});
response = await openai.chat.completions.create({
model: "gpt-4",
messages
});
Express.js Middleware
Integrate with Express.js for automatic request scanning:
import express from 'express';
import { LockLLM } from '@lockllm/sdk';
import { createOpenAI } from '@lockllm/sdk/wrappers';
const app = express();
const lockllm = new LockLLM({
apiKey: process.env.LOCKLLM_API_KEY
});
// LLM client used by the routes below
const openai = createOpenAI({
apiKey: process.env.LOCKLLM_API_KEY
});
app.use(express.json());
// Middleware to scan all request bodies
app.use(async (req, res, next) => {
if (req.body && req.body.prompt) {
try {
const result = await lockllm.scan({
input: req.body.prompt,
sensitivity: "medium"
});
if (!result.safe) {
return res.status(400).json({
error: "Malicious input detected",
details: {
injection: result.injection,
confidence: result.confidence,
request_id: result.request_id
}
});
}
// Attach scan result to request for logging
req.scanResult = result;
next();
} catch (error) {
next(error);
}
} else {
next();
}
});
// Your routes
app.post('/api/chat', async (req, res) => {
// Request body already scanned by middleware
const response = await openai.chat.completions.create({
model: "gpt-4",
messages: [{ role: "user", content: req.body.prompt }]
});
res.json(response);
});
app.listen(3000);
Next.js Integration
App Router (Server Actions)
// app/actions.ts
'use server';
import { createOpenAI } from '@lockllm/sdk/wrappers';
import { PromptInjectionError } from '@lockllm/sdk';
const openai = createOpenAI({
apiKey: process.env.LOCKLLM_API_KEY!
});
export async function chatAction(userMessage: string) {
try {
const response = await openai.chat.completions.create({
model: "gpt-4",
messages: [{ role: "user", content: userMessage }]
});
return {
success: true,
message: response.choices[0].message.content
};
} catch (error) {
if (error instanceof PromptInjectionError) {
return {
success: false,
error: "Invalid input detected"
};
}
throw error;
}
}
API Routes
// app/api/chat/route.ts
import { NextRequest, NextResponse } from 'next/server';
import { createOpenAI } from '@lockllm/sdk/wrappers';
import { PromptInjectionError } from '@lockllm/sdk';
const openai = createOpenAI({
apiKey: process.env.LOCKLLM_API_KEY!
});
export async function POST(request: NextRequest) {
const { message } = await request.json();
try {
const response = await openai.chat.completions.create({
model: "gpt-4",
messages: [{ role: "user", content: message }]
});
return NextResponse.json({
message: response.choices[0].message.content
});
} catch (error) {
if (error instanceof PromptInjectionError) {
return NextResponse.json(
{ error: "Invalid input detected" },
{ status: 400 }
);
}
throw error;
}
}
Browser Usage
The SDK works in browsers with proper CORS configuration:
<!DOCTYPE html>
<html>
<head>
<title>LockLLM Browser Example</title>
</head>
<body>
<input type="text" id="prompt" placeholder="Enter your prompt">
<button onclick="sendMessage()">Send</button>
<div id="response"></div>
<script type="module">
import { LockLLM } from 'https://esm.sh/@lockllm/sdk';
const lockllm = new LockLLM({
apiKey: 'your-lockllm-api-key'
});
window.sendMessage = async function() {
const prompt = document.getElementById('prompt').value;
try {
const result = await lockllm.scan({
input: prompt,
sensitivity: "medium"
});
if (result.safe) {
document.getElementById('response').textContent = "Safe to proceed!";
} else {
document.getElementById('response').textContent = "Malicious input detected!";
}
} catch (error) {
console.error('Error:', error);
}
};
</script>
</body>
</html>
CORS Note: LockLLM API supports CORS for browser requests. API keys should be kept secure - consider proxying through your backend for production.
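A hedged sketch of the backend-proxy approach mentioned above: the browser sends the prompt to your server, which holds the LockLLM key and performs the scan. The route path and response shape are illustrative:
// server.ts (illustrative)
import express from 'express';
import { LockLLM } from '@lockllm/sdk';
const app = express();
app.use(express.json());
const lockllm = new LockLLM({ apiKey: process.env.LOCKLLM_API_KEY });
// The browser calls this endpoint instead of holding the LockLLM key
app.post('/api/scan', async (req, res) => {
try {
const result = await lockllm.scan({ input: req.body.prompt, sensitivity: 'medium' });
res.json({ safe: result.safe, request_id: result.request_id });
} catch (error) {
res.status(500).json({ error: 'Scan failed' });
}
});
app.listen(3000);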
Error Handling
LockLLM provides typed errors for comprehensive error handling:
Error Types
import {
LockLLMError, // Base error class
AuthenticationError, // 401 - Invalid API key
RateLimitError, // 429 - Rate limit exceeded
PromptInjectionError, // 400 - Malicious input detected
UpstreamError, // 502 - Provider API error
ConfigurationError, // 400 - Invalid configuration
NetworkError // 0 - Network/connection error
} from '@lockllm/sdk';
Complete Error Handling
import { createOpenAI } from '@lockllm/sdk/wrappers';
import {
PromptInjectionError,
AuthenticationError,
RateLimitError,
UpstreamError,
ConfigurationError,
NetworkError
} from '@lockllm/sdk';
const openai = createOpenAI({
apiKey: process.env.LOCKLLM_API_KEY
});
try {
const response = await openai.chat.completions.create({
model: "gpt-4",
messages: [{ role: "user", content: userInput }]
});
console.log(response.choices[0].message.content);
} catch (error) {
if (error instanceof PromptInjectionError) {
// Security threat detected
console.error("Malicious input detected!");
console.log("Injection score:", error.scanResult.injection);
console.log("Confidence:", error.scanResult.confidence);
console.log("Request ID:", error.requestId);
// Log security incident
await logSecurityEvent({
type: 'prompt_injection',
confidence: error.scanResult.confidence,
requestId: error.requestId,
timestamp: new Date(),
userInput: "[REDACTED]" // Never log actual malicious content
});
// Return user-friendly error
return {
error: "Your input could not be processed for security reasons."
};
} else if (error instanceof AuthenticationError) {
console.error("Invalid API key");
// Check your LOCKLLM_API_KEY environment variable
} else if (error instanceof RateLimitError) {
console.error("Rate limit exceeded");
console.log("Retry after (ms):", error.retryAfter);
// Wait and retry
if (error.retryAfter) {
await new Promise(resolve => setTimeout(resolve, error.retryAfter));
// Retry request...
}
} else if (error instanceof UpstreamError) {
console.error("Provider API error");
console.log("Provider:", error.provider);
console.log("Status:", error.upstreamStatus);
console.log("Message:", error.message);
// Handle provider-specific errors
if (error.provider === 'openai' && error.upstreamStatus === 429) {
// OpenAI rate limit
}
} else if (error instanceof ConfigurationError) {
console.error("Configuration error:", error.message);
// Check provider key is added in dashboard
} else if (error instanceof NetworkError) {
console.error("Network error:", error.message);
// Check internet connection, firewall, etc.
}
}
Exponential Backoff
Implement exponential backoff for transient errors:
async function callWithBackoff(fn: () => Promise<any>, maxRetries = 5) {
for (let attempt = 0; attempt < maxRetries; attempt++) {
try {
return await fn();
} catch (error) {
if (error instanceof RateLimitError ||
error instanceof NetworkError) {
const delay = Math.min(1000 * Math.pow(2, attempt), 30000);
console.log(`Retry attempt ${attempt + 1} after ${delay}ms`);
await new Promise(resolve => setTimeout(resolve, delay));
} else {
throw error;
}
}
}
throw new Error('Max retries exceeded');
}
// Usage
const response = await callWithBackoff(() =>
openai.chat.completions.create({
model: "gpt-4",
messages: [{ role: "user", content: userInput }]
})
);
TypeScript Support
Full TypeScript support with comprehensive type definitions:
Type Definitions
import {
LockLLM,
LockLLMConfig,
ScanRequest,
ScanResponse,
ScanResult,
Sensitivity,
ProviderName,
RequestOptions
} from '@lockllm/sdk';
// Configuration example
const config: LockLLMConfig = {
apiKey: process.env.LOCKLLM_API_KEY!, // required
baseURL: "https://api.lockllm.com", // optional: custom endpoint
timeout: 60000, // optional: request timeout (ms)
maxRetries: 3 // optional: max retry attempts
};
// Scan request example
const request: ScanRequest = {
input: "text to scan", // required
sensitivity: "medium" // optional: 'low' | 'medium' | 'high'
};
// Scan response shape (illustrative values)
const response: ScanResponse = {
safe: true, // boolean
label: 0, // 0 = safe, 1 = malicious
confidence: 98, // 0-100
injection: 2, // 0-100
sensitivity: "medium", // sensitivity level used
request_id: "req_123", // unique request identifier
usage: {
requests: 1, // inference requests used
input_chars: 42 // characters processed
},
debug: { // Pro plan only
duration_ms: 120, // total processing time
inference_ms: 85, // ML inference time
mode: "single" // 'single' | 'chunked'
}
};
Type Inference
TypeScript automatically infers types:
const lockllm = new LockLLM({ apiKey: '...' });
// Return type automatically inferred as Promise<ScanResponse>
const result = await lockllm.scan({
input: 'test',
sensitivity: 'medium'
});
// TypeScript knows these properties exist
console.log(result.safe); // boolean
console.log(result.injection); // number
console.log(result.request_id); // string
Generic Types
Provider wrappers return correctly typed instances:
import { createOpenAI, createAnthropic } from '@lockllm/sdk/wrappers';
import type OpenAI from 'openai';
import type Anthropic from '@anthropic-ai/sdk';
// Returns OpenAI instance with full types
const openai: OpenAI = createOpenAI({ apiKey: '...' });
// Returns Anthropic instance with full types
const anthropic: Anthropic = createAnthropic({ apiKey: '...' });
// Full autocomplete for all methods
const response = await openai.chat.completions.create({
model: "gpt-4", // TypeScript validates model names
messages: [ // TypeScript validates message structure
{ role: "user", content: "Hello" }
]
});
IDE Autocomplete
TypeScript provides full IDE autocomplete:
const lockllm = new LockLLM({ apiKey: '...' });
// IDE suggests: scan(...)
await lockllm.s // <-- autocomplete suggestions appear
// IDE suggests: input, sensitivity
await lockllm.scan({ i // <-- autocomplete for properties
// IDE suggests: "low", "medium", "high"
await lockllm.scan({
input: "test",
sensitivity: "m // <-- autocomplete for sensitivity values
});
API Reference
Classes
LockLLM
Main client class for scanning prompts.
class LockLLM {
constructor(config: LockLLMConfig)
async scan(request: ScanRequest, options?: RequestOptions): Promise<ScanResponse>
}
Interfaces
LockLLMConfig
interface LockLLMConfig {
apiKey: string // Your LockLLM API key (required)
baseURL?: string // Custom API endpoint (optional)
timeout?: number // Request timeout in ms (default: 60000)
maxRetries?: number // Max retry attempts (default: 3)
}
ScanRequest
interface ScanRequest {
input: string // Text to scan (required)
sensitivity?: 'low' | 'medium' | 'high' // Detection level (default: 'medium')
}
ScanResponse
interface ScanResponse {
safe: boolean // true if safe, false if malicious
label: 0 | 1 // 0=safe, 1=malicious
confidence: number // Confidence score 0-100
injection: number // Injection risk score 0-100
sensitivity: Sensitivity // Sensitivity level used
request_id: string // Unique request identifier
usage: {
requests: number // Inference requests used
input_chars: number // Characters processed
}
debug?: { // Pro plan only
duration_ms: number // Total processing time
inference_ms: number // ML inference time
mode: 'single' | 'chunked'
}
}
Functions
Wrapper Functions
function createOpenAI(config: GenericClientConfig): OpenAI
function createAnthropic(config: GenericClientConfig): Anthropic
function createGroq(config: GenericClientConfig): OpenAI
function createDeepSeek(config: GenericClientConfig): OpenAI
function createPerplexity(config: GenericClientConfig): OpenAI
function createMistral(config: GenericClientConfig): OpenAI
function createOpenRouter(config: GenericClientConfig): OpenAI
function createTogether(config: GenericClientConfig): OpenAI
function createXAI(config: GenericClientConfig): OpenAI
function createFireworks(config: GenericClientConfig): OpenAI
function createAnyscale(config: GenericClientConfig): OpenAI
function createHuggingFace(config: GenericClientConfig): OpenAI
function createGemini(config: GenericClientConfig): OpenAI
function createCohere(config: GenericClientConfig): Cohere
function createAzure(config: GenericClientConfig): OpenAI
function createBedrock(config: GenericClientConfig): OpenAI
function createVertexAI(config: GenericClientConfig): OpenAI
Utility Functions
function getProxyURL(provider: ProviderName): string
function getAllProxyURLs(): Record<ProviderName, string>
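A usage sketch for these helpers; the provider identifiers are the same lowercase names used by the wrappers (e.g. 'openai', as shown in the Quick Start):
import { getProxyURL, getAllProxyURLs } from '@lockllm/sdk';
// Proxy base URL for a single provider
const openaiProxy = getProxyURL('openai');
// Map of every supported provider to its proxy URL
const allProxies = getAllProxyURLs();
for (const [provider, url] of Object.entries(allProxies)) {
console.log(provider, url);
}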
Error Classes
class LockLLMError extends Error {
type: string
code?: string
status?: number
requestId?: string
}
class AuthenticationError extends LockLLMError {}
class RateLimitError extends LockLLMError {
retryAfter?: number
}
class PromptInjectionError extends LockLLMError {
scanResult: ScanResult
}
class UpstreamError extends LockLLMError {
provider?: string
upstreamStatus?: number
}
class ConfigurationError extends LockLLMError {}
class NetworkError extends LockLLMError {}
Best Practices
Security
- Never hardcode API keys - Use environment variables or secure vaults
- Log security incidents - Track blocked requests with request IDs
- Set appropriate sensitivity - Balance security needs with false positives
- Handle errors gracefully - Provide user-friendly error messages
- Monitor request patterns - Watch for attack trends in dashboard
- Rotate keys regularly - Update API keys periodically
- Use HTTPS only - Never send API keys over unencrypted connections
Performance
- Use wrapper functions - Most efficient integration method
- Implement caching - Cache LLM responses when appropriate (see the sketch after this list)
- Set reasonable timeouts - Balance user experience with reliability
- Monitor latency - Track P50, P95, P99 in production
- Use connection pooling - SDK handles this automatically
- Batch when possible - Group similar requests to reduce overhead
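A minimal in-memory caching sketch for the point above, assuming openai is a wrapped client created with createOpenAI. The cache key and TTL are illustrative, and caching assumes deterministic prompts (e.g. temperature 0):
// Illustrative response cache keyed by prompt text
const cache = new Map<string, { value: string; expires: number }>();
async function cachedCompletion(prompt: string, ttlMs = 60_000): Promise<string> {
const hit = cache.get(prompt);
if (hit && hit.expires > Date.now()) return hit.value; // reuse a recent answer
const response = await openai.chat.completions.create({
model: "gpt-4",
messages: [{ role: "user", content: prompt }],
temperature: 0 // deterministic output makes caching safer
});
const value = response.choices[0].message.content ?? '';
cache.set(prompt, { value, expires: Date.now() + ttlMs });
return value;
}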
Production Deployment
- Test sensitivity levels - Validate with real user data
- Implement monitoring - Track blocked requests and false positives
- Set up alerting - Get notified of security incidents
- Review logs regularly - Analyze attack patterns
- Keep SDK updated - Benefit from latest improvements
- Document incidents - Maintain security incident log
- Load test - Verify performance under expected load
Migration Guides
From Direct API Integration
If you're currently calling LLM APIs directly:
// Before: Direct OpenAI API call
import OpenAI from 'openai';
const openai = new OpenAI({
apiKey: process.env.OPENAI_API_KEY
});
const response = await openai.chat.completions.create({
model: "gpt-4",
messages: [{ role: "user", content: userInput }]
});
// After: With LockLLM security (one line change)
import { createOpenAI } from '@lockllm/sdk/wrappers';
const openai = createOpenAI({
apiKey: process.env.LOCKLLM_API_KEY // Only change
});
// Everything else stays the same
const response = await openai.chat.completions.create({
model: "gpt-4",
messages: [{ role: "user", content: userInput }]
});
From OpenAI SDK
Minimal changes required:
// Before
import OpenAI from 'openai';
const openai = new OpenAI({
apiKey: process.env.OPENAI_API_KEY
});
// After
import { createOpenAI } from '@lockllm/sdk/wrappers';
const openai = createOpenAI({
apiKey: process.env.LOCKLLM_API_KEY // Use LockLLM key
});
// All other code remains unchanged
From Anthropic SDK
Minimal changes required:
// Before
import Anthropic from '@anthropic-ai/sdk';
const anthropic = new Anthropic({
apiKey: process.env.ANTHROPIC_API_KEY
});
// After
import { createAnthropic } from '@lockllm/sdk/wrappers';
const anthropic = createAnthropic({
apiKey: process.env.LOCKLLM_API_KEY // Use LockLLM key
});
// All other code remains unchanged
Troubleshooting
Common Issues
"Invalid API key" error (401)
- Verify your LockLLM API key is correct
- Check the key hasn't been revoked in the dashboard
- Ensure you're using your LockLLM key, not your provider key
"No provider API key configured" error (400)
- Add your provider API key (OpenAI, Anthropic, etc.) in the dashboard
- Navigate to Proxy Settings and configure provider keys
- Ensure the provider key is enabled (toggle switch on)
"Could not extract prompt from request" error (400)
- Verify request body format matches provider API spec
- Check you're using the correct SDK version
- Ensure messages array is properly formatted
High latency
- Check your network connection
- Verify LockLLM API status at status.lockllm.com
- Consider adjusting timeout settings
- Review provider API latency
TypeScript errors
- Ensure TypeScript 4.5+ is installed
- Check peer dependencies are installed (openai, @anthropic-ai/sdk)
- Run npm install --save-dev @types/node
- Verify tsconfig.json includes proper settings (see the example below)
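As an example, a tsconfig.json along these lines is typically sufficient; exact options depend on your build setup:
{
"compilerOptions": {
"target": "ES2020",
"module": "ESNext",
"moduleResolution": "node",
"esModuleInterop": true,
"strict": true,
"types": ["node"]
}
}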
Debugging Tips
Enable verbose logging:
const lockllm = new LockLLM({
apiKey: process.env.LOCKLLM_API_KEY
});
try {
const result = await lockllm.scan({
input: userPrompt
});
console.log('Scan result:', {
safe: result.safe,
injection: result.injection,
confidence: result.confidence,
request_id: result.request_id
});
} catch (error) {
console.error('Error details:', {
type: error.type,
code: error.code,
message: error.message,
requestId: error.requestId,
status: error.status
});
}
Getting Help
- Documentation: https://www.lockllm.com/docs
- GitHub Issues: https://github.com/lockllm/lockllm-npm/issues
- Email Support: [email protected]
- Status Page: status.lockllm.com
FAQ
How do I install the SDK?
Install using npm, yarn, pnpm, or bun:
npm install @lockllm/sdk
yarn add @lockllm/sdk
pnpm add @lockllm/sdk
bun add @lockllm/sdk
The SDK requires Node.js 14+ and works in Node.js, browsers, and Edge runtimes.
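As a hedged illustration of Edge usage, a fetch-style handler (the standard Request/Response pattern used by Vercel Edge Functions and similar runtimes) can call the scan API directly; how environment variables are read varies by platform:
import { LockLLM } from '@lockllm/sdk';
const lockllm = new LockLLM({ apiKey: process.env.LOCKLLM_API_KEY });
export default async function handler(request: Request): Promise<Response> {
const { prompt } = await request.json();
const result = await lockllm.scan({ input: prompt, sensitivity: 'medium' });
const body = result.safe
? { safe: true, request_id: result.request_id }
: { error: 'Invalid input detected' };
return new Response(JSON.stringify(body), {
status: result.safe ? 200 : 400,
headers: { 'Content-Type': 'application/json' }
});
}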
Does the SDK work as a drop-in replacement for OpenAI and Anthropic?
Yes. Use createOpenAI() or createAnthropic() to get wrapped clients that work exactly like the official SDKs. All methods, streaming, and function calling are supported. Prompts are automatically scanned before being sent to the provider.
What TypeScript support is available?
The SDK is TypeScript-first with comprehensive type definitions. It includes full type safety, generics, type inference, and IDE autocomplete for all APIs. TypeScript 4.5+ is supported.
Which AI providers are supported?
17+ providers are supported: OpenAI, Anthropic, Groq, DeepSeek, Perplexity, Mistral, OpenRouter, Together AI, xAI (Grok), Fireworks AI, Anyscale, Hugging Face, Google Gemini, Cohere, Azure OpenAI, AWS Bedrock, and Google Vertex AI. All providers support custom endpoint URLs for self-hosted and private deployments.
How do I handle errors?
The SDK provides 7 typed error classes: AuthenticationError, RateLimitError, PromptInjectionError, UpstreamError, ConfigurationError, NetworkError, and base LockLLMError. Use try-catch blocks with instanceof checks for proper error handling.
Does the SDK support streaming?
Yes. All provider wrappers fully support streaming responses. Use stream: true in your requests and iterate with for await...of loops.
Is it really free?
Yes. LockLLM is free to use with unlimited usage; the free tier allows up to 1,000 requests per minute. You bring your own provider API keys (BYOK model), so you only pay your provider (OpenAI, Anthropic, etc.) for LLM usage.
GitHub Repository
View the source code, report issues, and contribute:
Repository: https://github.com/lockllm/lockllm-npm
NPM Package: https://www.npmjs.com/package/@lockllm/sdk
Next Steps
- View JavaScript/TypeScript SDK integration page for more examples
- Read API Reference for REST API details
- Explore Best Practices for production deployments
- Check out Proxy Mode for alternative integration
- Configure Webhooks for security alerts
- Browse Dashboard documentation