JavaScript/TypeScript SDK
Overview
The LockLLM JavaScript/TypeScript SDK provides native integration for prompt injection protection. With drop-in replacements for 17+ AI providers (custom endpoint support for each), full TypeScript support, and zero external dependencies, the SDK makes it easy to add security to JavaScript applications without changing existing code.
Installation
Install using your preferred package manager:
# npm
npm install @lockllm/sdk
# yarn
yarn add @lockllm/sdk
# pnpm
pnpm add @lockllm/sdk
# bun
bun add @lockllm/sdk
Quick Start
Replace your SDK initialization with LockLLM's wrapper:
import { createOpenAI } from '@lockllm/sdk/wrappers';
// Before:
// const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
// After (one line change):
const openai = createOpenAI({
  apiKey: process.env.LOCKLLM_API_KEY
});
// Everything else works the same
const response = await openai.chat.completions.create({
  model: "gpt-4",
  messages: [{ role: "user", content: userInput }]
});
console.log(response.choices[0].message.content);
Provider Wrappers
Pre-built wrappers for 17+ providers work as drop-in replacements with custom endpoint support:
import {
  createOpenAI,      // OpenAI GPT models
  createAnthropic,   // Anthropic Claude
  createGroq,        // Groq LPU inference
  createDeepSeek,    // DeepSeek models
  createPerplexity,  // Perplexity (with internet)
  createMistral,     // Mistral AI
  createOpenRouter,  // OpenRouter (multi-provider)
  createTogether,    // Together AI
  createXAI,         // xAI Grok
  createFireworks,   // Fireworks AI
  createAnyscale,    // Anyscale Endpoints
  createHuggingFace, // Hugging Face Inference
  createGemini,      // Google Gemini
  createCohere,      // Cohere
  createAzure,       // Azure OpenAI
  createBedrock,     // AWS Bedrock
  createVertexAI     // Google Vertex AI
} from '@lockllm/sdk/wrappers';
Anthropic Example
import { createAnthropic } from '@lockllm/sdk/wrappers';
const anthropic = createAnthropic({
  apiKey: process.env.LOCKLLM_API_KEY
});
const message = await anthropic.messages.create({
  model: "claude-opus-4-5",
  max_tokens: 1024,
  messages: [{ role: "user", content: userInput }]
});
console.log(message.content[0].text);
Streaming Support
All providers support streaming responses:
const stream = await openai.chat.completions.create({
  model: "gpt-4",
  messages: [{ role: "user", content: "Write a story" }],
  stream: true
});
for await (const chunk of stream) {
  const content = chunk.choices[0]?.delta?.content;
  if (content) {
    process.stdout.write(content);
  }
}
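The same pattern works with the other wrappers. As a minimal sketch, assuming the wrapped Anthropic client mirrors the official Anthropic SDK's streaming event shape (text arrives in content_block_delta events):
const stream = await anthropic.messages.create({
  model: "claude-opus-4-5",
  max_tokens: 1024,
  messages: [{ role: "user", content: "Write a story" }],
  stream: true
});
for await (const event of stream) {
  // Only text deltas carry printable content in the Anthropic event stream
  if (event.type === "content_block_delta" && event.delta.type === "text_delta") {
    process.stdout.write(event.delta.text);
  }
}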
Express.js Integration
Protect Express endpoints from prompt injection:
import express from 'express';
import { LockLLM } from '@lockllm/sdk';
const app = express();
app.use(express.json()); // parse JSON bodies so req.body is populated
const lockllm = new LockLLM({ apiKey: process.env.LOCKLLM_API_KEY });
app.use(async (req, res, next) => {
  if (req.body?.prompt) {
    const result = await lockllm.scan({
      input: req.body.prompt,
      sensitivity: "medium"
    });
    if (!result.safe) {
      return res.status(400).json({
        error: "Malicious input detected",
        request_id: result.request_id
      });
    }
  }
  next();
});
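With the middleware in place, downstream routes can call the model as usual. A minimal sketch of a chat route, assuming an openai client created with createOpenAI as in the Quick Start:
app.post('/chat', async (req, res) => {
  // req.body.prompt has already been screened by the middleware above
  const response = await openai.chat.completions.create({
    model: "gpt-4",
    messages: [{ role: "user", content: req.body.prompt }]
  });
  res.json({ message: response.choices[0].message.content });
});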
Next.js App Router
Use with Next.js server actions:
// app/actions.ts
'use server';
import { createOpenAI } from '@lockllm/sdk/wrappers';
import { PromptInjectionError } from '@lockllm/sdk';
const openai = createOpenAI({
  apiKey: process.env.LOCKLLM_API_KEY!
});
export async function chatAction(message: string) {
  try {
    const response = await openai.chat.completions.create({
      model: "gpt-4",
      messages: [{ role: "user", content: message }]
    });
    return { success: true, message: response.choices[0].message.content };
  } catch (error) {
    // `error` is `unknown` in TypeScript; narrow with the SDK's typed error
    if (error instanceof PromptInjectionError) {
      return { success: false, error: "Invalid input" };
    }
    throw error;
  }
}
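On the client side, the action is called like any server action. A minimal sketch of a hypothetical client component; the file path and form markup are illustrative, assuming React 19-style form actions:
// app/chat-form.tsx (hypothetical)
'use client';
import { useState } from 'react';
import { chatAction } from './actions';
export function ChatForm() {
  const [reply, setReply] = useState('');
  async function onSubmit(formData: FormData) {
    const result = await chatAction(String(formData.get('message') ?? ''));
    // 'message' in result narrows the union returned by chatAction
    setReply('message' in result ? (result.message ?? '') : result.error);
  }
  return (
    <form action={onSubmit}>
      <input name="message" />
      <button type="submit">Send</button>
      <p>{reply}</p>
    </form>
  );
}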
Error Handling
Comprehensive typed error handling:
import {
  PromptInjectionError,
  AuthenticationError,
  RateLimitError
} from '@lockllm/sdk';
try {
  const response = await openai.chat.completions.create({
    model: "gpt-4",
    messages: [{ role: "user", content: userInput }]
  });
} catch (error) {
  if (error instanceof PromptInjectionError) {
    console.log("Malicious input blocked");
    console.log("Injection score:", error.scanResult.injection);
    console.log("Request ID:", error.requestId);
  } else if (error instanceof AuthenticationError) {
    console.log("Invalid API key");
  } else if (error instanceof RateLimitError) {
    console.log("Rate limit exceeded");
    console.log("Retry after (ms):", error.retryAfter);
  }
}
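The retryAfter field makes a simple backoff loop straightforward. A minimal sketch, assuming retryAfter is always populated in milliseconds as shown above:
import { RateLimitError } from '@lockllm/sdk';
// Retries a request when the SDK signals a rate limit; rethrows anything else
async function withRetry<T>(fn: () => Promise<T>, attempts = 3): Promise<T> {
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (error) {
      if (error instanceof RateLimitError && i < attempts - 1) {
        await new Promise((resolve) => setTimeout(resolve, error.retryAfter));
        continue;
      }
      throw error;
    }
  }
  throw new Error('unreachable');
}
Wrap any call, for example withRetry(() => lockllm.scan({ input, sensitivity: 'medium' })).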
Configuration
Adjust security sensitivity per use case:
import { LockLLM } from '@lockllm/sdk';
const lockllm = new LockLLM({
  apiKey: process.env.LOCKLLM_API_KEY
});
// High sensitivity for critical systems
const highResult = await lockllm.scan({
  input: userPrompt,
  sensitivity: "high" // "low" | "medium" | "high"
});
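The scan result drives the allow/block decision. A short sketch using the response fields documented in the TypeScript section below:
// Branch on the scan verdict before forwarding the prompt to a model
if (highResult.safe) {
  // safe to send userPrompt to your LLM provider
} else {
  console.log("Blocked; injection score:", highResult.injection);
  console.log("Request ID for audit logs:", highResult.request_id);
}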
TypeScript Support
Full type safety with comprehensive type definitions:
import {
  LockLLM,
  LockLLMConfig,
  ScanResponse,
  ProviderName
} from '@lockllm/sdk';
// Type inference works automatically
const config: LockLLMConfig = {
  apiKey: '...',
  timeout: 30000
};
const lockllm = new LockLLM(config);
const result: ScanResponse = await lockllm.scan({
  input: 'test',
  sensitivity: 'medium'
});
// TypeScript validates all properties
console.log(result.safe); // boolean
console.log(result.injection); // number
console.log(result.request_id); // string
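These types compose into small helpers. A sketch of a typed guard function, assuming only the documented scan API:
// Returns true when LockLLM considers the input safe at the given sensitivity
async function isSafe(
  client: LockLLM,
  input: string,
  sensitivity: 'low' | 'medium' | 'high' = 'medium'
): Promise<boolean> {
  const result: ScanResponse = await client.scan({ input, sensitivity });
  return result.safe;
}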
Supported Providers
The SDK provides drop-in wrappers for 17+ providers with custom endpoint support:
- OpenAI - GPT models and compatible APIs
- Anthropic - Claude models
- Groq - Llama and Mixtral models
- DeepSeek - DeepSeek models
- Perplexity - Search-augmented models
- Mistral AI - Mistral models
- Google Gemini - Gemini models
- Cohere - Command and Embed models
- Azure OpenAI - Azure-hosted models
- OpenRouter - Multi-provider gateway
- Together AI - Open-source models
- xAI - Grok models
- Fireworks AI - Serverless inference
- Anyscale - Ray-powered serving
- Hugging Face - Inference API
- AWS Bedrock - Foundation models
- Google Vertex AI - Vertex AI models
All providers support custom endpoint URLs for self-hosted deployments, Azure resources, and private clouds. Configure custom endpoints in the dashboard.
FAQ
How do I install the JavaScript/TypeScript SDK?
Install via npm with npm install @lockllm/sdk, yarn with yarn add @lockllm/sdk, pnpm with pnpm add @lockllm/sdk, or bun with bun add @lockllm/sdk. The SDK requires Node.js 14+ and works in both Node.js and browser environments.
Does the SDK work as a drop-in replacement for OpenAI and Anthropic?
Yes. Use createOpenAI() or createAnthropic() to get wrapped clients that work exactly like the official SDKs. All methods, streaming, and function calling are supported.
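As the answer notes, function calling is supported. A minimal sketch with an OpenAI-style tool definition; the get_weather tool is hypothetical:
const response = await openai.chat.completions.create({
  model: "gpt-4",
  messages: [{ role: "user", content: "What's the weather in Paris?" }],
  tools: [{
    type: "function",
    function: {
      name: "get_weather", // hypothetical tool, for illustration only
      description: "Get current weather for a city",
      parameters: {
        type: "object",
        properties: { city: { type: "string" } },
        required: ["city"]
      }
    }
  }]
});
console.log(response.choices[0].message.tool_calls);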
What TypeScript support is available?
The SDK is TypeScript-first with comprehensive type definitions. It includes full type safety, generics, type inference, and IDE autocomplete for all APIs.
Which AI providers are supported?
17+ providers are supported: OpenAI, Anthropic, Groq, DeepSeek, Perplexity, Mistral, OpenRouter, Together AI, xAI, Fireworks AI, Anyscale, Hugging Face, Google Gemini, Cohere, Azure OpenAI, AWS Bedrock, and Google Vertex AI. All providers support custom endpoint URLs for self-hosted and private deployments.
Does the SDK support streaming responses?
Yes. All provider wrappers fully support streaming responses. Use stream: true in your requests and iterate with for await...of loops.
Is the SDK really free?
Yes. LockLLM is completely free with unlimited usage. You only pay your chosen LLM provider for API costs. The SDK itself has zero cost.