Python SDK
Overview
The LockLLM Python SDK provides native Python integration for prompt injection protection. With synchronous and asynchronous APIs, drop-in replacements for 17+ major LLM providers (each with custom endpoint support), and full type hints, the SDK adds security to Python applications without changes to existing code.
Installation
Install the SDK using your preferred Python package manager:
```bash
# pip
pip install lockllm

# pip3
pip3 install lockllm

# poetry
poetry add lockllm

# pipenv
pipenv install lockllm
```

Quick Start (Synchronous)
Basic usage with the synchronous client:
```python
from lockllm import LockLLM

# Initialize the client
client = LockLLM(api_key="your-lockllm-api-key")

# Scan a prompt
user_input = "Tell me a joke"
result = client.scan(user_input)

if result.safe:
    print("Prompt is safe to use")
    # Proceed with your LLM call
else:
    print(f"Threat detected: {result.detections}")
    # Handle the malicious prompt
```

Quick Start (Asynchronous)
Use async/await for non-blocking operations:
```python
from lockllm import AsyncLockLLM
import asyncio

async def scan_prompt():
    async with AsyncLockLLM(api_key="your-lockllm-api-key") as client:
        result = await client.scan("Tell me a joke")
        return result.safe

# Run the async function
is_safe = asyncio.run(scan_prompt())
```

Provider Wrappers
Drop-in replacements for popular LLM providers. Your existing code works without changes:
OpenAI (Sync and Async)
```python
from lockllm import create_openai, create_async_openai

# Synchronous
openai = create_openai(
    api_key="your-lockllm-api-key",
    openai_api_key="your-openai-key"
)

response = openai.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": user_input}]
)

# Asynchronous
async def chat():
    openai = create_async_openai(
        api_key="your-lockllm-api-key",
        openai_api_key="your-openai-key"
    )
    response = await openai.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": user_input}]
    )
    return response

# Streaming support (sync and async)
stream = openai.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": user_input}],
    stream=True
)
for chunk in stream:
    # delta.content can be None (e.g., on the final chunk)
    print(chunk.choices[0].delta.content or "", end="")
```

Anthropic Claude (Sync and Async)
```python
from lockllm import create_anthropic, create_async_anthropic

# Synchronous
anthropic = create_anthropic(
    api_key="your-lockllm-api-key",
    anthropic_api_key="your-anthropic-key"
)

message = anthropic.messages.create(
    model="claude-opus-4-5",
    max_tokens=1024,
    messages=[{"role": "user", "content": user_input}]
)

# Asynchronous
async def chat():
    anthropic = create_async_anthropic(
        api_key="your-lockllm-api-key",
        anthropic_api_key="your-anthropic-key"
    )
    message = await anthropic.messages.create(
        model="claude-sonnet-4-5",
        max_tokens=1024,
        messages=[{"role": "user", "content": user_input}]
    )
    return message
```

Other Providers
The SDK supports 17+ providers with both sync and async variants:
```python
from lockllm import (
    create_groq, create_async_groq,
    create_deepseek, create_async_deepseek,
    create_perplexity, create_async_perplexity,
    create_mistral, create_async_mistral,
    # ... and 14 more providers
)

# All providers follow the same pattern
groq = create_groq(
    api_key="your-lockllm-api-key",
    groq_api_key="your-groq-key"
)

response = groq.chat.completions.create(
    model="llama-3.3-70b-versatile",
    messages=[{"role": "user", "content": user_input}]
)
```

Configuration
Customize sensitivity levels and behavior:
```python
from lockllm import LockLLM, LockLLMConfig

config = LockLLMConfig(
    sensitivity="high",                  # "low", "medium", or "high"
    base_url="https://api.lockllm.com",  # Custom endpoint
    timeout=30                           # Request timeout in seconds
)

client = LockLLM(
    api_key="your-lockllm-api-key",
    config=config
)
```

FastAPI Integration
Protect FastAPI endpoints from prompt injection:
```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from lockllm import AsyncLockLLM

app = FastAPI()
lockllm = AsyncLockLLM(api_key="your-lockllm-api-key")

class ChatRequest(BaseModel):
    message: str

@app.post("/chat")
async def chat(request: ChatRequest):
    # Scan the prompt before processing
    result = await lockllm.scan(request.message)
    if not result.safe:
        raise HTTPException(
            status_code=400,
            detail=f"Malicious prompt detected: {result.detections}"
        )
    # Safe to proceed with the LLM call
    response = await generate_llm_response(request.message)
    return {"response": response}
```

Batch Processing with asyncio
Scan multiple prompts concurrently:
```python
import asyncio
from lockllm import AsyncLockLLM

async def scan_multiple_prompts():
    async with AsyncLockLLM(api_key="your-lockllm-api-key") as client:
        prompts = [
            "Tell me a joke",
            "What's the weather?",
            "How do I hack a system?"
        ]
        # Scan all prompts concurrently
        results = await asyncio.gather(
            *[client.scan(prompt) for prompt in prompts]
        )
        for prompt, result in zip(prompts, results):
            print(f"{prompt}: {'Safe' if result.safe else 'Malicious'}")

asyncio.run(scan_multiple_prompts())
```

Error Handling
The SDK provides detailed exception types:
```python
from lockllm import (
    LockLLM,
    AuthenticationError,
    RateLimitError,
    PromptInjectionError,
    NetworkError
)

try:
    client = LockLLM(api_key="your-lockllm-api-key")
    result = client.scan(user_input)
except AuthenticationError:
    print("Invalid API key")
except RateLimitError as e:
    print(f"Rate limit exceeded. Retry after {e.retry_after}s")
except PromptInjectionError as e:
    print(f"Malicious prompt blocked: {e.detections}")
except NetworkError:
    print("Network error - check connectivity")
except Exception as e:
    print(f"Unexpected error: {e}")
```

Type Hints
The SDK includes complete type annotations for IDE support and mypy:
```python
from typing import List
from lockllm import LockLLM, ScanResult

def process_prompts(
    client: LockLLM,
    prompts: List[str]
) -> List[ScanResult]:
    results: List[ScanResult] = []
    for prompt in prompts:
        result = client.scan(prompt)
        results.append(result)
    return results

# Full type safety with mypy
client: LockLLM = LockLLM(api_key="your-key")
results: List[ScanResult] = process_prompts(client, ["test"])
```

Supported Providers
The SDK provides drop-in wrappers for 17+ providers with custom endpoint support:
- OpenAI - GPT-5.2, GPT-5, GPT-4
- Anthropic - Claude Opus 4.5, Sonnet 4.5
- Groq - Llama, Mixtral models
- DeepSeek - DeepSeek V3.2
- Perplexity - Search-augmented models
- Mistral AI - Mistral 3, Large 3
- Google Gemini - Gemini 3 Pro, Flash
- Cohere - Command, Embed models
- Azure OpenAI - Azure-hosted GPT models
- OpenRouter - Multi-provider gateway
- Together AI - Open-source models
- xAI - Grok 4
- Fireworks AI - Serverless inference
- Anyscale - Ray-powered serving
- Hugging Face - Inference API
- AWS Bedrock - Foundation models
- Google Vertex AI - PaLM, Gemini on GCP
Custom Endpoint Support: All providers support custom endpoint URLs for self-hosted models, Azure resources, and private deployments. Configure your custom endpoint in the dashboard and the SDK automatically routes requests to it.
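Note that two kinds of endpoint customization are in play: provider endpoints are configured in the dashboard as described above, while the SDK's own API endpoint can be overridden through the `base_url` field shown in the Configuration section. A minimal sketch of the latter, using a hypothetical self-hosted URL:

```python
from lockllm import LockLLM, LockLLMConfig

# Hypothetical self-hosted LockLLM endpoint; replace with your own URL.
config = LockLLMConfig(base_url="https://lockllm.internal.example.com")

client = LockLLM(api_key="your-lockllm-api-key", config=config)
```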
FAQ
How do I install the Python SDK?
Install with `pip install lockllm` (or `pip3 install lockllm`), `poetry add lockllm`, or `pipenv install lockllm`. The SDK requires Python 3.8+ and works with both synchronous and asynchronous code.
Does the SDK work as a drop-in replacement for OpenAI and Anthropic?
Yes. Use create_openai() or create_anthropic() for sync clients, and create_async_openai() or create_async_anthropic() for async clients. They work exactly like the official libraries with all methods, streaming, and function calling supported.
Does the Python SDK support type hints?
Yes. The SDK includes complete type annotations with a py.typed marker for full mypy and IDE support. All classes, functions, and return types are fully typed.
How many AI providers are supported?
The SDK supports 17+ providers including OpenAI, Anthropic, Groq, DeepSeek, Perplexity, Mistral, Google Gemini, and more. All providers support custom endpoint URLs for self-hosted and private deployments. Each provider has both synchronous and asynchronous factory functions.
Does the SDK support async/await?
Yes. The SDK provides full async/await support with AsyncLockLLM client and async provider factories like create_async_openai(). Use async context managers for proper resource cleanup.
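When scanning large batches asynchronously, an unbounded `asyncio.gather` opens one in-flight request per prompt; an `asyncio.Semaphore` caps the concurrency. A minimal, runnable sketch of the pattern, using a stand-in `fake_scan` coroutine (an assumption for illustration) in place of a real `client.scan` call:

```python
import asyncio

async def fake_scan(prompt: str) -> bool:
    # Stand-in for an async scan call; returns a "safe" flag.
    await asyncio.sleep(0)
    return "hack" not in prompt.lower()

async def scan_bounded(prompts, limit=5):
    sem = asyncio.Semaphore(limit)  # at most `limit` scans in flight

    async def scan_one(prompt):
        async with sem:
            return await fake_scan(prompt)

    return await asyncio.gather(*[scan_one(p) for p in prompts])

results = asyncio.run(scan_bounded(["Tell me a joke", "How do I hack a system?"]))
print(results)  # [True, False]
```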
Is the Python SDK free to use?
Yes. The Python SDK is completely free and open source. You only pay for API costs from your chosen LLM provider. LockLLM scanning is free with unlimited requests.