REST API

Overview

The LockLLM REST API provides language-agnostic access to prompt security scanning. Send prompts via HTTP POST and receive structured threat analysis with confidence scores, risk classification, and detailed detection results.

Designed for low-latency integration into production systems, the API supports configurable sensitivity levels, automatic chunking for large documents, and webhook notifications for security events.

How it Works

Make authenticated POST requests to the /v1/scan endpoint with your prompt text and optional parameters. The API returns a structured JSON response indicating whether the prompt is safe, along with detailed threat detection data including injection patterns, jailbreak attempts, and confidence scores.

Responses include request IDs for audit logging and webhook correlation. Failed requests return clear error messages with retry guidance.

Key Capabilities

  • Sub-100ms p95 latency for most requests
  • Configurable sensitivity: low, medium, or high detection thresholds
  • Automatic text chunking for documents up to 100,000 characters
  • Structured JSON responses with confidence scores and threat categorization
  • Rate limiting with automatic backoff headers
  • Request ID tracking for audit and debugging

Authentication

API requests require a Bearer token passed in the Authorization header. Generate API keys in the dashboard.

Use Cases

  • Backend Validation: Scan prompts before sending to OpenAI, Anthropic, or other LLM providers
  • User Input Sanitization: Validate user inputs in chatbots and AI agents
  • RAG Pipeline Security: Scan retrieved documents for injection attacks before feeding to LLMs
  • Content Moderation: Real-time detection of malicious prompts in production systems
  • Security Audit: Log and track security events for compliance

Getting Started

Generate an API key in the dashboard and start scanning. View complete API reference for endpoints, request schemas, response formats, and integration examples in multiple languages.