Groq

Overview

Protect your Groq integration with LockLLM's transparent proxy layer. Route all Groq requests through LockLLM for automatic prompt injection scanning, with no changes to your application logic. Compatible with Llama, Mixtral, and every other model hosted on Groq's Language Processing Unit (LPU) infrastructure.

How It Works

Update your Groq client's base URL to route requests through the LockLLM proxy. Each request is intercepted, its prompts are scanned for security threats in real time, and safe requests are forwarded to Groq unchanged. The proxy adds minimal overhead, so Groq's low-latency inference is preserved.
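
In practice, the only differences from a direct Groq client are the base URL and the API key. A minimal before-and-after sketch (environment variable names are illustrative):

import Groq from 'groq-sdk'

// Direct to Groq: requests are not scanned
const direct = new Groq({
  apiKey: process.env.GROQ_API_KEY
})

// Through LockLLM: prompts are scanned before being forwarded to Groq
const proxied = new Groq({
  baseURL: 'https://api.lockllm.com/v1/proxy/groq',
  apiKey: process.env.LOCKLLM_API_KEY
})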

Quick Start

import Groq from 'groq-sdk'

const client = new Groq({
  baseURL: 'https://api.lockllm.com/v1/proxy/groq',
  apiKey: process.env.LOCKLLM_API_KEY // Your LockLLM API key
})

// All requests are automatically scanned before being forwarded to Groq
const response = await client.chat.completions.create({
  model: 'llama-3.3-70b-versatile',
  messages: [{ role: 'user', content: userInput }]
})
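
When a prompt is flagged, the proxy does not forward it to Groq. The exact rejection format is not covered here; the sketch below assumes a flagged request surfaces through the Groq SDK as an APIError with a client-error status, so treat the status code and message as placeholders:

import Groq from 'groq-sdk'

try {
  // `client` is the proxied client from the Quick Start above
  const response = await client.chat.completions.create({
    model: 'llama-3.3-70b-versatile',
    messages: [{ role: 'user', content: userInput }]
  })
  console.log(response.choices[0].message.content)
} catch (err) {
  if (err instanceof Groq.APIError) {
    // Assumption: blocked prompts are rejected with a non-2xx response;
    // check the LockLLM documentation for the actual status and body
    console.error(`Request rejected (${err.status}): ${err.message}`)
  } else {
    throw err
  }
}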

Features

  • Automatic Scanning: Every prompt is scanned, with no code changes beyond the base URL
  • LPU Optimization: Safe requests are forwarded to Groq's LPU infrastructure unchanged
  • Model Support: Works with Llama, Mixtral, and all Groq-hosted models
  • Streaming Support: Compatible with streaming responses (see the sketch after this list)
  • Low Latency: Minimal scanning overhead on top of already fast inference
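
Streaming works as with a direct Groq client; since scanning applies to the request prompt, the response stream should pass through unchanged. A minimal sketch, reusing the client from the Quick Start (the model name is illustrative):

const stream = await client.chat.completions.create({
  model: 'llama-3.3-70b-versatile',
  messages: [{ role: 'user', content: userInput }],
  stream: true
})

for await (const chunk of stream) {
  // Each chunk carries an incremental delta of the completion text
  process.stdout.write(chunk.choices[0]?.delta?.content ?? '')
}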

Getting Started

Generate an API key in the LockLLM dashboard and see the documentation for complete setup guides.