Fireworks AI

Overview

Protect your Fireworks AI integration with LockLLM's transparent proxy layer: point your OpenAI-compatible client at the proxy endpoint and every request is scanned for prompt injection before it reaches Fireworks' serverless models.

Quick Start

import OpenAI from 'openai'

const client = new OpenAI({
  baseURL: 'https://api.lockllm.com/v1/proxy/fireworks',
  apiKey: process.env.LOCKLLM_API_KEY // Your LockLLM API key
})

const response = await client.chat.completions.create({
  model: 'accounts/fireworks/models/llama-v3-70b-instruct',
  messages: [{ role: 'user', content: userInput }]
})
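When the proxy's scanner flags a request, the call will fail rather than reach Fireworks, so it is worth separating "blocked by LockLLM" from ordinary API errors. The sketch below shows one way to do that. Note the error shape and status code LockLLM returns for a blocked prompt are not specified here; the `403` check and the `safeChat` helper are illustrative assumptions, not documented behavior.

```javascript
// ASSUMPTION: a blocked prompt surfaces as an HTTP 403 from the proxy.
// Check LockLLM's error reference for the actual status/shape.
function isLikelyBlockedByProxy(err) {
  return Boolean(err && err.status === 403)
}

// Hypothetical wrapper: returns the completion, or null if the proxy
// blocked the prompt; rethrows anything else (network errors, 5xx, ...).
async function safeChat(client, model, userInput) {
  try {
    return await client.chat.completions.create({
      model,
      messages: [{ role: 'user', content: userInput }]
    })
  } catch (err) {
    if (isLikelyBlockedByProxy(err)) {
      return null // treat a blocked prompt as "no completion"
    }
    throw err
  }
}
```

In a handler you might then respond with a user-facing "request rejected" message when `safeChat` returns null, instead of surfacing a raw API error.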

Features

  • Serverless Platform: Works with Fireworks' serverless infrastructure unchanged
  • Model Variety: Access the open-source models hosted on Fireworks through one proxy endpoint
  • Fast Deployment: Swap the base URL; Fireworks' serving performance is preserved
  • Automatic Scanning: Every request is checked for prompt injection in real time

Getting Started

Generate a LockLLM API key in the dashboard, set it as LOCKLLM_API_KEY in your environment, and see the documentation for full configuration options.