Anyscale

Overview

Protect your Anyscale Endpoints integration with LockLLM's transparent proxy layer, which adds automatic prompt injection scanning to your Ray-powered LLM serving infrastructure.

Quick Start

import OpenAI from 'openai'

// Point the OpenAI SDK at LockLLM's Anyscale proxy; requests are scanned
// before being forwarded to Anyscale Endpoints.
const client = new OpenAI({
  baseURL: 'https://api.lockllm.com/v1/proxy/anyscale',
  apiKey: process.env.LOCKLLM_API_KEY // Your LockLLM API key
})

const response = await client.chat.completions.create({
  model: 'meta-llama/Meta-Llama-3-70B-Instruct',
  messages: [{ role: 'user', content: userInput }] // userInput: untrusted end-user text
})
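
If a scan flags a request, the proxy presumably rejects it rather than forwarding it to Anyscale. The exact rejection format isn't documented in this section, so the sketch below is an assumption: it continues the Quick Start example and treats a blocked request as an HTTP error that the OpenAI SDK surfaces as OpenAI.APIError.

// Hypothetical handling for a blocked request. Assumption: a flagged prompt
// is rejected with an HTTP error, which the OpenAI SDK throws as APIError.
try {
  const reply = await client.chat.completions.create({
    model: 'meta-llama/Meta-Llama-3-70B-Instruct',
    messages: [{ role: 'user', content: userInput }]
  })
  console.log(reply.choices[0].message.content)
} catch (err) {
  if (err instanceof OpenAI.APIError) {
    // status and message come from the OpenAI SDK error type, not LockLLM
    console.warn(`Request blocked or failed (status ${err.status}): ${err.message}`)
  } else {
    throw err // not an API error; rethrow
  }
}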

Features

  • Ray Infrastructure: Compatible with Ray-powered serving
  • Scalable Deployment: Maintains Anyscale's scalability
  • Model Flexibility: Works with all Anyscale-hosted models (see the sketch after this list)
  • Automatic Scanning: Real-time security protection
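
As an illustration of model flexibility, the proxied client from the Quick Start can target any Anyscale-hosted model just by changing the model string. The Mixtral model name below is an assumption for illustration; substitute any model your Anyscale account serves.

// Same proxied client, different Anyscale-hosted model: only the model
// string changes. The model name here is illustrative.
const mixtralResponse = await client.chat.completions.create({
  model: 'mistralai/Mixtral-8x7B-Instruct-v0.1',
  messages: [{ role: 'user', content: userInput }]
})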

Getting Started

Generate your LockLLM API key in the dashboard, then see the documentation for full configuration details.