Hugging Face

Overview

Protect your Hugging Face Inference API integration with LockLLM's transparent proxy layer. Access thousands of open-source models while every prompt is automatically scanned for injection attacks before it reaches the model.

Quick Start

import { HfInference } from '@huggingface/inference'

// Authenticate with your LockLLM API key and route requests through the proxy
const client = new HfInference(process.env.LOCKLLM_API_KEY, {
  baseURL: 'https://api.lockllm.com/v1/proxy/huggingface'
})

const response = await client.textGeneration({
  model: 'meta-llama/Meta-Llama-3-70B-Instruct',
  inputs: userInput
})
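The textGeneration call above resolves with an object whose generated_text field holds the model output. Some text-generation models echo the prompt back at the start of that field; the small helper below (our own sketch, not part of the SDK) strips the echo when present:

```javascript
// Hypothetical helper (not part of @huggingface/inference): removes a leading
// copy of the prompt from the model's generated_text, if the model echoed it.
function stripPromptEcho(prompt, generatedText) {
  return generatedText.startsWith(prompt)
    ? generatedText.slice(prompt.length).trimStart()
    : generatedText
}
```

For example, stripPromptEcho('Hello', 'Hello world') returns 'world', while output that does not repeat the prompt passes through unchanged.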

Features

  • Thousands of Models: Access to the entire Hugging Face model hub
  • Open-Source Focus: Compatible with community models
  • Flexible Deployment: Works with both the Inference API and Inference Endpoints
  • Automatic Scanning: Every request is scanned for prompt injection in real time
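When a prompt is flagged by the scanner, the request is assumed to fail at the proxy and the SDK call to throw; the exact error status and body are not specified in this document, so the wrapper below (the name and shape are ours, purely illustrative) simply catches the error and reports it:

```javascript
// Hypothetical wrapper: returns the generated text, or null if the call threw.
// A thrown error here may mean LockLLM's scanner blocked the prompt, or that
// the request failed for an ordinary network/API reason.
async function safeGenerate(client, model, inputs) {
  try {
    const res = await client.textGeneration({ model, inputs })
    return res.generated_text
  } catch (err) {
    console.error('Generation failed or was blocked:', err.message)
    return null
  }
}
```

Callers can then branch on the null result instead of wrapping every generation call in its own try/catch.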

Getting Started

Generate an API key in the LockLLM dashboard, then see the documentation for full configuration options.