Google Gemini

Overview

Protect your Google Gemini API integration with LockLLM's transparent proxy layer. Route all Gemini requests through LockLLM for automatic prompt injection scanning without modifying your application logic. Compatible with Gemini 3 Pro, Gemini 3 Flash, and all Google AI model variants.

The integration maintains Gemini's complete API interface, including multimodal inputs, streaming responses, function calling, and grounding. Your existing code works exactly as before, with added security protection.

How it Works

Update your Google AI client's base URL to route through the LockLLM proxy. All requests are intercepted, prompts are scanned for security threats in real-time, and safe requests are forwarded to Gemini. Malicious prompts are blocked before reaching the model.

The proxy adds 150-250ms of latency per request and operates transparently. All standard Gemini features, including multimodal inputs, streaming, function calling, and grounding, work without modification.
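Concretely, routing through the proxy only changes the request URL and adds one header. A minimal sketch using Python's standard library (the `models/...:generateContent` path mirrors Gemini's public REST API; confirm the exact proxy path against the LockLLM docs):

```python
import json
import os
import urllib.request

# The proxy base URL replaces Google's endpoint (from this guide).
PROXY_BASE = "https://api.lockllm.com/v1/proxy/gemini"

def build_request(prompt: str) -> urllib.request.Request:
    """Build (but do not send) a proxied generateContent request."""
    body = json.dumps({"contents": [{"parts": [{"text": prompt}]}]}).encode()
    return urllib.request.Request(
        f"{PROXY_BASE}/v1beta/models/gemini-3-pro:generateContent",
        data=body,
        headers={
            "Content-Type": "application/json",
            # Standard Gemini API key header
            "x-goog-api-key": os.environ.get("GOOGLE_API_KEY", ""),
            # LockLLM key rides alongside the normal Gemini credentials
            "X-LockLLM-Key": os.environ.get("LOCKLLM_API_KEY", ""),
        },
        method="POST",
    )

req = build_request("Summarize this document.")
```

Everything else about the request, including the body format, is exactly what Gemini expects; the proxy forwards it unchanged once the prompt passes scanning.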

Supported Models

All Google Gemini models are supported through the proxy:

  • Gemini 3 Pro (highest capability)
  • Gemini 3 Flash (optimized speed)
  • Gemini 2 Pro and Flash
  • All legacy Gemini 1.5 variants
  • Multimodal models with vision and audio
  • Fine-tuned custom models
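Because the proxy forwards the model path untouched, switching models is just a string change. A hypothetical helper illustrating the idea (the model IDs follow the examples in this guide; confirm current IDs against Google's model list):

```python
# Map use cases to Gemini model IDs; the proxy passes any ID through unchanged.
MODELS = {
    "quality": "gemini-3-pro",    # highest capability
    "speed": "gemini-3-flash",    # optimized speed
}

def pick_model(use_case: str) -> str:
    """Return the Gemini model ID for a use case, defaulting to speed."""
    return MODELS.get(use_case, MODELS["speed"])
```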

Quick Start

Update your Google AI client configuration to route through the proxy:

import { GoogleGenerativeAI } from '@google/generative-ai'

const genAI = new GoogleGenerativeAI(process.env.GOOGLE_API_KEY)

// Per-model request options carry the proxy URL and LockLLM key
const model = genAI.getGenerativeModel(
  { model: 'gemini-3-pro' },
  {
    baseUrl: 'https://api.lockllm.com/v1/proxy/gemini',
    customHeaders: {
      'X-LockLLM-Key': process.env.LOCKLLM_API_KEY
    }
  }
)

// All requests are automatically scanned
const response = await model.generateContent(userInput)

Python Example

import os

import google.generativeai as genai

genai.configure(
    api_key=os.environ["GOOGLE_API_KEY"],
    transport="rest",
    client_options={
        "api_endpoint": "https://api.lockllm.com/v1/proxy/gemini"
    },
    default_metadata=[
        ("x-lockllm-key", os.environ["LOCKLLM_API_KEY"])
    ]
)

# Automatic security scanning
model = genai.GenerativeModel('gemini-3-pro')
response = model.generate_content(user_input)

Features

  • Automatic Scanning: All prompts scanned without code changes
  • Full Model Support: Works with all Gemini models and variants
  • Multimodal Support: Compatible with text, image, and audio inputs
  • Streaming Support: Works with streaming responses
  • Function Calling: Preserves function calling capabilities
  • Grounding: Compatible with Google Search grounding
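When the proxy blocks a malicious prompt, your client sees an error response instead of a model reply, so it is worth branching on security blocks separately from ordinary API failures. The payload shape below is a hypothetical illustration (the `blocked` and `reason` fields are assumptions, not a documented LockLLM contract; check the error reference):

```python
def is_blocked(status: int, body: dict) -> bool:
    """Treat a 4xx response flagged by the scanner as a security block.

    The `blocked` field is an assumed payload shape for illustration only.
    """
    return 400 <= status < 500 and bool(body.get("blocked"))

# Usage with a stubbed response
resp = {"blocked": True, "reason": "prompt_injection"}
if is_blocked(403, resp):
    print("Request blocked:", resp["reason"])
```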

Configuration

Configure security behavior using request headers:

  • X-LockLLM-Key: Your LockLLM API key (required)

Getting Started

Generate API keys in the dashboard, update your client configuration with the proxy URL, and start making secure requests. Visit the documentation for complete setup guides, examples, and best practices.