cogvideox-2b-endpoint

CogVideoX-2b with custom handler for Inference Endpoints

Usage

This repository contains a custom inference handler for Hugging Face Inference Endpoints.

Deploy

  1. Go to Hugging Face Inference Endpoints
  2. Click "New Endpoint"
  3. Select this repository: ainativestudio/cogvideox-2b-endpoint
  4. Choose a GPU instance (e.g. NVIDIA L4 or A100)
  5. Deploy
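The steps above can also be done programmatically with the huggingface_hub client. The sketch below is a hedged example, not a tested deployment script: the instance_type, instance_size, vendor, and region values are assumptions that depend on your account and may differ across huggingface_hub versions.

```python
def deploy_endpoint(token: str):
    """Create the Inference Endpoint from code instead of the web UI (sketch)."""
    # Requires `pip install huggingface_hub`; imported lazily so the module
    # loads even where the package is absent.
    from huggingface_hub import create_inference_endpoint

    # All hardware/region values below are illustrative assumptions.
    return create_inference_endpoint(
        "cogvideox-2b-endpoint",
        repository="ainativestudio/cogvideox-2b-endpoint",
        framework="pytorch",
        accelerator="gpu",
        vendor="aws",
        region="us-east-1",
        type="protected",
        instance_size="x1",
        instance_type="nvidia-l4",
        token=token,
    )
```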

API Usage

import requests
import base64

# Replace YOUR_ENDPOINT_URL and YOUR_TOKEN with your endpoint URL and
# Hugging Face access token.
response = requests.post(
    "YOUR_ENDPOINT_URL",
    headers={"Authorization": "Bearer YOUR_TOKEN"},
    json={
        "inputs": "A beautiful sunset over the ocean",
        "num_inference_steps": 50,
        "guidance_scale": 6.0
    },
    timeout=600,  # video generation can take several minutes
)
response.raise_for_status()

# The handler returns the generated video as a base64-encoded MP4.
video_base64 = response.json()["video"]
video_bytes = base64.b64decode(video_base64)

with open("output.mp4", "wb") as f:
    f.write(video_bytes)
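The request-and-decode steps above are easy to wrap in small helpers. Note that generate_video and save_video are hypothetical convenience functions introduced here for illustration, not part of this repository; the decode step can be exercised without a live endpoint.

```python
import base64


def generate_video(endpoint_url, token, prompt, out_path="output.mp4", **params):
    """POST a prompt to the endpoint and write the decoded MP4 to disk."""
    # Imported lazily so save_video works even without requests installed.
    import requests

    response = requests.post(
        endpoint_url,
        headers={"Authorization": f"Bearer {token}"},
        json={"inputs": prompt, **params},
        timeout=600,
    )
    response.raise_for_status()
    save_video(response.json(), out_path)


def save_video(payload, out_path):
    """Decode the base64 'video' field and write it as a binary MP4 file."""
    video_bytes = base64.b64decode(payload["video"])
    with open(out_path, "wb") as f:
        f.write(video_bytes)
```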

Custom Handler

The handler.py file implements the EndpointHandler class required for custom inference on Inference Endpoints. See the Hugging Face documentation for more details.
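For orientation, a custom handler for this model generally follows the shape below. This is a hedged sketch of the EndpointHandler interface, not the repository's actual handler.py; the pipeline choice, dtype, fps, and parameter defaults are assumptions.

```python
import base64
import tempfile


class EndpointHandler:
    """Skeleton of the interface Inference Endpoints expects from handler.py."""

    def __init__(self, path: str = ""):
        # The model loads once at endpoint startup. Using diffusers'
        # CogVideoXPipeline here is an assumption about the real handler.
        import torch
        from diffusers import CogVideoXPipeline  # deferred heavy import
        from diffusers.utils import export_to_video

        self.export_to_video = export_to_video
        self.pipe = CogVideoXPipeline.from_pretrained(
            path or "THUDM/CogVideoX-2b", torch_dtype=torch.float16
        ).to("cuda")

    def __call__(self, data: dict) -> dict:
        # Inference Endpoints passes the request JSON body as `data`.
        prompt = data["inputs"]
        frames = self.pipe(
            prompt,
            num_inference_steps=data.get("num_inference_steps", 50),
            guidance_scale=data.get("guidance_scale", 6.0),
        ).frames[0]
        # Encode the rendered frames as an MP4 and return it base64-encoded,
        # matching the "video" field read by the client example above.
        with tempfile.NamedTemporaryFile(suffix=".mp4") as tmp:
            self.export_to_video(frames, tmp.name, fps=8)
            tmp.seek(0)
            video_b64 = base64.b64encode(tmp.read()).decode("utf-8")
        return {"video": video_b64}
```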
