# Quick Start Guide

Get your Hugging Face model service running in 5 minutes.
## 🚀 Quick Setup
### Step 1: Add Dependencies

Add your service dependencies to `scripts/install/core-service-install.sh`:

```bash
# Example: Gradio chat interface
pip install --no-cache-dir torch transformers gradio
```
### Step 2: Implement Your Service

Edit `scripts/start/core-service-start.sh`:

```bash
#!/bin/bash
set -e
source /opt/venv/bin/activate

# Simple Gradio example
python -c "
import gradio as gr
from transformers import pipeline, Conversation

# Load model
chatbot = pipeline('conversational', model='microsoft/DialoGPT-medium')

def chat(message, history):
    # The conversational pipeline takes and returns Conversation objects
    conversation = chatbot(Conversation(message))
    return conversation.generated_responses[-1]

# Launch interface
gr.ChatInterface(chat).launch(server_name='0.0.0.0', server_port=7860)
"
```
### Step 3: Build and Run

```bash
# Build
docker build -t my-hf-service .

# Run
docker run -p 7860:7860 my-hf-service
```
### Step 4: Access Your Service

Open http://localhost:7860 in your browser.
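If you would rather verify from a script than a browser, a small standard-library check like the sketch below confirms the service is answering (the URL assumes the default port mapping from Step 3):

```python
import urllib.request


def service_up(url: str, timeout: float = 3.0) -> bool:
    """Return True if an HTTP request to `url` succeeds with status 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        # Connection refused, timeout, or HTTP error: service not reachable
        return False


if __name__ == "__main__":
    # Assumes the container from Step 3 is running on the default port
    print(service_up("http://localhost:7860"))
```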
## 🎯 Common Frameworks
### Gradio (Recommended for beginners)

```bash
# Dependencies
pip install gradio transformers
```

```python
# Simple implementation
import gradio as gr

def predict(text):
    return f"Echo: {text}"

gr.Interface(predict, "text", "text").launch(server_name="0.0.0.0", server_port=7860)
```
### FastAPI (For REST APIs)

```bash
# Dependencies
pip install fastapi uvicorn
```

```python
# Simple implementation
from fastapi import FastAPI

app = FastAPI()

@app.get("/predict")
def predict(text: str):
    return {"result": f"Processed: {text}"}

# Run with: uvicorn app:app --host 0.0.0.0 --port 7860
```
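To call the `/predict` endpoint above from another process, a minimal standard-library client might look like this (a sketch; the base URL assumes the uvicorn command shown above, and the helper names are illustrative):

```python
import json
import urllib.parse
import urllib.request


def predict_url(base_url: str, text: str) -> str:
    """Build the GET /predict query URL, escaping the text parameter."""
    return f"{base_url}/predict?" + urllib.parse.urlencode({"text": text})


def call_predict(base_url: str, text: str) -> dict:
    """Call the running service and decode its JSON response."""
    with urllib.request.urlopen(predict_url(base_url, text)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    print(call_predict("http://localhost:7860", "hello"))
```

Using `urlencode` matters here: FastAPI reads `text` from the query string, so spaces and special characters must be escaped.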
### Streamlit (For dashboards)

```bash
# Dependencies
pip install streamlit
```

```python
# Simple implementation
import streamlit as st

st.title("My Dashboard")
text = st.text_input("Enter text")
if text:
    st.write(f"Result: {text}")

# Run with: streamlit run app.py --server.port 7860 --server.address 0.0.0.0
```
## 🔧 Configuration
### Environment Variables

```bash
# Core settings
SERVICE_PORT=7860                      # Port (default: 7860)
MODEL_NAME=microsoft/DialoGPT-medium   # Hugging Face model
SERVICE_TITLE="My AI Service"          # Interface title
HF_TOKEN=your_token                    # For private models

# Optional services
FILEBROWSER_ENABLED=true               # File management
PERSISTENCE_ENABLED=true               # Data backup
DATASET_ID=username/dataset            # Backup dataset
```
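Inside your own Python service code, these variables can be read with fallbacks; a minimal sketch (the variable names match the list above, the helper name and defaults are assumptions):

```python
import os


def load_settings(env=os.environ) -> dict:
    """Collect core service settings, falling back to the documented defaults."""
    return {
        "port": int(env.get("SERVICE_PORT", "7860")),
        "model": env.get("MODEL_NAME", "microsoft/DialoGPT-medium"),
        "title": env.get("SERVICE_TITLE", "My AI Service"),
        "hf_token": env.get("HF_TOKEN"),  # None unless a private model needs it
    }


if __name__ == "__main__":
    print(load_settings())
```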
### Popular Models

```bash
# Chat models
MODEL_NAME=microsoft/DialoGPT-medium
MODEL_NAME=facebook/blenderbot-400M-distill

# Text analysis
MODEL_NAME=cardiffnlp/twitter-roberta-base-sentiment-latest
MODEL_NAME=sentence-transformers/all-MiniLM-L6-v2
```
## 🚀 Deploy to Hugging Face Spaces

- **Create Space**: Go to Hugging Face Spaces → "Create new Space" → Choose "Docker"
- **Upload Code**: Push your repository to the Space
- **Set Variables**: Add environment variables in Space settings: `MODEL_NAME=your-model`, `SERVICE_TITLE="Your Service"`
- **Update README**: Add this header to your Space's README.md:

  ```yaml
  title: Your Service Name
  emoji: 🤖
  sdk: docker
  app_port: 7860
  ```
## 🛠️ Optional Services

| Service | Enable | Purpose |
|---|---|---|
| File Browser | `FILEBROWSER_ENABLED=true` | Web file management (admin/admin) |
| Data Persistence | `PERSISTENCE_ENABLED=true` | Auto backup to HF Datasets |
| Cloudflare Tunnel | `CLOUDFLARED_ENABLED=true` | Secure external access |

Required for persistence:

- `HF_TOKEN=your_token`
- `DATASET_ID=username/dataset`
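A startup script can fail fast when persistence is enabled but misconfigured. One way to express the rule above as a check (a sketch; the function name is hypothetical, the variable names are from the table):

```python
import os


def persistence_ready(env=os.environ) -> bool:
    """Return True if persistence is either disabled or fully configured.

    Persistence requires both HF_TOKEN and DATASET_ID to be set.
    """
    if env.get("PERSISTENCE_ENABLED", "false").lower() != "true":
        return True  # persistence disabled, nothing else required
    return bool(env.get("HF_TOKEN")) and bool(env.get("DATASET_ID"))
```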
## 🔍 Troubleshooting

| Problem | Solution |
|---|---|
| Service won't start | Check logs: `docker logs container_name` |
| Port already in use | Use a different port: `-p 7861:7860` |
| Model not found | Verify the model name on Hugging Face |
| Out of memory | Use a smaller model or increase Docker memory |
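For the "port already in use" case, you can check a port before starting the container; a small sketch using only the standard library:

```python
import socket


def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        # connect_ex returns 0 when the connection succeeds
        return s.connect_ex((host, port)) == 0


if __name__ == "__main__":
    # If 7860 is taken, run the container with `-p 7861:7860` instead
    print("7860 in use:", port_in_use(7860))
```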
## 📚 Next Steps

- **Read Documentation**: Development Guide • Usage Guide
- **Customize**: Edit `scripts/start/core-service-start.sh` for your logic
- **Deploy**: Push to Hugging Face Spaces for public access
Happy building! 🚀