
Quick Start Guide

Get your Hugging Face model service running in 5 minutes.

πŸš€ Quick Setup

Step 1: Add Dependencies

Add your service dependencies to scripts/install/core-service-install.sh:

# Example: Gradio chat interface
pip install --no-cache-dir torch transformers gradio

Step 2: Implement Your Service

Edit scripts/start/core-service-start.sh:

#!/bin/bash
set -e
source /opt/venv/bin/activate

# Simple Gradio example
python -c "
import gradio as gr
from transformers import pipeline

# Load model (DialoGPT works with the text-generation pipeline;
# the old 'conversational' pipeline is deprecated in recent transformers releases)
chatbot = pipeline('text-generation', model='microsoft/DialoGPT-medium')

def chat(message, history):
    response = chatbot(message, max_new_tokens=50)
    return response[0]['generated_text']

# Launch interface
gr.ChatInterface(chat).launch(server_name='0.0.0.0', server_port=7860)
"

Step 3: Build and Run

# Build
docker build -t my-hf-service .

# Run
docker run -p 7860:7860 my-hf-service

Step 4: Access Your Service

Open http://localhost:7860 in your browser.
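If the page doesn't load right away, the container may still be downloading the model. A small stdlib helper (illustrative, not part of the template) can poll until the port accepts connections:

```python
import socket
import time

def wait_for_service(port: int, host: str = "127.0.0.1", timeout: float = 60.0) -> bool:
    """Poll until something accepts TCP connections on host:port, or give up."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(1.0)
            if s.connect_ex((host, port)) == 0:
                return True
        time.sleep(0.5)
    return False
```

For example, wait_for_service(7860) returns True once Gradio starts listening.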

🎯 Common Frameworks

Gradio (Recommended for beginners)

# Dependencies
pip install gradio transformers

# Simple implementation
import gradio as gr
def predict(text): return f"Echo: {text}"
gr.Interface(predict, "text", "text").launch(server_name="0.0.0.0", server_port=7860)

FastAPI (For REST APIs)

# Dependencies
pip install fastapi uvicorn

# Simple implementation
from fastapi import FastAPI
app = FastAPI()
@app.get("/predict")
def predict(text: str): return {"result": f"Processed: {text}"}
# Run with: uvicorn app:app --host 0.0.0.0 --port 7860

Streamlit (For dashboards)

# Dependencies
pip install streamlit

# Simple implementation
import streamlit as st
st.title("My Dashboard")
text = st.text_input("Enter text")
if text: st.write(f"Result: {text}")
# Run with: streamlit run app.py --server.port 7860 --server.address 0.0.0.0

πŸ”§ Configuration

Environment Variables

# Core settings
SERVICE_PORT=7860                    # Port (default: 7860)
MODEL_NAME=microsoft/DialoGPT-medium # HuggingFace model
SERVICE_TITLE="My AI Service"        # Interface title
HF_TOKEN=your_token                  # For private models

# Optional services
FILEBROWSER_ENABLED=true             # File management
PERSISTENCE_ENABLED=true             # Data backup
DATASET_ID=username/dataset          # Backup dataset
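Inside scripts/start/core-service-start.sh (or the Python it launches), these variables can be read with os.environ. A minimal sketch, assuming the defaults listed above:

```python
import os

# Read core settings, falling back to the documented defaults
port = int(os.environ.get("SERVICE_PORT", "7860"))
model_name = os.environ.get("MODEL_NAME", "microsoft/DialoGPT-medium")
title = os.environ.get("SERVICE_TITLE", "My AI Service")
hf_token = os.environ.get("HF_TOKEN")  # None when unset; only needed for private models

print(port, model_name, title)
```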

Popular Models

# Chat models
MODEL_NAME=microsoft/DialoGPT-medium
MODEL_NAME=facebook/blenderbot-400M-distill

# Text analysis
MODEL_NAME=cardiffnlp/twitter-roberta-base-sentiment-latest
MODEL_NAME=sentence-transformers/all-MiniLM-L6-v2
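Different model families need different pipeline tasks. One way to handle that in your start script is to key the task off MODEL_NAME; the mapping below is illustrative (an assumption, not part of the template):

```python
# Hypothetical mapping from the models listed above to their pipeline tasks
TASKS = {
    "microsoft/DialoGPT-medium": "text-generation",
    "facebook/blenderbot-400M-distill": "text2text-generation",
    "cardiffnlp/twitter-roberta-base-sentiment-latest": "sentiment-analysis",
    "sentence-transformers/all-MiniLM-L6-v2": "feature-extraction",
}

def task_for(model_name: str) -> str:
    """Fall back to text-generation for models not in the map."""
    return TASKS.get(model_name, "text-generation")
```

For example, task_for("facebook/blenderbot-400M-distill") returns "text2text-generation".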

🌐 Deploy to Hugging Face Spaces

  1. Create Space: Go to HuggingFace Spaces β†’ "Create new Space" β†’ Choose "Docker"
  2. Upload Code: Push your repository to the Space
  3. Set Variables: Add environment variables in Space settings:
    MODEL_NAME=your-model
    SERVICE_TITLE=Your Service
    
  4. Update README: Add this YAML header to your Space's README.md:

    ---
    title: Your Service Name
    emoji: πŸ€–
    sdk: docker
    app_port: 7860
    ---

πŸ› οΈ Optional Services

Service | Enable | Purpose
File Browser | FILEBROWSER_ENABLED=true | Web file management (admin/admin)
Data Persistence | PERSISTENCE_ENABLED=true | Auto backup to HF Datasets
Cloudflare Tunnel | CLOUDFLARED_ENABLED=true | Secure external access

Required for persistence:

HF_TOKEN=your_token
DATASET_ID=username/dataset

πŸ› Troubleshooting

Problem | Solution
Service won't start | Check the logs: docker logs container_name
Port already in use | Map a different host port: -p 7861:7860
Model not found | Verify the model name on HuggingFace
Out of memory | Use a smaller model or increase Docker's memory limit

πŸ“š Next Steps

  • Read Documentation: Development Guide β€’ Usage Guide
  • Customize: Edit scripts/start/core-service-start.sh for your logic
  • Deploy: Push to HuggingFace Spaces for public access

Happy building! πŸš€