Development Guide
Project Structure
```
/
├── scripts/
│   ├── install/                      # Installation scripts (*-install.sh)
│   │   ├── core-service-install.sh   # Your service dependencies
│   │   ├── filebrowser-install.sh    # File browser
│   │   └── persistence-install.sh    # Data persistence
│   └── start/                        # Service startup scripts (*-start.sh)
│       ├── core-service-start.sh     # Your main service (IMPLEMENT THIS)
│       ├── filebrowser-start.sh      # File browser startup
│       └── persistence-start.sh      # Persistence service
├── configs/                          # Configuration template files
│   └── templates/                    # Service configuration templates
│       ├── aria2.conf.template       # aria2 configuration template
│       ├── restic.conf.template      # restic configuration template
│       └── webdav.toml.template      # webdav configuration template
├── Dockerfile                        # Container build configuration
└── docker-entrypoint.sh              # Main container entry point
```
Core Service Implementation
The core service is implemented in scripts/start/core-service-start.sh and contains your main application logic.
Basic Implementation
```bash
#!/bin/bash
set -e

# Log function
log() {
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] [CORE-SERVICE] $*"
}

# Activate Python virtual environment
source /opt/venv/bin/activate

# Configuration from environment variables
SERVICE_PORT="${SERVICE_PORT:-7860}"
MODEL_NAME="${MODEL_NAME:-your-default-model}"

# Set up directories and cache
mkdir -p /home/user/models /home/user/cache
export HF_HOME=/home/user/cache
export TRANSFORMERS_CACHE=/home/user/cache

# Start your service
log "Starting service on port $SERVICE_PORT..."
exec python /home/user/your_app.py
```
Common Frameworks
Gradio Interface:
```bash
# In core-service-install.sh
pip install gradio

# In core-service-start.sh
exec python -c "
import gradio as gr

def predict(text):
    return f'Echo: {text}'

gr.Interface(predict, 'text', 'text').launch(server_name='0.0.0.0', server_port=7860)
"
```
FastAPI Service:
```bash
# In core-service-install.sh
pip install fastapi uvicorn

# In core-service-start.sh
exec uvicorn your_app:app --host 0.0.0.0 --port 7860
```
Streamlit Dashboard:
```bash
# In core-service-install.sh
pip install streamlit

# In core-service-start.sh
exec streamlit run your_app.py --server.port 7860 --server.address 0.0.0.0
```
Dependencies Installation
Add your specific dependencies to scripts/install/core-service-install.sh:
```bash
# =============================================================================
# CUSTOM DEPENDENCIES SECTION
# =============================================================================
log "Installing service dependencies..."
source /opt/venv/bin/activate

# Example: AI/ML service
pip install --no-cache-dir \
    torch \
    transformers \
    gradio \
    numpy

# Example: System libraries
apt-get install -y --no-install-recommends \
    libssl-dev \
    ffmpeg
```
Common dependency patterns:
- AI/ML: `torch transformers huggingface-hub`
- Web APIs: `fastapi uvicorn`
- Dashboards: `streamlit plotly`
- Computer vision: `opencv-python pillow`
Environment Variables
```bash
# Core configuration
SERVICE_PORT="${SERVICE_PORT:-7860}"
MODEL_NAME="${MODEL_NAME:-microsoft/DialoGPT-medium}"
HF_TOKEN="${HF_TOKEN:-}"
DEBUG_MODE="${DEBUG_MODE:-false}"

# Custom variables
YOUR_API_KEY="${YOUR_API_KEY:-}"
YOUR_CONFIG_OPTION="${YOUR_CONFIG_OPTION:-default}"
```
Optional Services
Enable optional services via environment variables:
| Service | Enable | Purpose |
|---|---|---|
| File Browser | `FILEBROWSER_ENABLED=true` | Web-based file management |
| Persistence | `PERSISTENCE_ENABLED=true` | Automatic backup/restore with HF Datasets |
| Cloudflare Tunnel | `CLOUDFLARED_ENABLED=true` | Secure external access |
| FRP Client | `FRPC_ENABLED=true` | Reverse proxy for complex networking |
Required variables:
- Persistence: `HF_TOKEN`, `DATASET_ID`
- Cloudflare: `CLOUDFLARED_TUNNEL_TOKEN`
- FRP: `FRPC_SERVER_ADDR`, `FRPC_AUTH_TOKEN`
Development Workflow
Local Development
```bash
# Build and test
docker build -t my-hf-service .
docker run -p 7860:7860 \
    -e SERVICE_PORT=7860 \
    -e MODEL_NAME=your-model \
    -e DEBUG_MODE=true \
    my-hf-service
```
Testing
```bash
# Test core service
docker run -p 7860:7860 my-hf-service

# Test with optional services
docker run -p 7860:7860 \
    -e FILEBROWSER_ENABLED=true \
    -e PERSISTENCE_ENABLED=true \
    -e HF_TOKEN=your_token \
    -e DATASET_ID=username/dataset \
    my-hf-service
```
Deployment to HF Spaces
1. Push your changes to the repository
2. Create a new Docker Space on Hugging Face
3. Set the required environment variables in the Space settings
4. HF Spaces builds and deploys automatically
Adding Custom Services
1. Create Installation Script
Create scripts/install/your-service-install.sh:
```bash
#!/bin/bash
set -e

log() {
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] [YOUR-SERVICE] $*"
}

log "Installing your service..."
# Installation steps here
log "Installation completed"
```
2. Create Startup Script
Create scripts/start/your-service-start.sh:
```bash
#!/bin/bash
set -e

log() {
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] [YOUR-SERVICE] $*"
}

if [[ "${YOUR_SERVICE_ENABLED:-false}" == "true" ]]; then
    log "Starting your service..."
    exec your-service-binary
else
    log "Service disabled"
    exit 0
fi
```
The system automatically discovers and starts services that follow the `*_ENABLED` environment variable pattern.
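The discovery step above can be sketched as follows. This is a hypothetical illustration of the `*_ENABLED` pattern, not the literal contents of docker-entrypoint.sh; `start_enabled_services` is an invented helper name.

```shell
#!/bin/bash
# Sketch: derive each service's enable flag from its start-script name and
# launch only the services whose flag is set to "true".
start_enabled_services() {
    local dir="$1" script name var
    for script in "$dir"/*-start.sh; do
        [ -e "$script" ] || continue
        # "filebrowser-start.sh" -> "filebrowser" -> "FILEBROWSER_ENABLED"
        name=$(basename "$script" -start.sh)
        var=$(echo "$name" | tr 'a-z-' 'A-Z_')_ENABLED
        if [ "${!var:-false}" = "true" ]; then
            bash "$script" &
        fi
    done
    wait
}
```

The convention keeps the entrypoint generic: adding a new service only requires a new `*-start.sh` script and its matching `*_ENABLED` variable, with no entrypoint changes.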
Troubleshooting
Common Issues
| Problem | Solution |
|---|---|
| Service not starting | Check the logs and verify the required environment variables are set |
| Port conflicts | Ensure `SERVICE_PORT` matches the exposed port (7860) |
| Model loading failures | Verify the model name and `HF_TOKEN` permissions |
| Permission denied | Check file ownership: `chown user:user /home/user/scripts/start/*` |
Debugging Commands
```bash
# Check running processes
ps aux | grep -E "(python|your-service)"

# Check environment variables
env | grep -E "(SERVICE_|MODEL_|HF_)"

# Check logs
tail -f /tmp/*.log

# Test connectivity
curl -I http://localhost:7860
```
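When scripting such checks (for example in a smoke test), it helps to poll until the service actually answers on its port. Below is a sketch using bash's `/dev/tcp` so it needs no external tools; `wait_for_port` is a hypothetical helper, and you could substitute the `curl` probe above if you prefer.

```shell
#!/bin/bash
# Sketch: poll until something is listening on a local TCP port, or time out.
wait_for_port() {
    local port="$1" tries="${2:-30}"
    local i
    for ((i = 0; i < tries; i++)); do
        # The connection attempt runs in a subshell, so the probe fd is
        # closed automatically when the subshell exits.
        if (exec 3<>"/dev/tcp/127.0.0.1/$port") 2>/dev/null; then
            return 0
        fi
        sleep 1
    done
    return 1
}

# Usage: wait_for_port 7860 60 || { echo "service never came up" >&2; exit 1; }
```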
Best Practices
Code Organization
- Keep application code in `/home/user/`
- Use `/home/user/models/` for model files
- Use `/home/user/cache/` for temporary files
- Use `/home/user/data/` for persistent data
Development
- Use environment variables for configuration
- Provide sensible defaults
- Implement proper error handling
- Use HF Spaces secrets for sensitive data
- Test locally before deploying
- Document custom environment variables
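As an example of the error-handling advice above, a start script can fail fast with a clear message when a required secret is missing. This is a sketch; `require_var` is an invented helper, not part of this template.

```shell
#!/bin/bash
# Sketch: abort early if a required environment variable is unset or empty,
# instead of failing later with an obscure error from the service itself.
require_var() {
    local name="$1"
    if [ -z "${!name:-}" ]; then
        echo "ERROR: required environment variable $name is not set" >&2
        return 1
    fi
}

# Usage in a start script:
#   require_var HF_TOKEN || exit 1
#   require_var DATASET_ID || exit 1
```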
Configuration Template System
The project uses a templated configuration system to simplify service configuration management. All service configuration templates live in the configs/templates/ directory; startup scripts generate the final configuration by substituting environment variables into them.
Adding a New Configuration Template
- Create a template file for your service at `configs/templates/your-service.conf.template`
- Use the environment-variable form `$VARIABLE_NAME` in the template for each configurable parameter
- Load the template in your service startup script with the following code:
```bash
# Determine template path
TEMPLATE_PATH="/etc/hf_docker_template/configs/templates/your-service.conf.template"
if [ ! -f "$TEMPLATE_PATH" ]; then
    TEMPLATE_PATH="$(dirname "$(dirname "$(dirname "$0")")")/configs/templates/your-service.conf.template"
fi

# Generate configuration from template
if [ -f "$TEMPLATE_PATH" ]; then
    mkdir -p /home/user/config
    envsubst < "$TEMPLATE_PATH" > "/home/user/config/your-service.conf"
    log "Using template from: $TEMPLATE_PATH"
else
    log "ERROR: Template file not found"
    exit 1
fi
```
Template Best Practices
- Use explicit variable names, and set defaults in the startup script before substitution (e.g. `YOUR_VARIABLE="${YOUR_VARIABLE:-default_value}"`); note that plain `envsubst` does not expand the `${VAR:-default}` syntax inside templates
- Comment each parameter in the template to explain its purpose
- Organize templates into logical sections (base configuration, network, security, etc.)
- Follow the configuration file format and syntax of the target service