# Development Guide
## Project Structure

```
/
├── scripts/
│   ├── install/                      # Installation scripts (*-install.sh)
│   │   ├── core-service-install.sh   # Your service dependencies
│   │   ├── filebrowser-install.sh    # File browser
│   │   └── persistence-install.sh    # Data persistence
│   └── start/                        # Service startup scripts (*-start.sh)
│       ├── core-service-start.sh     # Your main service (IMPLEMENT THIS)
│       ├── filebrowser-start.sh      # File browser startup
│       └── persistence-start.sh      # Persistence service
├── configs/                          # Configuration template files (copied to /home/user/config/ at runtime)
├── Dockerfile                        # Container build configuration
└── docker-entrypoint.sh              # Main container entry point
```
## Core Service Implementation

The core service is implemented in `scripts/start/core-service-start.sh` and contains your main application logic.
### Basic Implementation

```bash
#!/bin/bash
set -e

# Log function
log() {
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] [CORE-SERVICE] $*"
}

# Activate the Python virtual environment
source /opt/venv/bin/activate

# Configuration from environment variables
SERVICE_PORT="${SERVICE_PORT:-7860}"
MODEL_NAME="${MODEL_NAME:-your-default-model}"

# Set up directories and cache
mkdir -p /home/user/models /home/user/cache
export HF_HOME=/home/user/cache
export TRANSFORMERS_CACHE=/home/user/cache

# Start your service
log "Starting service on port $SERVICE_PORT..."
exec python /home/user/your_app.py
```
## Common Frameworks

**Gradio Interface:**

```bash
# In core-service-install.sh
pip install gradio
```

```bash
# In core-service-start.sh
exec python -c "
import gradio as gr
def predict(text): return f'Echo: {text}'
gr.Interface(predict, 'text', 'text').launch(server_name='0.0.0.0', server_port=7860)
"
```

**FastAPI Service:**

```bash
# In core-service-install.sh
pip install fastapi uvicorn
```

```bash
# In core-service-start.sh
exec uvicorn your_app:app --host 0.0.0.0 --port 7860
```

**Streamlit Dashboard:**

```bash
# In core-service-install.sh
pip install streamlit
```

```bash
# In core-service-start.sh
exec streamlit run your_app.py --server.port 7860 --server.address 0.0.0.0
```
## Dependencies Installation

Add your specific dependencies to `scripts/install/core-service-install.sh`:

```bash
# =============================================================================
# CUSTOM DEPENDENCIES SECTION
# =============================================================================
log "Installing service dependencies..."
source /opt/venv/bin/activate

# Example: AI/ML service
pip install --no-cache-dir \
    torch \
    transformers \
    gradio \
    numpy

# Example: system libraries (refresh the package index first)
apt-get update
apt-get install -y --no-install-recommends \
    libssl-dev \
    ffmpeg
```
Common dependency patterns:

- AI/ML: `torch transformers huggingface-hub`
- Web APIs: `fastapi uvicorn`
- Dashboards: `streamlit plotly`
- Computer vision: `opencv-python pillow`
## Environment Variables

```bash
# Core configuration
SERVICE_PORT="${SERVICE_PORT:-7860}"
MODEL_NAME="${MODEL_NAME:-microsoft/DialoGPT-medium}"
HF_TOKEN="${HF_TOKEN:-}"
DEBUG_MODE="${DEBUG_MODE:-false}"

# Custom variables
YOUR_API_KEY="${YOUR_API_KEY:-}"
YOUR_CONFIG_OPTION="${YOUR_CONFIG_OPTION:-default}"
```
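The `${VAR:-default}` expansion used above is what supplies the fallback values; a quick illustration of its behavior (variable names here are just for the demo):

```bash
# Illustrative only: how the ${VAR:-default} fallback behaves.
unset SERVICE_PORT
port="${SERVICE_PORT:-7860}"
echo "$port"    # 7860 (variable unset, so the default is used)

SERVICE_PORT=8080
port="${SERVICE_PORT:-7860}"
echo "$port"    # 8080 (variable set, so the default is ignored)
```

Note that `${VAR:-default}` also substitutes the default when the variable is set but empty, which is usually what you want for configuration.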
## Optional Services

Enable optional services via environment variables:

| Service | Enable | Purpose |
|---|---|---|
| File Browser | `FILEBROWSER_ENABLED=true` | Web-based file management |
| Persistence | `PERSISTENCE_ENABLED=true` | Automatic backup/restore with HF Datasets |
| Cloudflare Tunnel | `CLOUDFLARED_ENABLED=true` | Secure external access |
| FRP Client | `FRPC_ENABLED=true` | Reverse proxy for complex networking |
Required variables:

- Persistence: `HF_TOKEN`, `DATASET_ID`
- Cloudflare: `CLOUDFLARED_TUNNEL_TOKEN`
- FRP: `FRPC_SERVER_ADDR`, `FRPC_AUTH_TOKEN`
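It can help to fail fast when an enabled service is missing one of its required variables. A hedged sketch of such a check (the variable names come from the list above; the function itself is illustrative, not part of the template):

```bash
# Illustrative pre-flight check: verify required variables per enabled service.
check_required_vars() {
    local missing=()
    if [[ "${PERSISTENCE_ENABLED:-false}" == "true" ]]; then
        [[ -n "${HF_TOKEN:-}" ]]   || missing+=("HF_TOKEN")
        [[ -n "${DATASET_ID:-}" ]] || missing+=("DATASET_ID")
    fi
    if [[ "${CLOUDFLARED_ENABLED:-false}" == "true" ]]; then
        [[ -n "${CLOUDFLARED_TUNNEL_TOKEN:-}" ]] || missing+=("CLOUDFLARED_TUNNEL_TOKEN")
    fi
    if [[ "${FRPC_ENABLED:-false}" == "true" ]]; then
        [[ -n "${FRPC_SERVER_ADDR:-}" ]] || missing+=("FRPC_SERVER_ADDR")
        [[ -n "${FRPC_AUTH_TOKEN:-}" ]]  || missing+=("FRPC_AUTH_TOKEN")
    fi
    if (( ${#missing[@]} > 0 )); then
        echo "Missing required variables: ${missing[*]}" >&2
        return 1
    fi
}
```

Calling this at the top of a startup script turns a confusing runtime failure into an explicit error message.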
## Development Workflow

### Local Development

```bash
# Build and test
docker build -t my-hf-service .
docker run -p 7860:7860 \
    -e SERVICE_PORT=7860 \
    -e MODEL_NAME=your-model \
    -e DEBUG_MODE=true \
    my-hf-service
```
### Testing

```bash
# Test the core service
docker run -p 7860:7860 my-hf-service

# Test with optional services
docker run -p 7860:7860 \
    -e FILEBROWSER_ENABLED=true \
    -e PERSISTENCE_ENABLED=true \
    -e HF_TOKEN=your_token \
    -e DATASET_ID=username/dataset \
    my-hf-service
```
### Deployment to HF Spaces

1. Push your changes to the repository
2. Create a new Docker Space on Hugging Face
3. Set the environment variables in the Space settings
4. HF Spaces builds and deploys automatically
## Adding Custom Services

### 1. Create Installation Script

Create `scripts/install/your-service-install.sh`:

```bash
#!/bin/bash
set -e

log() {
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] [YOUR-SERVICE] $*"
}

log "Installing your service..."
# Installation steps here
log "Installation completed"
```
### 2. Create Startup Script

Create `scripts/start/your-service-start.sh`:

```bash
#!/bin/bash
set -e

log() {
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] [YOUR-SERVICE] $*"
}

if [[ "${YOUR_SERVICE_ENABLED:-false}" == "true" ]]; then
    log "Starting your service..."
    exec your-service-binary
else
    log "Service disabled"
    exit 0
fi
```
The system automatically discovers and starts services based on the `*_ENABLED` environment variable pattern.
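The actual discovery logic lives in `docker-entrypoint.sh`; a simplified sketch of the pattern it describes (the directory is parameterised here for clarity, and the real script may differ in detail):

```bash
# Simplified sketch of *_ENABLED service discovery. For each start script,
# derive the flag name from the filename (e.g. "filebrowser-start.sh" ->
# "FILEBROWSER_ENABLED") and launch the script only when the flag is "true".
start_enabled_services() {
    local dir="${1:-/home/user/scripts/start}"
    local script name flag
    for script in "$dir"/*-start.sh; do
        [[ -e "$script" ]] || continue
        name="$(basename "$script" -start.sh)"   # e.g. "filebrowser"
        flag="${name^^}"                         # uppercase
        flag="${flag//-/_}_ENABLED"              # "FILEBROWSER_ENABLED"
        if [[ "${!flag:-false}" == "true" ]]; then
            echo "Starting $name..."
            bash "$script" &
        fi
    done
    wait
}
```

The `${!flag}` indirection looks up the variable whose name is stored in `flag`, which is what makes the naming convention work without hard-coding each service.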
## Troubleshooting

### Common Issues

| Problem | Solution |
|---|---|
| Service not starting | Check the logs and verify environment variables are set |
| Port conflicts | Ensure `SERVICE_PORT` matches the exposed port (7860) |
| Model loading failures | Verify the model name and `HF_TOKEN` permissions |
| Permission denied | Check file ownership: `chown user:user /home/user/scripts/start/*` |
### Debugging Commands

```bash
# Check running processes
ps aux | grep -E "(python|your-service)"

# Check environment variables
env | grep -E "(SERVICE_|MODEL_|HF_)"

# Check logs
tail -f /tmp/*.log

# Test connectivity
curl -I http://localhost:7860
```
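When scripting connectivity checks, a retry loop is more reliable than a single `curl` or a fixed `sleep`. A minimal sketch (the function name and defaults are illustrative, not part of the template):

```bash
# Illustrative readiness check: poll the service until it responds,
# up to a bounded number of attempts.
wait_for_service() {
    local url="${1:-http://localhost:7860}"
    local retries="${2:-30}"
    local i
    for ((i = 1; i <= retries; i++)); do
        if curl -fsS -o /dev/null "$url"; then
            echo "Service is up after $i attempt(s)"
            return 0
        fi
        sleep 1
    done
    echo "Service did not respond after $retries attempts" >&2
    return 1
}
```

This is handy in smoke tests: `wait_for_service http://localhost:7860 60 || exit 1`.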
## Best Practices

### Code Organization

- Keep application code in `/home/user/`
- Use `/home/user/models/` for model files
- Use `/home/user/cache/` for temporary files
- Use `/home/user/data/` for persistent data
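This layout can be created idempotently at startup. A minimal sketch (`setup_dirs` is an illustrative helper, not part of the template; the base directory is parameterised so the snippet is testable):

```bash
# Illustrative helper: create the recommended directory layout.
# mkdir -p is idempotent, so this is safe to run on every container start.
setup_dirs() {
    local base="${1:-/home/user}"
    mkdir -p "$base/models" "$base/cache" "$base/data"
}
```

Calling `setup_dirs` early in a start script guarantees downstream code can rely on the directories existing.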
### Development
- Use environment variables for configuration
- Provide sensible defaults
- Implement proper error handling
- Use HF Spaces secrets for sensitive data
- Test locally before deploying
- Document custom environment variables