# Hugging Face Deployment Guide
## Overview
This guide explains how to deploy the Lung Cancer Clinical Decision Support System to Hugging Face Spaces.
## Prerequisites
- Hugging Face account
- Git installed locally
- OpenAI API key (for the agent)
- GitHub Personal Access Token (for side effects storage)
## Deployment Steps
### 1. Create a New Hugging Face Space
1. Go to [Hugging Face Spaces](https://huggingface.co/spaces)
2. Click "Create new Space"
3. Configure:
- **Space name**: `moazx-api` (or your preferred name)
- **License**: Choose an appropriate license
- **SDK**: Docker
- **Hardware**: CPU Basic (or upgrade as needed)
### 2. Configure Environment Variables
In your Hugging Face Space settings, add these secrets:
```bash
OPENAI_API_KEY=your_openai_api_key_here
GITHUB_TOKEN=your_github_token_here
GITHUB_REPO=your_username/your_repo_name
GITHUB_BRANCH=main
PORT=7860
```
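For local testing, the same variables can be read from the environment before starting the app. A minimal sketch (variable names match the secrets above; the defaults for `GITHUB_BRANCH` and `PORT` are the values listed, the rest have no safe default):

```python
import os

# Read the deployment settings used by the Space, falling back to a
# default only where the list above specifies one.
def load_settings() -> dict:
    return {
        "openai_api_key": os.getenv("OPENAI_API_KEY", ""),
        "github_token": os.getenv("GITHUB_TOKEN", ""),
        "github_repo": os.getenv("GITHUB_REPO", ""),
        "github_branch": os.getenv("GITHUB_BRANCH", "main"),
        "port": int(os.getenv("PORT", "7860")),
    }

settings = load_settings()
print("serving on port", settings["port"])
```

Hugging Face injects Space secrets as environment variables at runtime, so the same code works locally and in the Space.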
### 3. Deploy the Application
#### Option A: Direct Push to Hugging Face
```bash
# Clone your Hugging Face Space repository
git clone https://huggingface.co/spaces/YOUR_USERNAME/moazx-api
cd moazx-api
# Copy all backend files
cp -r /path/to/backend/* .
# Add and commit
git add .
git commit -m "Initial deployment"
git push
```
#### Option B: Using Hugging Face CLI
```bash
# Install Hugging Face CLI
pip install huggingface_hub
# Login
huggingface-cli login
# Push to Space
huggingface-cli upload YOUR_USERNAME/moazx-api . --repo-type=space
```
### 4. Verify Deployment
1. Wait for the Space to build (check the logs)
2. Once running, test the API:
- Visit: `https://YOUR_USERNAME-moazx-api.hf.space`
- Check health: `https://YOUR_USERNAME-moazx-api.hf.space/health`
- View docs: `https://YOUR_USERNAME-moazx-api.hf.space/docs`
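The checks above can be scripted. A minimal sketch, assuming the default `USERNAME-SPACENAME.hf.space` URL scheme and that `/health` answers with HTTP 200 once the Space is up:

```python
import urllib.request

def space_base_url(username: str, space: str) -> str:
    # Docker Spaces are served at USERNAME-SPACENAME.hf.space (lowercased).
    return f"https://{username.lower()}-{space.lower()}.hf.space"

def check_health(base_url: str) -> bool:
    # True if /health returns HTTP 200, False on any error or timeout.
    try:
        with urllib.request.urlopen(f"{base_url}/health", timeout=10) as resp:
            return resp.status == 200
    except OSError:
        return False

# Performs a real request, so only run it once the Space is deployed:
# print(check_health(space_base_url("YOUR_USERNAME", "moazx-api")))
```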
### 5. Deploy Frontend
The frontend is configured to use the API at `https://moazx-api.hf.space`.
#### Option A: Serve from the same Space
The frontend files are already in the `/frontend` directory and will be served automatically.
#### Option B: Deploy to separate hosting
Deploy the frontend folder to:
- Netlify
- Vercel
- GitHub Pages
- Any static hosting service
## API Endpoints
Once deployed, your API will be available at:
```
Base URL: https://moazx-api.hf.space
Endpoints:
- GET / - API information
- GET /health - Health check
- GET /health/initialization - Initialization status
- POST /auth/login - User login
- POST /auth/logout - User logout
- GET /auth/status - Authentication status
- GET /ask - Ask a question (non-streaming)
- GET /ask/stream - Ask a question (streaming)
- GET /export/{format} - Export conversation
```
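As a quick smoke test of the `/ask` endpoint, a hedged sketch (the query-parameter name `question` is an assumption; confirm the actual schema in `/docs`):

```python
from urllib.parse import urlencode
import urllib.request

BASE_URL = "https://moazx-api.hf.space"

def build_ask_url(question: str, base_url: str = BASE_URL) -> str:
    # "question" is an assumed parameter name; verify it in the Swagger UI.
    return f"{base_url}/ask?{urlencode({'question': question})}"

# Real request, run only against a live deployment:
# with urllib.request.urlopen(build_ask_url("example query"), timeout=60) as r:
#     print(r.read().decode())
```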
## Frontend Configuration
The frontend is already configured to use the Hugging Face API:
```javascript
// In frontend/script.js
this.apiBase = 'https://moazx-api.hf.space';
```
## Authentication
The system uses session-based authentication:
1. Default credentials (change in production):
- Username: `admin`
- Password: `admin123`
2. To change credentials, update `api/routers/auth.py`
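The internals of `api/routers/auth.py` are not shown here, but one safer pattern when replacing the default credentials is to store a salted hash and compare in constant time. A hypothetical sketch (the salt and password below are placeholders):

```python
import hashlib
import secrets

def hash_password(password: str, salt: bytes) -> bytes:
    # PBKDF2-HMAC-SHA256; 100k iterations is a reasonable floor.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

SALT = b"replace-me"                        # generate with secrets.token_bytes(16)
STORED = hash_password("admin123", SALT)    # replace the default in production

def verify(username: str, password: str) -> bool:
    # compare_digest avoids leaking information through timing.
    return username == "admin" and secrets.compare_digest(
        hash_password(password, SALT), STORED
    )
```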
## Monitoring
Monitor your deployment:
1. **Hugging Face Space Logs**: Check the logs tab in your Space
2. **API Health**: Monitor `/health` endpoint
3. **Initialization Status**: Check `/health/initialization`
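Because initialization runs in the background, a deploy script can poll `/health/initialization` with backoff before routing traffic. A sketch under the assumption that the endpoint returns JSON like `{"initialized": true}` (check the actual response shape in `/docs`):

```python
import json
import time
import urllib.request

def backoff_delays(base: float = 2.0, cap: float = 60.0, tries: int = 6):
    # Exponential backoff: 2, 4, 8, ... seconds, capped at `cap`.
    return [min(base * 2 ** i, cap) for i in range(tries)]

def wait_until_initialized(base_url: str) -> bool:
    # Assumed response shape: {"initialized": true} once models are loaded.
    for delay in backoff_delays():
        try:
            url = f"{base_url}/health/initialization"
            with urllib.request.urlopen(url, timeout=10) as resp:
                if json.load(resp).get("initialized"):
                    return True
        except OSError:
            pass
        time.sleep(delay)
    return False
```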
## Troubleshooting
### Issue: Space fails to build
- Check Dockerfile syntax
- Verify all dependencies in requirements.txt
- Check Space logs for specific errors
### Issue: API returns 500 errors
- Verify environment variables are set correctly
- Check that OPENAI_API_KEY is valid
- Review application logs
### Issue: CORS errors in frontend
- Verify CORS middleware configuration in `api/middleware.py`
- Ensure frontend URL is in allowed origins
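One way to keep allowed origins configurable without editing code is to read them from an environment variable. A sketch (`ALLOWED_ORIGINS` is a hypothetical variable name, not one the app necessarily reads today):

```python
import os

def allowed_origins(default: str = "https://moazx-api.hf.space") -> list[str]:
    # ALLOWED_ORIGINS is a hypothetical comma-separated env var, e.g.
    # "https://example.netlify.app,https://example.vercel.app".
    raw = os.getenv("ALLOWED_ORIGINS", default)
    return [origin.strip() for origin in raw.split(",") if origin.strip()]

# The resulting list could then be passed as allow_origins=... to the
# CORS middleware configured in api/middleware.py.
```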
### Issue: Slow initialization
- The system loads models in the background
- Check `/health/initialization` for status
- Consider upgrading to better hardware tier
## Performance Optimization
### For Better Performance:
1. Upgrade to GPU hardware tier (for faster embeddings)
2. Use persistent storage for cached data
3. Enable CDN for frontend assets
### Memory Management:
- Current setup uses CPU-optimized models
- Faiss-cpu for vector search
- Sentence-transformers for embeddings
## Security Considerations
1. **Change default credentials** in production
2. **Rotate API keys** regularly
3. **Enable rate limiting** (already configured)
4. **Use HTTPS** (automatic on Hugging Face)
5. **Review CORS settings** for production
## Updating the Deployment
To update your deployment:
```bash
# Make changes locally
git add .
git commit -m "Update description"
git push
# Hugging Face will automatically rebuild
```
## Cost Considerations
- **Free tier**: CPU Basic (limited resources)
- **Paid tiers**: Better performance and reliability
- **API costs**: OpenAI API usage (pay per token)
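Since OpenAI bills per token, a back-of-envelope estimator can help budget usage. The rates in the example call are placeholders, not real prices; take current values from OpenAI's pricing page:

```python
def estimate_cost(prompt_tokens: int, completion_tokens: int,
                  prompt_rate: float, completion_rate: float) -> float:
    # Rates are USD per 1,000 tokens; prompt and completion tokens are
    # usually billed at different rates.
    return (prompt_tokens / 1000) * prompt_rate + \
           (completion_tokens / 1000) * completion_rate

# e.g. 1,500 prompt + 500 completion tokens at $0.01 / $0.03 per 1K
# (placeholder rates): estimate_cost(1500, 500, 0.01, 0.03) -> 0.03
```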
## Support
For issues:
1. Check Hugging Face Space logs
2. Review application logs at `/logs/app.log`
3. Test endpoints using `/docs` (Swagger UI)
## Additional Resources
- [Hugging Face Spaces Documentation](https://huggingface.co/docs/hub/spaces)
- [FastAPI Documentation](https://fastapi.tiangolo.com/)
- [Docker Documentation](https://docs.docker.com/)