nothingworry committed
Commit 916e12b · 1 Parent(s): 93e2b71

Deploy to HF Spaces

Files changed (5)
  1. .dockerignore +51 -0
  2. DEPLOY_HF_SPACES.md +232 -0
  3. Dockerfile +100 -0
  4. HF_SPACES_SUMMARY.md +139 -0
  5. README_HF_SPACES.md +114 -0
.dockerignore ADDED
@@ -0,0 +1,51 @@
```
# Virtual environment
venv/
env/
.venv/

# Python cache
__pycache__/
*.py[cod]
*$py.class
*.so
.Python

# Environment files with secrets
.env
.env.local
.env.*.local

# IDE files
.vscode/
.idea/
*.swp
*.swo
*~

# OS files
.DS_Store
Thumbs.db

# Git
.git/
.gitignore

# Documentation (optional - include if you want)
# *.md
# docs/

# Test files (optional)
# tests/
# test_*.py

# Logs
*.log
logs/

# Database files (optional - include if you want example data)
# data/*.db

# Temporary files
*.tmp
*.temp
```
DEPLOY_HF_SPACES.md ADDED
@@ -0,0 +1,232 @@
# Deploying IntegraChat to Hugging Face Spaces

This guide walks you through deploying IntegraChat as a Hugging Face Space.

## 📋 Prerequisites

1. **Hugging Face Account**: Sign up at [huggingface.co](https://huggingface.co)
2. **Hugging Face Token**: Get your token from [Settings → Access Tokens](https://huggingface.co/settings/tokens)
3. **Required Services** (configure via environment variables):
   - PostgreSQL with pgvector (for RAG storage)
   - Ollama (local LLM) or Groq API (cloud LLM)
   - Optional: Supabase (for production storage)
   - Optional: Google Custom Search API (for web search)

## 🚀 Step-by-Step Deployment

### Step 1: Create a New Space

1. Go to [https://huggingface.co/new-space](https://huggingface.co/new-space)
2. Fill in the form:
   - **Space name**: `integrachat` (or your preferred name)
   - **SDK**: Select **Docker**
   - **Hardware**: Choose based on your needs:
     - **CPU basic** - For testing (free tier)
     - **CPU upgrade** - Better performance (paid)
     - **GPU** - If you need GPU acceleration (paid)
   - **Visibility**: Public or Private
3. Click **Create Space**

### Step 2: Prepare Your Repository

Your project structure should look like this:

```
IntegraChat/
├── Dockerfile          ✅ (created)
├── .dockerignore       ✅ (created)
├── README_HF_SPACES.md ✅ (created)
├── requirements.txt    ✅ (already exists)
├── app.py              ✅ (already exists)
├── env.example         ✅ (already exists)
├── LICENSE             ✅ (already exists)
├── README.md           ✅ (already exists)
├── assets/
│   └── banner.png      ✅ (already exists)
├── backend/            ✅ (entire directory)
└── scripts/            ✅ (if you have any)
```

### Step 3: Push to Hugging Face

#### Option A: Using Git (Recommended)

```bash
# Initialize git if not already done
git init

# Add Hugging Face remote
git remote add hf https://huggingface.co/spaces/<your-username>/<space-name>

# Add all files (except venv)
git add Dockerfile .dockerignore README_HF_SPACES.md requirements.txt app.py env.example LICENSE README.md assets/ backend/ scripts/

# Commit
git commit -m "Initial commit for HF Spaces deployment"

# Push to Hugging Face
git push hf main
```

#### Option B: Using the Hugging Face Web Interface

1. Go to your Space page
2. Click the **Files and versions** tab
3. Click **Upload files**
4. Upload all necessary files (drag and drop or select files)

### Step 4: Configure Environment Variables

1. Go to your Space page
2. Click the **Settings** tab
3. Scroll to the **Repository secrets** section
4. Add the following environment variables:

#### Required Variables

```env
POSTGRESQL_URL=postgresql://user:password@host:port/database
OLLAMA_URL=http://your-ollama-server:11434
OLLAMA_MODEL=llama3.1:latest
```

**OR** (if using Groq instead of Ollama):

```env
GROQ_API_KEY=your_groq_api_key
LLM_BACKEND=groq
```

#### Optional Variables

```env
# Supabase (for production storage)
SUPABASE_URL=https://your-project.supabase.co
SUPABASE_SERVICE_KEY=your_service_role_key

# Google Custom Search (for web search)
GOOGLE_SEARCH_API_KEY=your_google_api_key
GOOGLE_SEARCH_CX_ID=your_search_engine_id

# MCP Server Configuration
MCP_PORT=8900
MCP_HOST=0.0.0.0

# API Configuration
API_PORT=8000
BACKEND_BASE_URL=http://localhost:8000

# Memory Configuration
MCP_MEMORY_MAX_ITEMS=10
MCP_MEMORY_TTL_SECONDS=900

# Logging
LOG_LEVEL=info
APP_ENV=production
```
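The two memory settings interact: one caps how many conversation turns are kept, the other expires them by age. Assuming `MCP_MEMORY_MAX_ITEMS` bounds the stored turns and `MCP_MEMORY_TTL_SECONDS` drops stale ones (the class below is an illustrative sketch, not IntegraChat's actual implementation), the effect looks like:

```python
import time
from collections import deque

class ShortTermMemory:
    """Sketch of a capped, TTL-bounded conversation memory.

    Assumes MCP_MEMORY_MAX_ITEMS limits the number of stored turns and
    MCP_MEMORY_TTL_SECONDS expires old ones; hypothetical, not the
    platform's real class.
    """

    def __init__(self, max_items=10, ttl_seconds=900):
        self.ttl = ttl_seconds
        self.items = deque(maxlen=max_items)  # oldest turns drop automatically

    def add(self, turn):
        self.items.append((time.monotonic(), turn))

    def recall(self):
        cutoff = time.monotonic() - self.ttl
        return [turn for ts, turn in self.items if ts >= cutoff]

memory = ShortTermMemory(max_items=3, ttl_seconds=900)
for msg in ["hi", "ingest doc", "search web", "summarize"]:
    memory.add(msg)
print(memory.recall())  # the oldest turn was evicted by the max_items cap
```

With `max_items=3`, adding a fourth turn evicts the oldest; lowering `ttl_seconds` would instead drop turns by age.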

### Step 5: Wait for Build

1. After pushing, Hugging Face automatically starts building your Docker image
2. You can monitor the build progress in the **Logs** tab
3. The first build typically takes 5-10 minutes
4. Once built, your Space will be available at:
   `https://huggingface.co/spaces/<your-username>/<space-name>`

### Step 6: Verify Deployment

1. **Check Logs**: Go to the **Logs** tab to see if all services started correctly
2. **Test UI**: Open your Space URL and verify the Gradio UI loads
3. **Test API**: Try accessing `https://<your-space-url>/api/docs` for the FastAPI docs
4. **Test Health**: Check `https://<your-space-url>/api/health` for backend health

## 🔧 Troubleshooting

### Build Fails

- **Check Dockerfile syntax**: Ensure all commands are valid
- **Check requirements.txt**: Verify all packages are available on PyPI
- **Check logs**: Review build logs for specific errors
- **Common issues**:
  - Missing system dependencies → Add them to `apt-get install` in the Dockerfile
  - Python version mismatch → Update `FROM python:3.10-slim` if needed
  - Port conflicts → Ensure ports 7860, 8000, and 8900 are exposed

### Services Not Starting

- **Check environment variables**: Ensure all required vars are set
- **Check service logs**: Review logs for MCP server and FastAPI errors
- **Database connection**: Verify the PostgreSQL URL is correct and accessible
- **LLM connection**: Verify the Ollama URL or Groq API key is valid

### UI Not Loading

- **Check Gradio logs**: Look for errors in the Logs tab
- **Check port binding**: Ensure Gradio binds to `0.0.0.0:7860`
- **Check backend connection**: Verify `BACKEND_BASE_URL` is correct

### API Endpoints Not Working

- **Check FastAPI logs**: Review backend startup logs
- **Check MCP server**: Ensure the MCP server is running on port 8900
- **Check CORS**: Verify CORS middleware is configured correctly
- **Check headers**: Ensure `x-tenant-id` and `x-user-role` headers are sent
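When debugging missing headers, it helps to build the request explicitly and inspect it before sending. This sketch uses Python's standard `urllib`; the endpoint path and payload shape are placeholders, not the confirmed API schema:

```python
import json
import urllib.request

# Build (but do not send) a chat request with the tenant headers attached.
# URL and payload are illustrative placeholders - adjust to your Space and schema.
req = urllib.request.Request(
    "http://localhost:8000/agent/message",
    data=json.dumps({"message": "hello"}).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "x-tenant-id": "demo-tenant",
        "x-user-role": "viewer",
    },
    method="POST",
)
print(req.get_header("X-tenant-id"))  # prints "demo-tenant"
```

Note that `urllib` normalizes header names by capitalizing only the first letter, so look the header up as `X-tenant-id` when inspecting the built request.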

## 📝 Important Notes

1. **Port Configuration**:
   - Hugging Face Spaces automatically maps port 7860 to the public URL
   - Internal services (FastAPI on 8000, MCP on 8900) are accessible within the container
   - Use `localhost` for inter-service communication

2. **Database Access**:
   - Your PostgreSQL database must be accessible from Hugging Face's servers
   - Consider using a cloud database (Supabase, AWS RDS, etc.)
   - Ensure firewall rules allow connections from Hugging Face IPs

3. **LLM Access**:
   - If using Ollama, it must be accessible from Hugging Face servers
   - Consider using the Groq API for cloud-based LLM access
   - Or use Hugging Face's Inference API

4. **Resource Limits**:
   - The free tier has CPU and memory limits
   - Consider upgrading for production use
   - Monitor resource usage in the Space settings

5. **Secrets Management**:
   - Never commit `.env` files with secrets
   - Use Hugging Face Space secrets for sensitive data
   - Use `env.example` as a template
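One way to back up the points above is a fail-fast check at startup, so a missing secret surfaces immediately in the build logs rather than as a confusing error later. A minimal sketch, using the variable names from the tables above (the function itself is illustrative, not part of the codebase):

```python
def check_required_settings(env):
    """Return the names of required settings that are missing.

    Sketch only: variable names follow the configuration tables above;
    adapt the rules to your own deployment.
    """
    missing = []
    if not env.get("POSTGRESQL_URL"):
        missing.append("POSTGRESQL_URL")
    if env.get("LLM_BACKEND", "ollama") == "groq":
        if not env.get("GROQ_API_KEY"):
            missing.append("GROQ_API_KEY")
    elif not env.get("OLLAMA_URL"):
        missing.append("OLLAMA_URL")
    return missing

# Example: a Groq deployment that forgot its API key.
env = {"POSTGRESQL_URL": "postgresql://u:p@host:5432/db", "LLM_BACKEND": "groq"}
print(check_required_settings(env))  # ['GROQ_API_KEY']
```

Calling this with `os.environ` at the top of the startup script (and raising if the list is non-empty) turns a silent misconfiguration into a one-line log message.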

## 🎯 Next Steps

1. **Customize README**: Update `README_HF_SPACES.md` with your specific details
2. **Add Banner**: Upload a banner image to `assets/banner.png`
3. **Test Thoroughly**: Test all features in the deployed environment
4. **Monitor Usage**: Check Space analytics for usage patterns
5. **Update Documentation**: Keep documentation in sync with deployment

## 📚 Additional Resources

- [Hugging Face Spaces Documentation](https://huggingface.co/docs/hub/spaces)
- [Docker on Hugging Face Spaces](https://huggingface.co/docs/hub/spaces-sdks-docker)
- [Environment Variables Guide](https://huggingface.co/docs/hub/spaces-config#environment-variables)

## ✅ Checklist

Before deploying, ensure:

- [ ] Dockerfile is present and valid
- [ ] .dockerignore excludes venv and .env
- [ ] requirements.txt includes all dependencies
- [ ] Environment variables are documented
- [ ] Database is accessible from Hugging Face servers
- [ ] LLM service is configured (Ollama or Groq)
- [ ] README_HF_SPACES.md is customized
- [ ] All code is committed and pushed

---

**Need Help?** Check the [Troubleshooting](#-troubleshooting) section or open an issue in the repository.
Dockerfile ADDED
@@ -0,0 +1,100 @@
```dockerfile
FROM python:3.10-slim

WORKDIR /app

# Install system dependencies
RUN apt-get update && apt-get install -y \
    gcc \
    g++ \
    postgresql-client \
    curl \
    && rm -rf /var/lib/apt/lists/*

# Copy requirements and install Python dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy application code
COPY . .

# Create data directory for SQLite fallback
RUN mkdir -p /app/data

# Expose ports
# - 7860: Gradio UI (default HF Spaces port)
# - 8000: FastAPI backend
# - 8900: MCP server
EXPOSE 7860 8000 8900

# Set environment variables with defaults
ENV PYTHONPATH=/app
ENV API_PORT=8000
ENV MCP_PORT=8900
ENV BACKEND_BASE_URL=http://localhost:8000
ENV RAG_MCP_URL=http://localhost:8900/rag
ENV WEB_MCP_URL=http://localhost:8900/web
ENV ADMIN_MCP_URL=http://localhost:8900/admin

# Create startup script that runs all services.
# (Docker's parser joins the backslash-continued lines into one RUN; the
# image's /bin/sh (dash) then expands the \n escapes inside echo.)
RUN echo '#!/bin/bash\n\
set -e\n\
\n\
# Function to wait for a service to be ready\n\
wait_for_service() {\n\
    local url=$1\n\
    local max_attempts=30\n\
    local attempt=0\n\
\n\
    echo "Waiting for $url to be ready..."\n\
    while [ $attempt -lt $max_attempts ]; do\n\
        if curl -s -f "$url" > /dev/null 2>&1; then\n\
            echo "$url is ready!"\n\
            return 0\n\
        fi\n\
        attempt=$((attempt + 1))\n\
        sleep 1\n\
    done\n\
\n\
    echo "Warning: $url did not become ready after $max_attempts attempts"\n\
    return 1\n\
}\n\
\n\
# Start MCP server in background\n\
echo "Starting MCP server on port $MCP_PORT..."\n\
python -m backend.mcp_server.server &\n\
MCP_PID=$!\n\
\n\
# Wait for MCP server to be ready\n\
sleep 5\n\
if ! kill -0 $MCP_PID 2>/dev/null; then\n\
    echo "Error: MCP server failed to start"\n\
    exit 1\n\
fi\n\
\n\
# Start FastAPI backend in background\n\
echo "Starting FastAPI backend on port $API_PORT..."\n\
uvicorn backend.api.main:app --host 0.0.0.0 --port $API_PORT &\n\
API_PID=$!\n\
\n\
# Wait for FastAPI to be ready\n\
sleep 5\n\
if ! kill -0 $API_PID 2>/dev/null; then\n\
    echo "Error: FastAPI backend failed to start"\n\
    exit 1\n\
fi\n\
\n\
# Wait for health check\n\
wait_for_service "http://localhost:$API_PORT/health" || true\n\
\n\
# Start Gradio UI (foreground - this is the main process)\n\
echo "Starting Gradio UI on port 7860..."\n\
echo "All services are running:"\n\
echo "  - MCP Server:      http://localhost:$MCP_PORT"\n\
echo "  - FastAPI Backend: http://localhost:$API_PORT"\n\
echo "  - Gradio UI:       http://localhost:7860"\n\
python app.py\n\
' > /app/start.sh && chmod +x /app/start.sh

# Use the startup script as the container entrypoint
CMD ["/app/start.sh"]
```
HF_SPACES_SUMMARY.md ADDED
@@ -0,0 +1,139 @@
# ✅ Hugging Face Spaces Deployment - Complete!

Your IntegraChat project is now ready to deploy to Hugging Face Spaces! Here's what has been created:

## 📦 Files Created

### 1. **Dockerfile** ✅
- Multi-service Docker container
- Runs the MCP server, FastAPI backend, and Gradio UI
- Includes health checks and service coordination
- Optimized for Hugging Face Spaces

### 2. **.dockerignore** ✅
- Excludes `venv/`, `.env`, and other unnecessary files
- Reduces build size and prevents secret leaks

### 3. **README_HF_SPACES.md** ✅
- Optimized README for Hugging Face Spaces
- Includes Space metadata (emoji, colors, SDK type)
- Quick start guide and feature overview

### 4. **DEPLOY_HF_SPACES.md** ✅
- Complete step-by-step deployment guide
- Troubleshooting section
- Environment variable configuration
- Best practices

## 🚀 Quick Deployment Steps

1. **Create Space**: Go to [huggingface.co/new-space](https://huggingface.co/new-space)
   - Choose **Docker** as the SDK
   - Set hardware (CPU basic for testing, upgrade for production)

2. **Push Code**:
   ```bash
   git init
   git remote add hf https://huggingface.co/spaces/<username>/<space-name>
   git add Dockerfile .dockerignore README_HF_SPACES.md requirements.txt app.py env.example LICENSE README.md assets/ backend/ scripts/
   git commit -m "Deploy to HF Spaces"
   git push hf main
   ```

3. **Configure Secrets**: In Space Settings → Repository secrets, add:
   - `POSTGRESQL_URL`
   - `OLLAMA_URL` (or `GROQ_API_KEY`)
   - `OLLAMA_MODEL`
   - Optional: `SUPABASE_URL`, `SUPABASE_SERVICE_KEY`, etc.

4. **Wait for Build**: Monitor build progress in the Logs tab (5-10 minutes)

5. **Access Your Space**: `https://huggingface.co/spaces/<username>/<space-name>`

## 🏗️ Architecture in Docker

The Dockerfile runs three services:

```
┌──────────────────────────────────────┐
│           Docker Container           │
│                                      │
│  ┌──────────────┐                    │
│  │  MCP Server  │  Port 8900         │
│  │ (Background) │                    │
│  └──────────────┘                    │
│                                      │
│  ┌──────────────┐                    │
│  │   FastAPI    │  Port 8000         │
│  │ (Background) │                    │
│  └──────────────┘                    │
│                                      │
│  ┌──────────────┐                    │
│  │  Gradio UI   │  Port 7860         │
│  │ (Foreground) │  ← Main Entry      │
│  └──────────────┘                    │
└──────────────────────────────────────┘
```

## 📋 What's Included

- ✅ **Dockerfile** - Production-ready multi-service container
- ✅ **.dockerignore** - Excludes unnecessary files
- ✅ **README_HF_SPACES.md** - Space-optimized documentation
- ✅ **DEPLOY_HF_SPACES.md** - Complete deployment guide
- ✅ **Health checks** - Service readiness verification
- ✅ **Error handling** - Graceful service startup
- ✅ **Environment variables** - Configurable via HF Space settings

## 🔧 Configuration

All configuration is done via environment variables in the Hugging Face Space settings:

### Required
- `POSTGRESQL_URL` - Database connection
- `OLLAMA_URL` + `OLLAMA_MODEL` - OR `GROQ_API_KEY`

### Optional
- `SUPABASE_URL` + `SUPABASE_SERVICE_KEY` - Production storage
- `GOOGLE_SEARCH_API_KEY` + `GOOGLE_SEARCH_CX_ID` - Web search
- `MCP_PORT`, `API_PORT` - Service ports (defaults work)

## 📚 Next Steps

1. **Review Files**: Check that all created files match your needs
2. **Test Locally**: Build the Docker image locally to test:
   ```bash
   docker build -t integrachat .
   docker run -p 7860:7860 -p 8000:8000 -p 8900:8900 integrachat
   ```
3. **Deploy**: Follow the steps in `DEPLOY_HF_SPACES.md`
4. **Monitor**: Check logs and analytics after deployment

## 🎯 Key Features

- ✅ **Multi-service orchestration** - All services run in one container
- ✅ **Health checks** - Services wait for each other to be ready
- ✅ **Error handling** - Graceful failures and logging
- ✅ **Production-ready** - Optimized for Hugging Face Spaces
- ✅ **Configurable** - All settings via environment variables

## 💡 Tips

1. **First Build**: May take 10-15 minutes (downloads dependencies)
2. **Subsequent Builds**: Faster (cached layers)
3. **Logs**: Check the Logs tab for detailed startup information
4. **Database**: Ensure PostgreSQL is accessible from HF servers
5. **LLM**: Consider the Groq API for a cloud-based LLM (no local server needed)

## 🆘 Need Help?

- Check `DEPLOY_HF_SPACES.md` for detailed troubleshooting
- Review the Dockerfile comments for service configuration
- Check the Hugging Face Spaces documentation for platform-specific issues

---

**Your project is ready to deploy! 🚀**

Follow the steps in `DEPLOY_HF_SPACES.md` to get started.
README_HF_SPACES.md ADDED
@@ -0,0 +1,114 @@
---
title: IntegraChat
emoji: 🤖
colorFrom: blue
colorTo: purple
sdk: docker
app_port: 7860
pinned: false
license: mit
---

# IntegraChat — Enterprise MCP Autonomous Agent Platform

**IntegraChat** is an enterprise-grade, multi-tenant AI platform that demonstrates the full capabilities of the **Model Context Protocol (MCP)** in a production-style environment.

## 🚀 Quick Start

This Hugging Face Space runs the complete IntegraChat stack:
- **Gradio UI** on port 7860 (main interface)
- **FastAPI Backend** on port 8000 (API endpoints)
- **Unified MCP Server** on port 8900 (RAG/Web/Admin tools)

## ✨ Features

- 🤖 **Autonomous Multi-Step MCP Agents** – Intelligent tool-aware agent with conversation memory
- 📚 **Enhanced Knowledge Base Management** – Upload documents (PDF/DOCX/TXT/MD) with AI-generated metadata
- 🔍 **Optimized RAG Search** – Cross-encoder re-ranking for improved accuracy
- 🛡️ **Enterprise Admin Governance** – Regex-based red-flag detection with LLM enhancement
- 📊 **Comprehensive Analytics** – Real-time visualizations and tenant-level metrics
- 🌐 **Live Web Search** – Google Programmable Search integration
- 🏢 **Multi-Tenant Isolation** – Complete tenant isolation with role-based access control

## 📖 Usage

1. **Enter Tenant ID**: Set your tenant ID in the UI (top of the page)
2. **Select Role**: Choose your role (viewer, editor, admin, owner) from the dropdown
3. **Start Chatting**: Use the Chat tab to interact with the agent
4. **Ingest Documents**: Upload documents in the Document Ingestion tab (requires editor+ role)
5. **Manage Rules**: Add admin rules in the Admin Rules tab (requires admin+ role)
6. **View Analytics**: Check analytics in the Admin Analytics tab

## 🔧 Configuration

Set these environment variables in your Hugging Face Space settings:

### Required
- `POSTGRESQL_URL` - PostgreSQL connection string with the pgvector extension
- `OLLAMA_URL` - Ollama server URL (or use the Groq API)
- `OLLAMA_MODEL` - Model name (e.g., `llama3.1:latest`)

### Optional
- `SUPABASE_URL` - Supabase project URL (for production storage)
- `SUPABASE_SERVICE_KEY` - Supabase service role key
- `GOOGLE_SEARCH_API_KEY` - Google Custom Search API key
- `GOOGLE_SEARCH_CX_ID` - Google Custom Search Engine ID
- `GROQ_API_KEY` - Groq API key (alternative to Ollama)
- `LLM_BACKEND` - `ollama` or `groq` (default: `ollama`)

## 📚 API Endpoints

The FastAPI backend is available at `/api` (relative to the Space URL):

- `POST /api/agent/message` - Main chat endpoint
- `POST /api/rag/ingest-document` - Ingest documents
- `GET /api/rag/list` - List documents
- `POST /api/admin/rules` - Manage admin rules
- `GET /api/analytics/overview` - View analytics

Full API docs are available at `/api/docs` when the backend is running.

## 🏗️ Architecture

```
┌─────────────────┐
│   Gradio UI     │  Port 7860
│  (Main Entry)   │
└────────┬────────┘
         │
         ▼
┌─────────────────┐
│ FastAPI Backend │  Port 8000
└────────┬────────┘
         │
         ├──► MCP Server (Port 8900)
         │     ├── RAG Tools
         │     ├── Web Tools
         │     └── Admin Tools
         │
         ├──► PostgreSQL (RAG)
         ├──► Supabase/SQLite (Rules & Analytics)
         └──► LLM (Ollama/Groq)
```

## 🔐 Role-Based Access Control

- **viewer** - Basic chat access
- **editor** - Can ingest documents
- **admin** - Can manage rules and delete documents
- **owner** - Full system access
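The roles appear to form a strict hierarchy (the Usage section speaks of "editor+" and "admin+"). Assuming each role inherits everything below it, the permission check can be sketched as follows (illustrative, not the backend's actual code):

```python
# Least to most privileged, per the role list above (assumed ordering).
ROLE_ORDER = ["viewer", "editor", "admin", "owner"]

def has_at_least(user_role: str, required_role: str) -> bool:
    """True if user_role meets or exceeds required_role in the hierarchy."""
    return ROLE_ORDER.index(user_role) >= ROLE_ORDER.index(required_role)

print(has_at_least("editor", "viewer"))  # True: editors can also chat
print(has_at_least("editor", "admin"))   # False: rule management is denied
```

Under this model, an endpoint such as document ingestion would simply check `has_at_least(role, "editor")` on the incoming `x-user-role` header.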

## 📝 License

MIT License - see [LICENSE](LICENSE) file for details.

## 🔗 Links

- [Model Context Protocol](https://modelcontextprotocol.io/)
- [Full Documentation](README.md)
- [Backend Documentation](backend/README.md)

---

**Made with ❤️ for the MCP Hackathon**