FocusFlow Assistant committed
Commit d5f1e20 · 1 Parent(s): 8cab7c7

Add Hugging Face Spaces deployment configuration

- Dockerfile: Multi-stage Docker build for HF Spaces
- README.md: Space metadata and feature description
- DEPLOYMENT.md: Complete guide for local and cloud deployment
- Updated .streamlit/config.toml for production

Ready to deploy as public demo on Hugging Face Spaces

Files changed (4):

1. .streamlit/config.toml (+7 -7)
2. DEPLOYMENT.md (+176 -0)
3. Dockerfile (+40 -0)
4. README.md (+59 -0)
.streamlit/config.toml CHANGED
```diff
@@ -1,8 +1,8 @@
-[theme]
-base="light"
-primaryColor="#3B82F6"
-backgroundColor="#FAFAFA"
-secondaryBackgroundColor="#F3F4F6"
-textColor="#1F2937"
-font="sans serif"
+[server]
+headless = true
+port = 8501
+enableCORS = false
+enableXsrfProtection = false
+
+[browser]
+gatherUsageStats = false
```
DEPLOYMENT.md ADDED
@@ -0,0 +1,176 @@

# FocusFlow Deployment Guide

## Local Deployment (Offline Mode - Default)

**Best for**: Personal study, maximum privacy, offline access

### Prerequisites

- Python 3.10+
- Ollama installed
- 4GB+ RAM

### Setup

```bash
# Clone repository
git clone https://github.com/thesivarohith/hack.git
cd hack

# Create virtual environment
python3 -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Install Ollama models
ollama pull llama3.2:1b
ollama pull nomic-embed-text
```

### Run

```bash
# Terminal 1 - Start backend
uvicorn backend.main:app --host 0.0.0.0 --port 8000 --reload

# Terminal 2 - Start frontend
streamlit run app.py
```

Access at: `http://localhost:8501`

---
## Hugging Face Spaces Deployment (Cloud Demo)

**Best for**: Sharing publicly, showcasing

### Prerequisites

- Hugging Face account (free)
- Hugging Face API token

### Step 1: Get HF API Token

1. Go to [huggingface.co/settings/tokens](https://huggingface.co/settings/tokens)
2. Click "New token"
3. Name: `focusflow`
4. Type: **Read**
5. Copy the token: `hf_xxxxx`

### Step 2: Create Space

1. Go to [huggingface.co](https://huggingface.co)
2. Click profile → **New Space**
3. Settings:
   - **Name**: `focusflow`
   - **License**: MIT
   - **SDK**: Docker
   - **Hardware**: CPU basic (free)
4. Click **Create Space**

### Step 3: Configure Environment

1. In your Space, go to **Settings** → **Variables**
2. Add secret:
   - Key: `HUGGINGFACE_API_TOKEN`
   - Value: `hf_xxxxx` (your token)

79
+ ### Step 4: Push Code
80
+
81
+ ```bash
82
+ # Clone your space
83
+ git clone https://huggingface.co/spaces/YOUR_USERNAME/focusflow
84
+ cd focusflow
85
+
86
+ # Copy files from this repo
87
+ cp -r /path/to/hack/* .
88
+
89
+ # Add and commit
90
+ git add .
91
+ git commit -m "Initial deployment"
92
+ git push
93
+ ```
94
+
95
+ ### Step 5: Wait for Build
96
+
97
+ - Build takes ~10-15 minutes
98
+ - Watch logs in the Space
99
+ - Your app will be live at: `https://huggingface.co/spaces/YOUR_USERNAME/focusflow`
100
+
101
+ ---
## Comparison

| Feature | Local (Ollama) | Cloud (HF Spaces) |
|---------|----------------|-------------------|
| **Internet** | ❌ Not required | ✅ Required |
| **Model** | llama3.2:1b (1B params) | Llama-3-8B (8B params) |
| **Speed** | ⚡ Very fast | 🐢 Slower (CPU) |
| **Privacy** | 🔒 100% private | 🌐 Public demo |
| **Cost** | 💰 Free | 💰 Free |
| **Best For** | Daily studying | Sharing/demos |

---
## Switching Modes Locally

### Test Cloud Mode Locally

```bash
# Set environment variables
export LLM_PROVIDER=huggingface
export HUGGINGFACE_API_TOKEN=hf_xxxxx

# Run normally
streamlit run app.py
```

### Back to Local Mode

```bash
# Unset the variable, or just restart the terminal
unset LLM_PROVIDER
streamlit run app.py
```

---

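A mode switch like this typically boils down to one environment lookup with a safe default. A minimal sketch; the helper `select_provider` and the `"ollama"` default are assumptions about the codebase, not taken from it:

```python
import os

def select_provider() -> str:
    """Pick the LLM backend from LLM_PROVIDER, defaulting to local Ollama."""
    # Hypothetical switch: unset/empty means offline mode.
    provider = os.environ.get("LLM_PROVIDER", "ollama").strip().lower() or "ollama"
    if provider not in ("ollama", "huggingface"):
        raise ValueError(f"Unknown LLM_PROVIDER: {provider!r}")
    return provider
```

Defaulting to the local provider keeps the privacy-preserving mode the out-of-the-box behavior, matching the guide's recommendation.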
## Troubleshooting

### Local Mode Issues

**Ollama not found:**
```bash
# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh
ollama pull llama3.2:1b
ollama pull nomic-embed-text
```

**Port already in use:**
```bash
# Find and kill the process on port 8501
lsof -ti:8501 | xargs kill -9
```
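Before killing anything, it can help to confirm the port is actually occupied. A small stdlib sketch; the helper name is illustrative:

```python
import socket

def port_is_free(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if nothing is accepting connections on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        # connect_ex returns 0 when a connection succeeds (port in use)
        return s.connect_ex((host, port)) != 0
```

For example, `port_is_free(8501)` returning `False` means Streamlit (or something else) is still listening there.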

### Cloud Mode Issues

**API token error:**
- Make sure `HUGGINGFACE_API_TOKEN` is set in the HF Space variables
- The token must have at least **Read** permission

**Slow responses:**
- This is normal on the free CPU tier
- Responses take 10-30 seconds (vs. near-instant locally)

---

## Support

- **GitHub Issues**: [github.com/thesivarohith/hack/issues](https://github.com/thesivarohith/hack/issues)
- **Documentation**: See [TECHNICAL_DOCUMENTATION.md](./TECHNICAL_DOCUMENTATION.md)

---

**Recommended**: Use local deployment for daily studying, cloud deployment only for demos/sharing.
Dockerfile ADDED
@@ -0,0 +1,40 @@

```dockerfile
FROM python:3.10-slim

WORKDIR /app

# Install system dependencies
RUN apt-get update && apt-get install -y \
    build-essential \
    curl \
    && rm -rf /var/lib/apt/lists/*

# Copy requirements and install Python dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy application code
COPY . .

# Create data directories
RUN mkdir -p data chroma_db

# Expose ports for backend (8000) and frontend (8501)
EXPOSE 8501 8000

# Set environment to use Hugging Face
ENV LLM_PROVIDER=huggingface

# Create startup script (printf '%s\n' writes one line per argument,
# avoiding shell-dependent handling of \n escapes in echo)
RUN printf '%s\n' \
    '#!/bin/bash' \
    '# Start FastAPI backend in background' \
    'uvicorn backend.main:app --host 0.0.0.0 --port 8000 &' \
    '' \
    '# Wait for backend to start' \
    'sleep 2' \
    '' \
    '# Start Streamlit frontend' \
    'streamlit run app.py --server.port 8501 --server.address 0.0.0.0 --server.headless true' \
    > /app/start.sh && chmod +x /app/start.sh

# Run startup script
CMD ["/app/start.sh"]
```
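The fixed `sleep 2` in the startup script is a race: if the backend takes longer to bind its port, Streamlit starts against a dead API. A more robust gate polls the port until it accepts connections. A minimal stdlib sketch, not part of the commit:

```python
import socket
import time

def wait_for_port(port: int, host: str = "127.0.0.1", timeout: float = 30.0) -> bool:
    """Poll host:port until something accepts connections, or time out."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            if s.connect_ex((host, port)) == 0:  # 0 means connected
                return True
        time.sleep(0.2)
    return False
```

The startup script could then run something like `python -c "from wait import wait_for_port; exit(0 if wait_for_port(8000) else 1)"` between launching uvicorn and Streamlit (module name assumed for illustration).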
README.md ADDED
@@ -0,0 +1,59 @@

---
title: FocusFlow
emoji: 📚
colorFrom: blue
colorTo: purple
sdk: docker
app_port: 8501
pinned: false
---

# 📚 FocusFlow - AI Study Companion

An intelligent study assistant powered by AI that transforms your learning materials into personalized, adaptive study experiences.

## ✨ Features

- **📖 Multi-Subject Study Planning**: Upload PDFs and get automated multi-day study plans
- **💬 RAG-Powered Q&A**: Ask questions and get answers with source citations
- **📝 Adaptive Quizzes**: Context-based quizzes that adapt to your performance
- **📊 Progress Tracking**: Track mastery levels and quiz history
- **🎯 Smart Day Progression**: Automatically unlocks new topics as you complete them
- **🔍 Source Citations**: Every answer cites the exact source and page number

## 🤖 Models Used

- **LLM**: Meta-Llama-3-8B-Instruct (via Hugging Face Inference API)
- **Embeddings**: nomic-embed-text (for semantic search)

## 🚀 How to Use

1. **Upload PDFs**: Add your study materials in the Sources panel
2. **Generate Plan**: Ask the Calendar to create a study plan (e.g., "Make a 5-day plan")
3. **Study**: Click on topics to view lessons and ask questions
4. **Take Quizzes**: Test your knowledge and unlock new topics

## 📝 Demo Note

This is a **cloud demo version** running on Hugging Face Spaces using the Llama-3-8B model.

**For offline/local use** with enhanced privacy and llama3.2:1b (no internet required):
- 📂 [GitHub Repository](https://github.com/thesivarohith/hack)
- 📖 [Local Setup Guide](https://github.com/thesivarohith/hack/blob/main/RUN_GUIDE.md)

The local version works completely offline and keeps all your data private on your machine.

## 🛠️ Tech Stack

- **Frontend**: Streamlit
- **Backend**: FastAPI + LangChain
- **Vector DB**: ChromaDB
- **LLM**: Hugging Face Inference API

## 📄 License

MIT License - See [LICENSE](https://github.com/thesivarohith/hack/blob/main/LICENSE) for details.

---

Built with ❤️ for better learning experiences