Ryanfafa committed on
Commit 9efdf3f · verified · 1 Parent(s): c3194eb

Update rag_engine.py

Files changed (1)
  1. rag_engine.py +42 -587
rag_engine.py CHANGED
@@ -1,341 +1,9 @@
-
- I want to work as an AI engineer, but I need to implement and deploy some practical projects to show my skills on my online portfolio website and add live demos. What projects do you suggest I build?
-
- 01:15
-
- Reading frontend design skill for creating a good portfolio project showcase
- Great question! Let me give you a well-thought-out roadmap of projects that will genuinely impress hiring managers and demonstrate real AI engineering skills — not just API wrappers.
-
- Here's what I recommend, organized by the core skills they showcase:
-
- 1. RAG-Powered Document Q&A System Build a system where users upload PDFs/docs and chat with them. This shows you understand embeddings, vector databases (Pinecone, Chroma, Qdrant), chunking strategies, retrieval pipelines, and prompt engineering. Use LangChain or LlamaIndex + a frontend. This is probably the #1 project recruiters look for (a minimal retrieval sketch follows this list).
-
- 2. Fine-Tuned Model for a Niche Task Pick a specific domain (legal clause classification, medical note summarization, code review) and fine-tune an open-source model (Mistral, LLaMA, Phi) using LoRA/QLoRA on Hugging Face. Deploy it with a Gradio or FastAPI demo. This shows you can go beyond prompting and actually train/adapt models.
-
- 3. Multi-Agent System Build a team of agents that collaborates on a complex task — e.g., a research agent that searches the web, a writer agent that drafts content, and an editor agent that refines it. Use LangGraph, CrewAI, or AutoGen. This signals you understand agentic architectures, tool use, and orchestration.
-
- 4. Real-Time AI Voice/Chat Assistant with Memory Build a chatbot with persistent memory across sessions (using something like Mem0 or a simple vector store). Add a voice interface with Whisper for speech-to-text and ElevenLabs or an open-source TTS model for speech output. This demonstrates multimodal handling and stateful AI systems.
-
- 5. MLOps Pipeline with Model Monitoring Deploy a model with a full pipeline: training → evaluation → versioning (MLflow) → containerization (Docker) → deployment (FastAPI on AWS/GCP/Render) → drift monitoring. This is critical because many "AI engineers" skip this, and it's a massive differentiator.
-
- 6. AI-Powered Data Analysis Agent Build a natural language interface to a SQL database or CSV files. The user asks questions in plain English and gets charts + insights back. Use a Code Interpreter-style pattern. Tools: LangChain's SQL agent, Pandas, Plotly, Streamlit.
-
- 7. Semantic Search Engine Build a search engine for a dataset (news articles, recipes, job listings) that uses embedding-based similarity instead of keyword search. Show a live comparison between traditional keyword search and your semantic approach. Demonstrates understanding of embeddings and information retrieval.
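A minimal sketch of the retrieval core behind project 1, using the stack this thread later settles on (sentence-transformers + ChromaDB); the collection name and sample texts are illustrative assumptions, not code from this repo:

```python
# Minimal RAG retrieval sketch (illustrative only; not the project's exact code)
import chromadb
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
client = chromadb.Client()
collection = client.create_collection("demo_docs")

# Index a few document chunks as embedding vectors
chunks = ["RAG combines retrieval with generation.", "ChromaDB stores embedding vectors."]
collection.add(
    ids=[str(i) for i in range(len(chunks))],
    documents=chunks,
    embeddings=embedder.encode(chunks).tolist(),
)

# Retrieve the chunks most similar to a question, then pass them to an LLM as context
question = "What does RAG do?"
result = collection.query(query_embeddings=embedder.encode([question]).tolist(), n_results=2)
print(result["documents"][0])
```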
-
- A few strategic tips:
-
- Each project should have a clean README on GitHub, a live demo link (Hugging Face Spaces, Vercel, Render, or Streamlit Cloud are all free), and a short write-up explaining your architectural decisions and trade-offs — not just what you built, but why you made certain choices. That's what separates a strong portfolio from a list of side projects.
-
- For your portfolio website itself, lead with the live demos rather than code screenshots. Recruiters spend 30 seconds on a portfolio — make the demos one-click accessible.
-
- Would you like me to help you build out the live demo page for your portfolio, or dive deeper into how to implement any of these specific projects?
-
- Now let's make this project:
- 1. RAG-Powered Document Q&A System Build a system where users upload PDFs/docs and chat with them. This shows you understand embeddings, vector databases (Pinecone, Chroma, Qdrant), chunking strategies, retrieval pipelines, and prompt engineering. Use LangChain or LlamaIndex + a frontend. If it requires data, it should download it automatically and train itself. I just want to push your code to HuggingFace and run it live.
-
- 01:18
- Let me ask a few quick questions before we start building so the project is perfectly tailored for you!
-
- Q: Which vector database do you want to use?
- A: ChromaDB (local, no setup)
-
- Q: Which LLM backend do you want to power the Q&A?
- A: HuggingFace open-source model (free, no API key)
-
- Q: What kind of UI do you want?
- A: Streamlit (clean & popular)
-
- 01:18
- Perfect choices! 100% free, no API keys needed, and deploys beautifully on HuggingFace Spaces. Let me build the full project now.
-
- Check available skills and set up workspace
- Now let me build the complete project — all files:
-
- Created 7 files
-
- Main Streamlit app - the RAG Q&A system
- app.py
-
- RAG engine - the core logic: embeddings, ChromaDB, retrieval, LLM generation
- rag_engine.py
-
- Auto data downloader - downloads a sample AI report PDF for demo purposes
- data_downloader.py
-
- requirements.txt - all dependencies for the project
- requirements.txt
-
- HuggingFace Spaces README / configuration file
- README.md
-
- packages.txt for HuggingFace Spaces system dependencies
- packages.txt
-
- .gitignore to keep the repo clean
- .gitignore
- Done
- Now let's verify all files look correct:
-
- Verify all project files exist with correct line counts
- Now copy everything to outputs:
-
- Ran a command, read a file
- All 6 deployable files are ready. Here's everything you need to deploy:
-
- 📁 What was built (980 lines total)
-
- | File | Purpose |
- |------|---------|
- | app.py | Streamlit UI - dark themed, chat interface, stats, sidebar |
- | rag_engine.py | Core RAG pipeline: chunking → embeddings → ChromaDB → MMR retrieval → Zephyr-7B |
- | data_downloader.py | Auto-downloads arXiv PDFs; generates a rich AI overview doc as fallback |
- | requirements.txt | All Python dependencies |
- | packages.txt | System libs for HuggingFace Spaces (poppler for PDF) |
- | README.md | Architecture diagram + setup instructions |
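data_downloader.py itself never appears in this diff; a plausible sketch of the download-with-fallback behavior described above, where the URL, filenames, and fallback text are pure assumptions:

```python
# Hypothetical sketch of data_downloader.py's behavior (the real file is not shown in this commit):
# fetch a sample PDF for the demo, falling back to a locally generated text file.
import os
import requests

SAMPLE_URL = "https://arxiv.org/pdf/1706.03762"  # assumption: any public PDF works as a demo doc
FALLBACK_TEXT = "A short generated overview of AI..."  # placeholder content

def download_sample(dest_dir: str = "./data") -> str:
    os.makedirs(dest_dir, exist_ok=True)
    pdf_path = os.path.join(dest_dir, "sample.pdf")
    try:
        resp = requests.get(SAMPLE_URL, timeout=30)
        resp.raise_for_status()
        with open(pdf_path, "wb") as f:
            f.write(resp.content)
        return pdf_path
    except Exception:
        # Network failed: write the fallback document instead
        txt_path = os.path.join(dest_dir, "sample.txt")
        with open(txt_path, "w", encoding="utf-8") as f:
            f.write(FALLBACK_TEXT)
        return txt_path
```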
- 🚀 Deploy to HuggingFace Spaces (3 steps)
- Step 1 — Create the Space:
-
- Go to huggingface.co/new-space
- Name it docmind-ai, choose Streamlit, set to Public
- Click "Create Space"
- Step 2 — Push the files:
-
- ```bash
- git clone https://huggingface.co/spaces/YOUR_USERNAME/docmind-ai
- cd docmind-ai
-
- # Copy all 6 files into this folder, then:
- git add .
- git commit -m "🧠 DocMind AI - RAG Q&A System"
- git push
- ```
- Step 3 — (Optional) Add an HF token for higher rate limits: in your Space → Settings → Secrets → add HF_TOKEN = your HuggingFace token (free at huggingface.co/settings/tokens)
-
- HuggingFace will auto-install everything from requirements.txt and launch the app. The first build takes ~3 minutes.
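requirements.txt is also never printed in this log; judging from the imports visible in rag_engine.py below, it plausibly contains something like the following (an inference, not the committed file):

```
streamlit
langchain
langchain-community
chromadb
sentence-transformers
pypdf
requests
```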
-
- [Attached: app.py, rag_engine.py, data_downloader.py, requirements.txt, README.md, packages.txt]
- [Screenshot: 1771338283280_image.png]
- No option for Streamlit. Provide a step-by-step guideline.
-
- 01:25
- I can see the issue — HuggingFace changed their UI and now shows only Gradio, Docker, and Static. Streamlit now runs inside Docker. Here's exactly what to do:
-
- 🚀 Step-by-Step: Deploy DocMind AI on HuggingFace Spaces
- Step 1 — Choose Docker (not Gradio)
- On the screen you're seeing, click Docker (the middle option, with 17 templates)
-
- Step 2 — Select the "Blank" template
- After clicking Docker, it will show Docker templates — select Blank
-
- Step 3 — Fill in the settings
- Owner: Ryanfafa ✅ (already set)
- Space name: docmind-ai ✅ (already set)
- License: mit ✅ (already set)
- Hardware: CPU Basic (Free) ✅
- Then click "Create Space"
-
- Step 4 — We need a Dockerfile
- Since we're using Docker mode, we need one extra file. Here it is:
-
- Dockerfile for HuggingFace Spaces Docker deployment with Streamlit
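The Dockerfile is referenced but never printed in this log. A minimal sketch of what it plausibly contains, assuming Streamlit serves on port 7860 (the port HF Spaces routes to) and python:3.10-slim as the base image (which matches the build log quoted later in the thread); the committed file may differ:

```dockerfile
FROM python:3.10-slim

WORKDIR /app

# System dependency for PDF parsing (mirrors the poppler entry in packages.txt)
RUN apt-get update && apt-get install -y poppler-utils && rm -rf /var/lib/apt/lists/*

# Install Python dependencies first so Docker layer caching works
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

# HF Spaces routes external traffic to the app_port declared in the README (7860 by convention)
EXPOSE 7860
CMD ["streamlit", "run", "app.py", "--server.port=7860", "--server.address=0.0.0.0"]
```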
- Also update the README header to reflect the Docker SDK:
-
- Edited a file, read a file
- Step 5 — Clone your new Space locally
- After creating the Space, open your terminal and run:
-
- ```bash
- git clone https://huggingface.co/spaces/Ryanfafa/docmind-ai
- cd docmind-ai
- ```
-
- ---
-
- Step 6 — Copy all project files into the folder
-
- You need these **7 files** total inside the `docmind-ai` folder:
- ```
- docmind-ai/
- ├── Dockerfile          ← NEW (just downloaded above)
- ├── app.py
- ├── rag_engine.py
- ├── data_downloader.py
- ├── requirements.txt
- ├── packages.txt
- └── README.md           ← updated above
- ```
- Step 7 — Push to HuggingFace
- ```bash
- git add .
- git commit -m "🧠 DocMind AI - RAG Q&A with Streamlit + Docker"
- git push
- ```
- 💡 If git push asks for credentials: username = your HF username, password = your HF access token (get it at huggingface.co/settings/tokens → New token → Write access)
-
- Step 8 — Watch it build
- Go to your Space URL: https://huggingface.co/spaces/Ryanfafa/docmind-ai
-
- Click the "Logs" tab to watch the Docker build. The first build takes 4–6 minutes. Once it says "Running" in green — your app is live! 🎉
-
- ⚠️ One known gotcha: if the build fails with a memory error, go to Space Settings → Factory reboot to try again. The free CPU Basic tier occasionally has cold-start issues.
-
- [Attached: Dockerfile, README.md]
- How do I open a terminal? I have the free version.
-
- 01:27
- You don't need to install anything! Here are your options based on what you already have:
-
- Option 1 — Use HuggingFace's web UI (upload files directly, no terminal needed) ✅ EASIEST
- Go to your Space: huggingface.co/spaces/Ryanfafa/docmind-ai
- Click the "Files" tab
- Click "Add file" → "Upload files"
- Drag and drop all 7 files at once
- Click "Commit changes" — done! The build starts automatically
- Option 2 — Use VS Code in your browser (free, no install)
- Go to github.dev or vscode.dev — free browser VS Code
- Or go to any GitHub repo and press the . key to open it in browser VS Code
- Option 3 — Windows: use the built-in terminal (no install needed)
- Press Windows key + R → type cmd → press Enter
-
- That opens Command Prompt. Then run:
-
- winget install Git.Git
- After Git installs, restart Command Prompt and you're ready.
-
- Option 4 — Mac: Terminal is already installed
- Press Cmd + Space → type Terminal → press Enter. It's built into every Mac.
-
- Which operating system are you on? (Windows / Mac / Linux) — I'll give you the exact commands for your machine, copy-paste ready.
-
- YAML Metadata Error: "short_description" length must be less than or equal to 60 characters long
-
- 01:30
- Easy fix! The description is just too long. Here's the corrected README.md:
-
- Edited 2 files, read a file
- Download the updated README.md and re-upload it to HuggingFace, replacing the old one. The fix is just this line, changed to:
-
- short_description: Chat with any PDF using RAG + ChromaDB
- That's 38 characters — safely under the 60 limit. ✅
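For reference, the README front matter that HF Spaces parses looks roughly like this; the field values below are plausible reconstructions of this Space's metadata, not the committed file:

```yaml
---
title: DocMind AI
emoji: 🧠
sdk: docker
app_port: 7860
license: mit
short_description: Chat with any PDF using RAG + ChromaDB
---
```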
-
- [Attached: README.md]
- Uploaded, what next?
-
- 01:31
- Now wait for the build to finish! Here's what to do:
-
- 1 — Watch the build logs
- Go to your Space URL:
-
- huggingface.co/spaces/Ryanfafa/docmind-ai
- Click the "Logs" tab at the top. You'll see Docker building in real time.
-
- Normal build time: 4–6 minutes on the first run.
-
- 2 — What you'll see in the logs
- The build goes through these stages in order:
-
- Step 1/10 — FROM python:3.10-slim → downloading the base image
- Step 3/10 — pip install ... → installing all packages (this is the slowest part)
- Step 7/10 — COPY . . → copying your files
- Successfully built xxxxxx → ✅ done!
-
- 3 — When it turns green
- Once the status badge says "Running" in green, click the "App" tab — your live DocMind AI will be there!
-
- Share a screenshot of your Logs tab if you see any red error — I'll fix it immediately. What does it show right now?
-
- DocMind AI
- ⚠️ Could not generate answer. Error: 410 Client Error: Gone for url: https://api-inference.huggingface.co/models/HuggingFaceH4/zephyr-7b-beta
- https://api-inference.huggingface.co is no longer supported. Please use https://router.huggingface.co instead.
-
- 01:42
- The HuggingFace API endpoint changed! Easy fix — we just need to update rag_engine.py to use the new router URL and a currently available free model.
-
- Edited a file, viewed a file
- Good, I can see the full file. Let me rewrite it completely with the fix:
-
- Created a file, ran a command, read a file
- Download the updated rag_engine.py and replace the one in your HuggingFace Space.
-
- What changed (2 things only)
- Old broken code:
-
- ```python
- # Used the deprecated endpoint + old client
- from huggingface_hub import InferenceClient
- client = InferenceClient(model="HuggingFaceH4/zephyr-7b-beta")
- client.text_generation(prompt, ...)
- ```
- New fixed code:
-
- ```python
- # Uses the new router URL + standard requests + a Mistral model
- import requests
- HF_API_URL = "https://router.huggingface.co/hf-inference/models/mistralai/Mistral-7B-Instruct-v0.3/v1/chat/completions"
- requests.post(HF_API_URL, headers=headers, json=payload)
- ```
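For context, here is that call end to end as a self-contained sketch; the prompt is illustrative, and it assumes HF_TOKEN is set in the environment (which is exactly what the 401 error below turns out to require):

```python
# Self-contained sketch of the new router call (mirrors the fragment above)
import os
import requests

model_id = "mistralai/Mistral-7B-Instruct-v0.3"
url = f"https://router.huggingface.co/hf-inference/models/{model_id}/v1/chat/completions"
headers = {"Authorization": f"Bearer {os.environ['HF_TOKEN']}"}  # assumes HF_TOKEN is exported
payload = {
    "model": model_id,
    "messages": [{"role": "user", "content": "Say hello in one sentence."}],
    "max_tokens": 64,
}
resp = requests.post(url, headers=headers, json=payload, timeout=60)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```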
- How to update on HuggingFace
- Go to your Space → Files tab
- Click on rag_engine.py
- Click the pencil ✏️ edit icon (top right of the file)
- Select all the text and paste the new file contents
- Click "Commit changes"
- The Space will auto-rebuild in ~2 minutes and the LLM will work! ✅
-
- 💡 Pro tip: add a free HuggingFace token as a Secret (HF_TOKEN) in your Space Settings for higher rate limits and faster responses.
-
- [Attached: rag_engine.py]
- ⚠️ Could not generate answer. Error: 401 Client Error: Unauthorized for url: https://router.huggingface.co/hf-inference/models/mistralai/Mistral-7B-Instruct-v0.3/v1/chat/completions
-
- 01:47
- The new HuggingFace router requires a token even for free models. Two options — let's do both: add the token AND add a guaranteed-free fallback model.
-
- Step 1 — Get your free HuggingFace token (2 minutes)
- Go to huggingface.co/settings/tokens
- Click "New token"
- Name it docmind, Role: Read
- Click "Create" → copy the token (it starts with hf_...)
- Step 2 — Add it as a Secret in your Space
- Go to your Space → Settings tab
- Scroll down to "Variables and secrets"
- Click "New secret"
- Name: HF_TOKEN → Value: paste your hf_... token
- Click Save — the Space restarts automatically ✅
- Step 3 — Also update rag_engine.py with a smarter fallback
- While you're doing Steps 1 & 2, download this updated file, which tries multiple free models in order if one fails:
-
- Ran a command, read a file
-
- Write updated rag_engine.py with multi-model fallback and clear token instructions
- cat > /mnt/user-data/outputs/rag-document-qa/rag_engine.py << 'ENDOFFILE'
  """
  RAG Engine
- ──────────
- - Embeddings : sentence-transformers/all-MiniLM-L6-v2 (HuggingFace, free)
- - Vector DB  : ChromaDB (local, in-memory / persistent)
- - LLM        : HuggingFace Router API (tries multiple free models)
- - Chunking   : Recursive character splitter with overlap
  """
 
  import os
@@ -351,7 +19,7 @@ from langchain_community.embeddings import HuggingFaceEmbeddings
  from langchain_community.vectorstores import Chroma
  from langchain_community.document_loaders import PyPDFLoader, TextLoader
 
- # ─── Configuration ─────────────────────────────────────────────────────────────
  EMBED_MODEL = "sentence-transformers/all-MiniLM-L6-v2"
  CHUNK_SIZE = 800
  CHUNK_OVERLAP = 150
@@ -359,7 +27,7 @@ TOP_K = 4
  COLLECTION_NAME = "docmind_collection"
  CHROMA_DIR = "./chroma_db"
 
- # Free models to try in order (all on HF router)
  CANDIDATE_MODELS = [
      "mistralai/Mistral-7B-Instruct-v0.3",
      "microsoft/Phi-3.5-mini-instruct",
@@ -369,7 +37,7 @@ CANDIDATE_MODELS = [
  ]
 
  class RAGEngine:
-     """Full RAG pipeline: ingest → embed → store → retrieve → generate."""
 
      def __init__(self):
          self._embeddings = None
@@ -380,7 +48,6 @@ class RAGEngine:
              separators=["\n\n", "\n", ". ", " ", ""],
          )
 
-     # ── Lazy-load embeddings ───────────────────────────────────────────────────
      @property
      def embeddings(self):
          if self._embeddings is None:
@@ -391,17 +58,15 @@ class RAGEngine:
              )
          return self._embeddings
 
-     # ── Ingest uploaded Streamlit file ────────────────────────────────────────
      def ingest_file(self, uploaded_file) -> int:
-         suffix = _get_suffix(uploaded_file.name)
          with tempfile.NamedTemporaryFile(delete=False, suffix=suffix) as tmp:
              tmp.write(uploaded_file.read())
              tmp_path = tmp.name
          return self.ingest_path(tmp_path, uploaded_file.name)
 
-     # ── Ingest from file path ─────────────────────────────────────────────────
      def ingest_path(self, path: str, name: str = "") -> int:
-         suffix = _get_suffix(name or path)
          loader = PyPDFLoader(path) if suffix == ".pdf" else TextLoader(path, encoding="utf-8")
          raw_docs = loader.load()
 
@@ -419,252 +84,37 @@ class RAGEngine:
          )
          return len(chunks)
 
-     # ── Query ─────────────────────────────────────────────────────────────────
      def query(self, question: str) -> Tuple[str, List[str]]:
          if self._vectorstore is None:
-             return "⚠️ Please upload a document first.", []
 
          retriever = self._vectorstore.as_retriever(
              search_type="mmr",
              search_kwargs={"k": TOP_K, "fetch_k": TOP_K * 3},
          )
-         docs = retriever.invoke(question)
-         context = "\n\n---\n\n".join(f"[Chunk {i+1}]\n{d.page_content}" for i, d in enumerate(docs))
-         sources = list({d.metadata.get("source", "Document") for d in docs})
-         answer = self._generate(question, context)
-         return answer, sources
-
-     # ── LLM: try each model until one works ───────────────────────────────────
-     def _generate(self, question: str, context: str) -> str:
-         hf_token = os.environ.get("HF_TOKEN", "")
-
-         if not hf_token:
-             return (
-                 "⚠️ **HF_TOKEN not set.**\n\n"
-                 "To enable AI answers:\n"
-                 "1. Get a free token at huggingface.co/settings/tokens\n"
-                 "2. Add it as a **Secret** named `HF_TOKEN` in your Space Settings\n\n"
-                 f"**Most relevant excerpt from your document:**\n\n{_extract_best(question, context)}"
-             )
-
-         headers = {
-             "Content-Type": "application/json",
-             "Authorization": f"Bearer {hf_token}",
-         }
-         messages = [
-             {
-                 "role": "system",
-                 "content": (
-                     "You are DocMind, an expert document analyst. "
-                     "Answer using ONLY the provided document context. "
-                     "Be concise and cite specific details. "
-                     "If the answer is not in the context, say so clearly."
-                 ),
-             },
-             {
-                 "role": "user",
-                 "content": f"Document context:\n{context}\n\nQuestion: {question}",
-             },
-         ]
-
-         last_error = ""
-         for model_id in CANDIDATE_MODELS:
-             try:
-                 url = f"https://router.huggingface.co/hf-inference/models/{model_id}/v1/chat/completions"
-                 resp = requests.post(
-                     url,
-                     headers=headers,
-                     json={"model": model_id, "messages": messages, "max_tokens": 512, "temperature": 0.2},
-                     timeout=60,
-                 )
-                 if resp.status_code == 200:
-                     answer = resp.json()["choices"][0]["message"]["content"].strip()
-                     if answer:
-                         return answer
-                 else:
-                     last_error = f"{resp.status_code}: {resp.text[:200]}"
-             except Exception as e:
-                 last_error = str(e)
-                 continue
-
-         # All models failed → extractive fallback
-         excerpt = _extract_best(question, context)
-         return (
-             f"*(All LLM models unavailable – showing most relevant excerpt)*\n\n"
-             f"{excerpt}\n\n"
-             f"<small>Last error: {last_error}</small>"
-         )
-
-
- # ─── Helpers ──────────────────────────────────────────────────────────────────
- def _extract_best(question: str, context: str) -> str:
-     """Keyword-based extractive fallback."""
-     keywords = set(re.findall(r'\b\w{4,}\b', question.lower()))
-     best_chunk = ""
-     best_score = 0
-     for chunk in context.split("---"):
-         words = set(re.findall(r'\b\w{4,}\b', chunk.lower()))
-         score = len(keywords & words)
-         if score > best_score:
-             best_score = score
-             best_chunk = chunk.strip()
-     excerpt = best_chunk[:600] + ("..." if len(best_chunk) > 600 else "")
-     return excerpt or "No relevant content found."
-
-
- def _get_suffix(name: str) -> str:
-     return os.path.splitext(name)[-1].lower() or ".txt"
- ENDOFFILE
- echo "Done"
- Output
-
- Done
-
- Presented file
- Done
- Summary of what to do
- Do Steps 1 & 2 first (get the token + add the secret) — that's the real fix.
-
- Then replace rag_engine.py with the new file above. The new version:
-
- Shows a clear, friendly message if the token is missing (instead of a cryptic error)
- Tries 4 different free models in order until one responds
- Falls back to an extractive answer (pulls the most relevant paragraph from your doc) even if all LLMs fail
- After you add HF_TOKEN as a secret in Space Settings, the Space restarts automatically — no rebuild needed. Just ask a question again and it will work ✅
-
          return answer, sources
 
-     # ── LLM: try each model until one works ───────────────────────────────────
      def _generate(self, question: str, context: str) -> str:
          hf_token = os.environ.get("HF_TOKEN", "")
 
          if not hf_token:
              return (
-                 "⚠️ **HF_TOKEN not set.**\n\n"
-                 "To enable AI answers:\n"
                  "1. Get a free token at huggingface.co/settings/tokens\n"
-                 "2. Add it as a **Secret** named `HF_TOKEN` in your Space Settings\n\n"
-                 f"**Most relevant excerpt from your document:**\n\n{_extract_best(question, context)}"
              )
 
          headers = {
              "Content-Type": "application/json",
-             "Authorization": f"Bearer {hf_token}",
          }
          messages = [
              {
@@ -678,18 +128,27 @@ class RAGEngine:
              },
              {
                  "role": "user",
-                 "content": f"Document context:\n{context}\n\nQuestion: {question}",
              },
          ]
 
          last_error = ""
          for model_id in CANDIDATE_MODELS:
              try:
-                 url = f"https://router.huggingface.co/hf-inference/models/{model_id}/v1/chat/completions"
                  resp = requests.post(
                      url,
                      headers=headers,
-                     json={"model": model_id, "messages": messages, "max_tokens": 512, "temperature": 0.2},
                      timeout=60,
                  )
                  if resp.status_code == 200:
@@ -697,26 +156,23 @@ class RAGEngine:
                      if answer:
                          return answer
                  else:
-                     last_error = f"{resp.status_code}: {resp.text[:200]}"
              except Exception as e:
                  last_error = str(e)
                  continue
 
-         # All models failed → extractive fallback
-         excerpt = _extract_best(question, context)
          return (
-             f"*(All LLM models unavailable – showing most relevant excerpt)*\n\n"
-             f"{excerpt}\n\n"
-             f"<small>Last error: {last_error}</small>"
          )
 
 
- # ─── Helpers ──────────────────────────────────────────────────────────────────
- def _extract_best(question: str, context: str) -> str:
-     """Keyword-based extractive fallback."""
-     keywords = set(re.findall(r'\b\w{4,}\b', question.lower()))
-     best_chunk = ""
-     best_score = 0
      for chunk in context.split("---"):
          words = set(re.findall(r'\b\w{4,}\b', chunk.lower()))
          score = len(keywords & words)
@@ -727,6 +183,5 @@ def _extract_best(question: str, context: str) -> str:
      return excerpt or "No relevant content found."
 
 
- def _get_suffix(name: str) -> str:
      return os.path.splitext(name)[-1].lower() or ".txt"
-
  """
  RAG Engine
+ Embeddings : sentence-transformers/all-MiniLM-L6-v2
+ Vector DB  : ChromaDB (local, in-memory / persistent)
+ LLM        : HuggingFace Router API (tries multiple free models)
+ Chunking   : Recursive character splitter with overlap
  """
 
  import os
 
  from langchain_community.vectorstores import Chroma
  from langchain_community.document_loaders import PyPDFLoader, TextLoader
 
+ # Configuration
  EMBED_MODEL = "sentence-transformers/all-MiniLM-L6-v2"
  CHUNK_SIZE = 800
  CHUNK_OVERLAP = 150
 
  COLLECTION_NAME = "docmind_collection"
  CHROMA_DIR = "./chroma_db"
 
+ # Free models to try in order
  CANDIDATE_MODELS = [
      "mistralai/Mistral-7B-Instruct-v0.3",
      "microsoft/Phi-3.5-mini-instruct",
 
 
  class RAGEngine:
+     """Full RAG pipeline: ingest, embed, store, retrieve, generate."""
 
      def __init__(self):
          self._embeddings = None
 
              separators=["\n\n", "\n", ". ", " ", ""],
          )
 
      @property
      def embeddings(self):
          if self._embeddings is None:
 
              )
          return self._embeddings
 
      def ingest_file(self, uploaded_file) -> int:
+         suffix = get_suffix(uploaded_file.name)
          with tempfile.NamedTemporaryFile(delete=False, suffix=suffix) as tmp:
              tmp.write(uploaded_file.read())
              tmp_path = tmp.name
          return self.ingest_path(tmp_path, uploaded_file.name)
 
      def ingest_path(self, path: str, name: str = "") -> int:
+         suffix = get_suffix(name or path)
          loader = PyPDFLoader(path) if suffix == ".pdf" else TextLoader(path, encoding="utf-8")
          raw_docs = loader.load()
 
          )
          return len(chunks)
 
      def query(self, question: str) -> Tuple[str, List[str]]:
          if self._vectorstore is None:
+             return "Please upload a document first.", []
 
          retriever = self._vectorstore.as_retriever(
              search_type="mmr",
              search_kwargs={"k": TOP_K, "fetch_k": TOP_K * 3},
          )
+         docs = retriever.invoke(question)
+         context = "\n\n---\n\n".join(
+             "[Chunk {}]\n{}".format(i + 1, d.page_content) for i, d in enumerate(docs)
          )
+         sources = list({d.metadata.get("source", "Document") for d in docs})
+         answer = self._generate(question, context)
          return answer, sources
 
      def _generate(self, question: str, context: str) -> str:
          hf_token = os.environ.get("HF_TOKEN", "")
 
          if not hf_token:
+             excerpt = extract_best(question, context)
              return (
+                 "HF_TOKEN not set. To enable AI answers:\n"
                  "1. Get a free token at huggingface.co/settings/tokens\n"
+                 "2. Add it as a Secret named HF_TOKEN in your Space Settings\n\n"
+                 "Most relevant excerpt from your document:\n\n" + excerpt
              )
 
          headers = {
              "Content-Type": "application/json",
+             "Authorization": "Bearer " + hf_token,
          }
          messages = [
              {
 
              },
              {
                  "role": "user",
+                 "content": "Document context:\n" + context + "\n\nQuestion: " + question,
              },
          ]
 
          last_error = ""
          for model_id in CANDIDATE_MODELS:
              try:
+                 url = (
+                     "https://router.huggingface.co/hf-inference/models/"
+                     + model_id
+                     + "/v1/chat/completions"
+                 )
                  resp = requests.post(
                      url,
                      headers=headers,
+                     json={
+                         "model": model_id,
+                         "messages": messages,
+                         "max_tokens": 512,
+                         "temperature": 0.2,
+                     },
                      timeout=60,
                  )
                  if resp.status_code == 200:
 
                      if answer:
                          return answer
                  else:
+                     last_error = str(resp.status_code) + ": " + resp.text[:200]
              except Exception as e:
                  last_error = str(e)
                  continue
 
+         excerpt = extract_best(question, context)
          return (
+             "LLM models unavailable - showing most relevant excerpt:\n\n"
+             + excerpt
+             + "\n\nLast error: " + last_error
          )
 
 
+ def extract_best(question: str, context: str) -> str:
+     keywords = set(re.findall(r'\b\w{4,}\b', question.lower()))
+     best_chunk = ""
+     best_score = 0
      for chunk in context.split("---"):
          words = set(re.findall(r'\b\w{4,}\b', chunk.lower()))
          score = len(keywords & words)
 
      return excerpt or "No relevant content found."
 
 
+ def get_suffix(name: str) -> str:
      return os.path.splitext(name)[-1].lower() or ".txt"
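
For anyone reviewing this commit, a quick hypothetical smoke test of the resulting module; the file name and question are placeholders, and it assumes the dependencies are installed and HF_TOKEN is set:

```python
# Not part of the repo: illustrative usage of the committed RAGEngine
from rag_engine import RAGEngine

engine = RAGEngine()
num_chunks = engine.ingest_path("sample.pdf")  # index a local PDF into ChromaDB
print("Indexed", num_chunks, "chunks")

answer, sources = engine.query("What is this document about?")
print(answer)
print("Sources:", sources)
```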