VatsalPatel18 committed
Commit 10731f0 · 0 parent(s)

Publish MedDiscover-HF app

Files changed (5)
  1. NOTE.md +15 -0
  2. PLAN.md +70 -0
  3. README.md +21 -0
  4. app.py +372 -0
  5. requirements.txt +9 -0
NOTE.md ADDED
@@ -0,0 +1,15 @@
+ # MedDiscover-HF status (Nov 20)
+
+ - Added Gradio app scaffold (`app.py`) for Hugging Face Spaces (ZeroGPU):
+   - MedCPT embeddings (`ncbi/MedCPT-*`) + FAISS IP index stored in `/data`.
+   - PDF upload → text extraction → chunking (500/50) → embedding → index build/load.
+   - Generator dropdown using open-weight HF models: `openai/gpt-oss-20b`, `google/gemma-3-12b-it`, `deepseek-ai/DeepSeek-VL2-small`, `ibm-granite/granite-3.1-8b-instruct`, `ibm-granite/granite-docling-258M`.
+   - Streaming answers; prompt forces context-grounded responses.
+   - Uses `@spaces.GPU()` on heavy steps; caches/models under `/data/.cache` (`HF_HOME`).
+   - No reranker; straight FAISS retrieval.
+   - Dependencies captured in `requirements.txt`; Spaces front matter in `README.md`.
+ - Pushed to HF Space `VatsalPatel18/MedDisover-space` (commit f6d2d7d).
+ - To do next when resuming:
+   1) Set `HF_TOKEN` secret in the Space if Granite/Docling needs auth.
+   2) Let the Space rebuild/install; test with a light model first (granite docling/3.1-8b) on ZeroGPU.
+   3) Adjust model IDs if your tested demos use different ones; tune `max_new_tokens` if OOM.
PLAN.md ADDED
@@ -0,0 +1,70 @@
+ # MedDiscover-HF Rollout Plan (Hugging Face Spaces, ZeroGPU)
+
+ ## 1) Goals & Constraints
+ - Host MedDiscover as a Gradio Space using ZeroGPU; no external API models (OpenAI, etc.).
+ - Provide a model dropdown with OSS models tested in other Spaces: `openai/gpt-oss-20b`, `google/gemma-3-12b-it`, `deepseek-vl2-small`, the `ibm-granite/granite-vision` family, `ibm-granite/granite-docling-258M`, plus room for additions.
+ - Keep the existing pipeline: PDF ingest → chunking (MedCPT tokenizer/encoder) → FAISS retrieval → answer generation via the selected OSS model.
+ - Must run in the HF build/runtime: single `app.py` entry (or `src/app.py` with `app_file` set), `sdk: gradio`, dependencies via `requirements.txt`/`pyproject`. Persistent storage is limited; assume a `/data` volume for cached indices.
+
+ ## 2) Spaces-specific mechanics to carry over
+ - Use `@spaces.GPU()` (as in the gpt-oss demo) to request ZeroGPU for heavy calls; pair with `device_map="auto"` and `torch_dtype="auto"`/bfloat16 to fit the managed GPUs.
+ - For IBM Granite Docling/Vision: the models require `use_auth_token=True` and, in some cases, `trust_remote_code`; respect the `gr.NO_RELOAD` guard to avoid reloading weights on hot-reload.
+ - Streaming: use `TextIteratorStreamer` with threaded generation (seen in the gpt-oss and gemma demos) to keep the UI responsive; a minimal sketch of this pattern follows this section.
+ - System/developer prompts: the gpt-oss demo uses Harmony encoding/preprompt parsing; we can simplify to plain chat unless Harmony is desired. If kept, include the `openai_harmony` dependency and message rendering.
+ - Media handling: the gemma demo enforces image/video limits and uses `<image>` tag counting; the docling/granite demos load sample assets and draw boxes. Both are good references for multimodal support.
+ - README front matter (`title`, `sdk: gradio`, `sdk_version`, `app_file`) is required for Spaces config; ZeroGPU Spaces accept the same.
+
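A minimal sketch of the pattern described above (reload guard, ZeroGPU decorator, threaded streaming). This is not the committed `app.py` code; the granite model ID is just a placeholder:

```python
from threading import Thread

import gradio as gr
import spaces
from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer

MODEL_ID = "ibm-granite/granite-3.1-8b-instruct"  # placeholder; any causal LM works

if gr.NO_RELOAD:  # skip re-loading weights when `gradio app.py` hot-reloads
    tok = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto", torch_dtype="auto")


@spaces.GPU()  # ZeroGPU attaches a GPU only for the duration of this call
def stream_answer(prompt: str, max_new_tokens: int = 128):
    inputs = tok(prompt, return_tensors="pt").to(model.device)
    streamer = TextIteratorStreamer(tok, skip_prompt=True, skip_special_tokens=True)
    # generate() runs in a worker thread; the streamer yields decoded text as it arrives
    Thread(
        target=model.generate,
        kwargs={**inputs, "streamer": streamer, "max_new_tokens": max_new_tokens},
    ).start()
    text = ""
    for piece in streamer:
        text += piece
        yield text
```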
+ ## 3) Architecture on HF
+ - Single `app.py` (or `src/app.py`) hosting:
+   - Model registry: map human-facing names → loader functions/config (pipeline/AutoModel, tokenizer/processor, dtype, device_map, chat template).
+   - Embedding and indexing: MedCPT article encoder (requires GPU). Build the FAISS index into `/data/faiss_index.bin` with metadata in `/data/doc_metadata.json`; reuse across sessions if present.
+   - Retrieval: embed the query with the same encoder, FAISS search (IP for MedCPT), optional rerank (if a cross-encoder is feasible on the available GPU; otherwise skip).
+   - Generation: streaming handler wrapping the selected model; minimal prompt template: "Use only retrieved context; answer concisely."
+   - Gradio UI: upload PDFs, process PDFs (chunk + index), dropdown for embedding model (MedCPT only here) and generator model (OSS list), sliders for k/max_tokens/temp/etc., chat box showing answer + context.
+   - Persistence: point caches/indices to `/data` (Space persistent storage). Handle cold start by checking disk before loading.
+
+ ## 4) Model integration plan (technical nuances)
+ - `openai/gpt-oss-20b`:
+   - Load via `pipeline("text-generation", trust_remote_code=True, device_map="auto", torch_dtype="auto")`.
+   - Optionally keep Harmony encoding and the `@spaces.GPU()` wrapper for generation; stream with `TextIteratorStreamer`.
+ - `google/gemma-3-12b-it`:
+   - `AutoProcessor.from_pretrained(..., padding_side="left")`, `Gemma3ForConditionalGeneration.from_pretrained(..., device_map="auto", torch_dtype=torch.bfloat16, attn_implementation="eager")` (see the loader sketch after this list).
+   - Supports images/videos with `<image>` tags; enforce file-count limits if multimodal chat is desired.
+ - `deepseek-vl2-small` (assumed to be a similar VLM):
+   - Inspect the loader; likely `AutoProcessor` + a vision-language model. Use `device_map="auto"`, `torch_dtype=torch.bfloat16`, and streaming generation.
+ - `ibm-granite/granite-vision*`:
+   - Vision-language; may need `trust_remote_code`; keep the sample image handling (resize/pad) patterns from the demo.
+ - `ibm-granite/granite-docling-258M`:
+   - Uses `AutoProcessor` + `Idefics3ForConditionalGeneration`; requires `use_auth_token=True`, `torch_dtype=torch.bfloat16`, `device_map=device`.
+   - Handles DocTags markup and bounding boxes; we can limit output to text answers (no box drawing) to reduce complexity initially.
+ - For all: wrap generation in a common interface that accepts (query, retrieved_chunks, params) → streamed text. Unify token stopping; cap `max_new_tokens` modestly to stay within ZeroGPU limits.
+
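For the two processor-based loaders named above, a hedged sketch assuming the `Gemma3ForConditionalGeneration`/`Idefics3ForConditionalGeneration` class names from the model cards; the `token` plumbing and function names are illustrative, not the loaders committed in `app.py` (which currently go through `pipeline(...)`):

```python
import torch
from transformers import (
    AutoProcessor,
    Gemma3ForConditionalGeneration,
    Idefics3ForConditionalGeneration,
)


def load_gemma3_vlm(token=None):
    # Gemma 3 instruction-tuned VLM, loaded with the parameters listed above.
    model_id = "google/gemma-3-12b-it"
    processor = AutoProcessor.from_pretrained(model_id, padding_side="left", token=token)
    model = Gemma3ForConditionalGeneration.from_pretrained(
        model_id,
        device_map="auto",
        torch_dtype=torch.bfloat16,
        attn_implementation="eager",
        token=token,
    )
    return processor, model


def load_docling_vlm(token=None, device="cuda"):
    # Granite Docling is Idefics3-based and emits DocTags markup for documents.
    model_id = "ibm-granite/granite-docling-258M"
    processor = AutoProcessor.from_pretrained(model_id, token=token)
    model = Idefics3ForConditionalGeneration.from_pretrained(
        model_id,
        torch_dtype=torch.bfloat16,
        device_map=device,
        token=token,
    )
    return processor, model
```

Generation would then go through the processor's chat template plus `model.generate(...)` rather than the plain text-generation pipeline.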
+ ## 5) Retrieval/Chunking specifics
+ - Chunking: reuse the existing 500/50 word overlap; keep the MedCPT encoders (article + query) loaded once behind a `@spaces.GPU()` guard. Ensure tokenizer truncation to the max lengths (as in the current config).
+ - FAISS: build an `IndexFlatIP` for MedCPT (768-d). Convert embeddings to float32 before `add`. Store index/metadata under `/data`.
+ - Metadata structure: `[{"doc_id": int, "filename": str, "chunk_id": int, "text": str}]`.
+ - Query flow: embed the query → FAISS search top-k → concatenate the top-k texts into the prompt. Skip reranking if a cross-encoder is too heavy for ZeroGPU; expose an optional flag if the GPU allows (see the reranker sketch after this list).
+
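If the optional rerank flag is ever enabled, one candidate is the MedCPT cross-encoder (`ncbi/MedCPT-Cross-Encoder`), scoring (query, chunk) pairs after the FAISS search. A sketch, not wired into `app.py`; the batch/length caps and field names are assumptions:

```python
import torch
from typing import Dict, List
from transformers import AutoModelForSequenceClassification, AutoTokenizer

RERANKER_ID = "ncbi/MedCPT-Cross-Encoder"
_rr_tok = AutoTokenizer.from_pretrained(RERANKER_ID)
_rr_model = AutoModelForSequenceClassification.from_pretrained(RERANKER_ID).eval()


def rerank(query: str, candidates: List[Dict], top_n: int = 3) -> List[Dict]:
    """Re-score FAISS candidates with the cross-encoder and keep the best top_n."""
    pairs = [[query, c["text"]] for c in candidates]
    with torch.no_grad():
        enc = _rr_tok(pairs, truncation=True, padding=True, max_length=512, return_tensors="pt")
        scores = _rr_model(**enc).logits.squeeze(-1).tolist()
    for c, s in zip(candidates, scores):
        c["rerank_score"] = float(s)
    return sorted(candidates, key=lambda c: c["rerank_score"], reverse=True)[:top_n]
```

On ZeroGPU this would sit behind the same `@spaces.GPU()` guard as the encoders.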
+ ## 6) Hugging Face build/runtime
+ - Files:
+   - `app.py` (entry), `requirements.txt` (gradio, transformers, faiss-cpu, possibly a CPU-only torch pin for local runs; on ZeroGPU a CUDA-enabled torch is preinstalled), optional `runtime.txt` (e.g., `python-3.11`), `README.md` with HF front matter, small sample PDFs.
+ - If MedCPT or Granite needs auth, set the `HF_TOKEN` secret; use `use_auth_token=True` where required.
+ - Caching: set `HF_HOME=/data/.cache` to persist model weights between restarts (it must be exported before `transformers` is imported; see the snippet after this list).
+ - No Docker needed unless there are custom system deps; prefer pure pip for Spaces.
+
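One ordering detail behind the caching bullet: `huggingface_hub` resolves the cache path when it is imported, so the env var has to be set first. A minimal sketch, with the `/data` path taken from the plan above:

```python
import os

# Must run before `import transformers` so weights are cached on the persistent volume.
os.environ.setdefault("HF_HOME", "/data/.cache")

import transformers  # noqa: E402  (import deliberately placed after the env setup)
```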
+ ## 7) UI/UX sketch
+ - Left rail: API/token textbox (for private models if needed), PDF upload and process button, model dropdown (generator), k slider, max_tokens slider, temperature/top_p, rerank checkbox (if available).
+ - Right: chat box with the answer; collapsible context display; status messages (index loaded/building).
+ - Streaming answers with a stop button; optionally show retrieval scores.
+
+ ## 8) Testing & rollout
+ - Dry-run locally on CPU with the smallest model (e.g., granite-docling-258M) if possible; otherwise rely on HF logs.
+ - Deploy to a test Space (ZeroGPU); confirm the model loads, the index builds on uploaded PDFs, and chat returns grounded answers.
+ - Measure cold-start load times per model; consider pre-pinning a default lightweight model to speed startup.
+
+ ## 9) Next steps to implement
+ 1) Scaffold `MedDiscover-HF/app.py` with common loaders, the retrieval pipeline, the Gradio UI, and `/data` persistence.
+ 2) Add `requirements.txt` mirroring the demos (`gradio>=5.x`, `transformers`, `faiss-cpu`, `torch`, `spaces`, model-specific libs like `openai-harmony`, `opencv-python` if doing video).
+ 3) Wire the model registry for gpt-oss-20b, gemma-3-12b-it, deepseek-vl2-small, granite-vision, granite-docling; test generation stubs with mock context.
+ 4) Integrate MedCPT embedding + FAISS; add index build/load actions and guard GPU usage with `@spaces.GPU()`.
+ 5) Push to the HF Space; validate ZeroGPU compatibility and memory footprints; tune max-token defaults per model.
README.md ADDED
@@ -0,0 +1,21 @@
+ ---
+ title: MedDiscover-HF
+ emoji: 🩺
+ colorFrom: purple
+ colorTo: blue
+ sdk: gradio
+ sdk_version: 5.44.1
+ app_file: app.py
+ pinned: false
+ license: apache-2.0
+ short_description: MedDiscover RAG with OSS HF models on ZeroGPU
+ ---
+
+ ## MedDiscover-HF
+ A Hugging Face Spaces-ready Gradio app that runs MedDiscover-style RAG (MedCPT embeddings + FAISS) with open-weight HF generator models (gpt-oss-20b, gemma-3-12b-it, deepseek-vl2-small, granite vision/docling variants) on ZeroGPU.
+
+ ### Notes
+ - Uses `/data` for the cached FAISS index/metadata and the HF model cache (`HF_HOME=/data/.cache`).
+ - No external API keys required; optionally set an `HF_TOKEN` secret if models need auth.
+ - Chunking: 500 words with 50-word overlap; MedCPT encoder for embeddings; FAISS IP index.
+ - Generation is context-grounded; a dropdown selects the generator model.
app.py ADDED
@@ -0,0 +1,372 @@
+ """
+ MedDiscover-HF: Hugging Face Spaces-ready Gradio app.
+ - ZeroGPU-compatible (uses @spaces.GPU for heavy ops)
+ - MedCPT embeddings + FAISS retrieval over uploaded PDFs
+ - OSS generator model dropdown (gpt-oss-20b, gemma-3-12b-it, deepseek-vl2-small, granite vision/docling)
+ """
+
+ import os
+ import json
+ from pathlib import Path
+ from threading import Thread
+ from typing import List, Dict, Tuple
+
+ # ----------------------------
+ # Paths and env configuration
+ # ----------------------------
+ # HF_HOME is set before transformers/huggingface_hub are imported so the model
+ # cache lands on the persistent /data volume.
+ BASE_DIR = Path(__file__).parent
+ DATA_DIR = Path(os.getenv("DATA_DIR", "/data"))
+ DATA_DIR.mkdir(parents=True, exist_ok=True)
+ INDEX_PATH = DATA_DIR / "faiss_index.bin"
+ META_PATH = DATA_DIR / "doc_metadata.json"
+ HF_TOKEN = os.getenv("HF_TOKEN")  # set in Space secrets if needed
+ HF_HOME = os.getenv("HF_HOME", str(DATA_DIR / ".cache"))
+ os.environ["HF_HOME"] = HF_HOME
+
+ import faiss
+ import gradio as gr
+ import numpy as np
+ import spaces
+ import torch
+ from PyPDF2 import PdfReader
+ from transformers import (
+     AutoModel,
+     AutoTokenizer,
+     TextIteratorStreamer,
+     pipeline,
+ )
+
+ # ----------------------------
+ # Chunking / PDF utils
+ # ----------------------------
+ CHUNK_SIZE = 500
+ OVERLAP = 50
+
+
+ def chunk_text(text: str, chunk_size: int = CHUNK_SIZE, overlap: int = OVERLAP) -> List[str]:
+     words = text.split()
+     chunks = []
+     start = 0
+     while start < len(words):
+         end = start + chunk_size
+         chunk = words[start:end]
+         if not chunk:
+             break
+         chunks.append(" ".join(chunk))
+         start += (chunk_size - overlap)
+     return chunks
+
+
+ def extract_text_from_pdf(path: str) -> str:
+     buff = []
+     try:
+         reader = PdfReader(path)
+         for page in reader.pages:
+             text = page.extract_text()
+             if text:
+                 buff.append(text)
+     except Exception as e:  # pragma: no cover
+         return f"Error reading {path}: {e}"
+     return "\n".join(buff)
+
+
+ # ----------------------------
+ # Embeddings: MedCPT encoders
+ # ----------------------------
+ MEDCPT_ARTICLE = "ncbi/MedCPT-Article-Encoder"
+ MEDCPT_QUERY = "ncbi/MedCPT-Query-Encoder"
+ MAX_ART_LEN = 512
+ MAX_QUERY_LEN = 64
+ EMBED_DIM = 768
+
+ _article_tok = None
+ _article_model = None
+ _query_tok = None
+ _query_model = None
+ DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
+
+
+ def load_medcpt():
+     """Lazily load both MedCPT encoders once and keep them as module globals."""
+     global _article_tok, _article_model, _query_tok, _query_model
+     if _article_model and _query_model:
+         return
+     _article_tok = AutoTokenizer.from_pretrained(MEDCPT_ARTICLE, use_auth_token=HF_TOKEN)
+     _article_model = AutoModel.from_pretrained(MEDCPT_ARTICLE, use_auth_token=HF_TOKEN)
+     _article_model.to(DEVICE).eval()
+     _query_tok = AutoTokenizer.from_pretrained(MEDCPT_QUERY, use_auth_token=HF_TOKEN)
+     _query_model = AutoModel.from_pretrained(MEDCPT_QUERY, use_auth_token=HF_TOKEN)
+     _query_model.to(DEVICE).eval()
+
+
+ @spaces.GPU()
+ def embed_chunks(chunks: List[str]) -> np.ndarray:
+     load_medcpt()
+     all_vecs = []
+     with torch.no_grad():
+         for i in range(0, len(chunks), 8):
+             batch = chunks[i : i + 8]
+             enc = _article_tok(
+                 batch,
+                 truncation=True,
+                 padding=True,
+                 return_tensors="pt",
+                 max_length=MAX_ART_LEN,
+             ).to(DEVICE)
+             out = _article_model(**enc)
+             # [CLS] token embedding, as used for the MedCPT encoders
+             vec = out.last_hidden_state[:, 0, :].cpu().numpy()
+             all_vecs.append(vec)
+     if not all_vecs:
+         return np.array([])
+     return np.vstack(all_vecs)
+
+
+ @spaces.GPU()
+ def embed_query(query: str) -> np.ndarray:
+     load_medcpt()
+     with torch.no_grad():
+         enc = _query_tok(
+             query,
+             truncation=True,
+             padding=True,
+             return_tensors="pt",
+             max_length=MAX_QUERY_LEN,
+         ).to(DEVICE)
+         out = _query_model(**enc)
+         vec = out.last_hidden_state[:, 0, :].cpu().numpy()
+     return vec
+
+
+ # ----------------------------
+ # FAISS index helpers
+ # ----------------------------
+ def build_index(embeddings: np.ndarray) -> faiss.IndexFlatIP:
+     if embeddings.dtype != np.float32:
+         embeddings = embeddings.astype(np.float32)
+     index = faiss.IndexFlatIP(embeddings.shape[1])
+     index.add(embeddings)
+     return index
+
+
+ def save_index(index: faiss.IndexFlatIP, meta: List[Dict]):
+     faiss.write_index(index, str(INDEX_PATH))
+     META_PATH.write_text(json.dumps(meta, indent=2), encoding="utf-8")
+
+
+ def load_index() -> Tuple[faiss.IndexFlatIP, List[Dict]]:
+     if not INDEX_PATH.exists() or not META_PATH.exists():
+         return None, None
+     idx = faiss.read_index(str(INDEX_PATH))
+     meta = json.loads(META_PATH.read_text(encoding="utf-8"))
+     return idx, meta
+
+
+ def search(index: faiss.IndexFlatIP, meta: List[Dict], query_vec: np.ndarray, k: int) -> List[Dict]:
+     if query_vec.dtype != np.float32:
+         query_vec = query_vec.astype(np.float32)
+     scores, inds = index.search(query_vec, k)
+     candidates = []
+     for score, ind in zip(scores[0], inds[0]):
+         if ind < 0 or ind >= len(meta):
+             continue
+         item = dict(meta[ind])
+         item["retrieval_score"] = float(score)
+         candidates.append(item)
+     return candidates
+
+
+ # ----------------------------
+ # Model registry for generators
+ # ----------------------------
+ class GeneratorWrapper:
+     def __init__(self, name: str, load_fn):
+         self.name = name
+         self._load_fn = load_fn
+         self._pipe = None
+
+     def ensure(self):
+         if self._pipe is None:
+             self._pipe = self._load_fn()
+         return self._pipe
+
+     def generate_stream(self, prompt: str, max_new_tokens: int, temperature: float, top_p: float):
+         pipe = self.ensure()
+         streamer = TextIteratorStreamer(pipe.tokenizer, skip_special_tokens=True, skip_prompt=True)
+         kwargs = {
+             "max_new_tokens": max_new_tokens,
+             "do_sample": True,
+             "temperature": temperature,
+             "top_p": top_p,
+             "streamer": streamer,
+             "return_full_text": False,
+         }
+         thread = Thread(target=pipe, args=(prompt,), kwargs=kwargs)
+         thread.start()
+         for token in streamer:
+             yield token
+
+
+ def load_gpt_oss():
+     return pipeline(
+         "text-generation",
+         model="openai/gpt-oss-20b",
+         trust_remote_code=True,
+         device_map="auto",
+         torch_dtype="auto",
+     )
+
+
+ def load_gemma():
+     return pipeline(
+         "text-generation",
+         model="google/gemma-3-12b-it",
+         device_map="auto",
+         torch_dtype=torch.bfloat16,
+     )
+
+
+ def load_deepseek_vl2():
+     return pipeline(
+         "text-generation",
+         model="deepseek-ai/DeepSeek-VL2-small",
+         device_map="auto",
+         torch_dtype="auto",
+         trust_remote_code=True,
+     )
+
+
+ def load_granite_vision():
+     return pipeline(
+         "text-generation",
+         model="ibm-granite/granite-3.1-8b-instruct",
+         device_map="auto",
+         torch_dtype="auto",
+         trust_remote_code=True,
+     )
+
+
+ def load_granite_docling():
+     # Idefics3-based docling model; requires auth for some users and may need a
+     # processor-based loader instead of the plain pipeline (see PLAN.md, section 4).
+     return pipeline(
+         "text-generation",
+         model="ibm-granite/granite-docling-258M",
+         device_map="auto",
+         torch_dtype="auto",
+         use_auth_token=HF_TOKEN,
+     )
+
+
+ GENERATORS = {
+     "gpt-oss-20b": GeneratorWrapper("gpt-oss-20b", load_gpt_oss),
+     "gemma-3-12b-it": GeneratorWrapper("gemma-3-12b-it", load_gemma),
+     "deepseek-vl2-small": GeneratorWrapper("deepseek-vl2-small", load_deepseek_vl2),
+     "granite-vision": GeneratorWrapper("granite-vision", load_granite_vision),
+     "granite-docling-258M": GeneratorWrapper("granite-docling-258M", load_granite_docling),
+ }
+
+
+ # ----------------------------
+ # Prompt formatting
+ # ----------------------------
+ def format_prompt(query: str, contexts: List[Dict]) -> str:
+     context_blocks = []
+     for i, c in enumerate(contexts):
+         context_blocks.append(
+             f"--- Context {i+1} (file={c.get('filename','N/A')} chunk={c.get('chunk_id','?')}) ---\n{c.get('text','')}"
+         )
+     joined = "\n\n".join(context_blocks) if context_blocks else "None."
+     prompt = (
+         "You are MedDiscover, a biomedical assistant. Use ONLY the provided context to answer concisely.\n"
+         "If the context does not contain the answer, reply: 'Not found in provided documents.'\n\n"
+         f"{joined}\n\nQuestion: {query}\nAnswer:"
+     )
+     return prompt
+
+
+ # ----------------------------
+ # Gradio callbacks
+ # ----------------------------
+ @spaces.GPU()
+ def process_pdfs(files, progress=gr.Progress()):
+     if not files:
+         return "Upload PDFs first."
+     texts = []
+     meta = []
+     doc_id = 0
+     for idx, f in enumerate(files):
+         # gr.File may return plain paths or tempfile wrappers depending on the Gradio version
+         path = f if isinstance(f, str) else f.name
+         progress(idx / max(len(files), 1), desc=f"Reading {Path(path).name}")
+         text = extract_text_from_pdf(path)
+         if not text or text.startswith("Error reading"):
+             continue
+         chunks = chunk_text(text)
+         for cid, chunk in enumerate(chunks):
+             meta.append({"doc_id": doc_id, "filename": Path(path).name, "chunk_id": cid, "text": chunk})
+             texts.append(chunk)
+         doc_id += 1
+     if not texts:
+         return "No text extracted."
+     progress(0.7, desc=f"Embedding {len(texts)} chunks")
+     embeds = embed_chunks(texts)
+     if embeds.size == 0:
+         return "Embedding failed."
+     progress(0.85, desc="Building index")
+     idx = build_index(embeds)
+     save_index(idx, meta)
+     progress(1.0, desc="Ready")
+     return f"Processed {doc_id} PDFs. Index size={idx.ntotal}, dim={idx.d}. Saved to /data."
+
+
+ def handle_query(query, model_key, k, max_new_tokens, temperature, top_p):
+     if not query or query.strip() == "":
+         yield "Enter a query", "No context"
+         return
+     idx, meta = load_index()
+     if idx is None or meta is None:
+         yield "Index not ready. Process PDFs first.", "No context"
+         return
+     qvec = embed_query(query)
+     cands = search(idx, meta, qvec, int(k))
+     prompt = format_prompt(query, cands)
+     wrapper = GENERATORS[model_key]
+     stream = wrapper.generate_stream(prompt, int(max_new_tokens), float(temperature), float(top_p))
+     answer_accum = ""
+     for chunk in stream:
+         answer_accum += chunk
+         yield answer_accum, prompt
+
+
+ # ----------------------------
+ # Gradio UI
+ # ----------------------------
+ with gr.Blocks(title="MedDiscover-HF") as demo:
+     gr.Markdown("# 🩺 MedDiscover-HF\nRAG over your PDFs with OSS HF models on ZeroGPU.")
+     with gr.Row():
+         with gr.Column(scale=1):
+             api_info = gr.Markdown("Optional: set `HF_TOKEN` secret in Space settings if needed.")
+             pdfs = gr.File(label="Upload PDFs", file_types=[".pdf"], file_count="multiple")
+             process_btn = gr.Button("Process PDFs (chunk/embed/index)", variant="primary")
+             status = gr.Textbox(label="Status", interactive=False)
+             model_dd = gr.Dropdown(
+                 label="Generator Model",
+                 choices=list(GENERATORS.keys()),
+                 value="gpt-oss-20b",
+                 interactive=True,
+             )
+             k_slider = gr.Slider(1, 10, value=3, step=1, label="Top-k chunks")
+             max_tokens = gr.Slider(20, 512, value=128, step=8, label="Max new tokens")
+             temp = gr.Slider(0.1, 1.5, value=0.4, step=0.1, label="Temperature")
+             top_p = gr.Slider(0.1, 1.0, value=0.9, step=0.05, label="Top-p")
+         with gr.Column(scale=2):
+             query = gr.Textbox(label="Query", lines=3, placeholder="Ask about your documents...")
+             answer = gr.Textbox(label="Answer (streaming)", lines=6)
+             context_box = gr.Textbox(label="Context used in prompt", lines=14)
+             go_btn = gr.Button("Ask", variant="primary")
+
+     process_btn.click(fn=process_pdfs, inputs=pdfs, outputs=status, show_progress="full")
+     go_btn.click(
+         fn=handle_query,
+         inputs=[query, model_dd, k_slider, max_tokens, temp, top_p],
+         outputs=[answer, context_box],
+         concurrency_limit=1,
+     )
+
+ if __name__ == "__main__":
+     demo.queue().launch()
requirements.txt ADDED
@@ -0,0 +1,9 @@
+ gradio>=3.44,<=5.50
+ transformers>=4.40.0
+ faiss-cpu
+ PyPDF2
+ spaces
+ openai-harmony
+ sentencepiece
+ accelerate
+ pillow