Sathvik-kota committed on
Commit 6618f27 · verified · 1 Parent(s): 9530eca

Upload folder using huggingface_hub

Files changed (1):
  1. README.md +7 -11
README.md CHANGED

@@ -76,7 +76,7 @@ A built-in evaluation workflow providing:
  - **nDCG@K**
  - Correct vs Incorrect queries
  - Per-query detailed table
- - Ideal for assignments, research, and experiments
+
 
  ---
 
@@ -129,8 +129,7 @@ If not → FAISS index is rebuilt from cached embeddings.
 
  ---
 
- # Folder Structure (MANDATORY SECTION)
-
+ # Folder Structure
  src/
  doc_service/
  app.py
@@ -249,20 +248,17 @@ L2 distance is used instead of cosine because:
  ### 4️⃣ **Local Embedding Cache**
  - Reduces startup time from **~5 seconds → <1 second**
  - Prevents **re-embedding identical documents**
- - Stores:
- - `embed_meta.json` filename hash → index
- - `embeddings.npy` → matrix of stored embeddings
- - Saves compute + makes repeated searches much faster
-
+ -Allows FAISS persistence to work smoothly
+ - Speeds up startup & indexing
  ---
  ### 4️⃣FAISS Persistence (Warm Start Optimization)
  - Eliminates the need to rebuild index on each startup
- - Warm-loads instantly using try_load()
+ - Warm-loads instantly at startup
  - Ideal for Spaces & Docker environments
- - A vector-database
+ - A lightweight vector-database
  ---
  ### 5️⃣ **LLM-Driven Explainability**
- - Generates **human-friendly reasoning**
+ - Generates **human-friendly reasoning**. Makes search results more interpretable and intelligent.
  - Explains **why a document matched your query**
  - Combines:
  - Top semantic-matching sentences
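The "Local Embedding Cache" bullets removed in the last hunk describe a concrete scheme: `embed_meta.json` mapping a content hash to a row index, and `embeddings.npy` holding the stored embedding matrix. A minimal sketch of that pattern follows; the `cache/` location, the 384-dimensional vectors, and the helper names (`get_embedding`, `_hash`) are illustrative assumptions, not the repository's actual code.

```python
import hashlib
import json
import os

import numpy as np

CACHE_DIR = "cache"  # illustrative location, not the repo's real layout
META_PATH = os.path.join(CACHE_DIR, "embed_meta.json")   # content hash -> row index
VECS_PATH = os.path.join(CACHE_DIR, "embeddings.npy")    # matrix of stored embeddings

DIM = 384  # assumed embedding size


def _hash(filename: str, text: str) -> str:
    """Key a document by filename + content so identical docs are never re-embedded."""
    return hashlib.sha256((filename + text).encode("utf-8")).hexdigest()


def _load_cache():
    """Return (meta dict, embedding matrix); empty cache if nothing is persisted yet."""
    if os.path.exists(META_PATH) and os.path.exists(VECS_PATH):
        with open(META_PATH) as f:
            meta = json.load(f)
        return meta, np.load(VECS_PATH)
    return {}, np.zeros((0, DIM), dtype=np.float32)


def get_embedding(filename: str, text: str, embed_fn):
    """Return the cached embedding on a hit; otherwise embed, append, and persist."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    meta, vecs = _load_cache()
    key = _hash(filename, text)
    if key in meta:                      # cache hit: skip the expensive model call
        return vecs[meta[key]]
    vec = np.asarray(embed_fn(text), dtype=np.float32)
    meta[key] = vecs.shape[0]            # new row index for this document
    vecs = np.vstack([vecs, vec])
    np.save(VECS_PATH, vecs)             # persist matrix + metadata for warm starts
    with open(META_PATH, "w") as f:
        json.dump(meta, f)
    return vec
```

On a second startup the same documents resolve through `embed_meta.json` without touching the embedding model, which is what lets the FAISS index be rebuilt (or warm-loaded) from cached vectors rather than recomputed.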