Harshdhsvguyt committed on
Commit
754d8d3
·
verified ·
1 Parent(s): 5516a25

Upload 19 files

LICENSE ADDED
@@ -0,0 +1,201 @@
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright [yyyy] [name of copyright owner]
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
README.md CHANGED
@@ -1,14 +1,40 @@
- ---
- title: Policy Rag Assistant
- emoji: 📊
- colorFrom: indigo
- colorTo: purple
- sdk: gradio
- sdk_version: 6.5.1
- app_file: app.py
- pinned: false
- license: mit
- short_description: Policy RAG assistant with prompt comparison, grounded QA
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ # Policy_RAG_Assistant
+
+ A minimal Retrieval-Augmented Generation (RAG) system that answers questions about company policy documents using grounded retrieval and structured prompting.
+
+ This project focuses on **prompt engineering, hallucination reduction, and evaluation**, rather than complex UI or heavy frameworks.
+
+ ---
+
+ ## Overview
+
+ The Policy RAG Assistant allows users to upload policy documents (PDF, TXT, MD) and ask questions about them.
+
+ The system:
+
+ - Retrieves relevant document chunks using semantic search
+ - Generates grounded answers using an LLM
+ - Avoids hallucinations using strict prompt design
+ - Provides structured evaluation metrics for responses
+
+ ---
+
+ ## Architecture Overview
+
+ User Question
+ ↓
+ Semantic Retrieval (ChromaDB)
+ ↓
+ Top-K Relevant Chunks
+ ↓
+ Prompt Builder (Initial / Improved)
+ ↓
+ Groq LLM (Llama 3.1)
+ ↓
+ Structured JSON Response + Evaluation
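The pipeline stages above can be sketched end to end. The following is a minimal, dependency-free sketch: a toy word-overlap retriever stands in for ChromaDB, no LLM is called, and the function names (`retrieve`, `build_prompt`) are illustrative, not the project's API:

```python
# Toy sketch of the retrieve -> prompt flow. The real app uses ChromaDB
# embeddings for retrieval and sends the prompt to a Groq-hosted model.

def retrieve(question, corpus, top_k=2):
    """Rank chunks by word overlap with the question (stand-in for semantic search)."""
    q_words = set(question.lower().split())
    scored = sorted(corpus, key=lambda c: -len(q_words & set(c.lower().split())))
    return scored[:top_k]

def build_prompt(context_chunks, question):
    """Assemble a grounded prompt from the top-k chunks."""
    context = "\n\n".join(context_chunks)
    return f"Answer strictly from CONTEXT.\n\nCONTEXT:\n{context}\n\nQUESTION: {question}"

corpus = [
    "Employees receive 20 vacation days per year.",
    "Expense reports are due within 30 days.",
]
chunks = retrieve("How many vacation days do employees get?", corpus)
prompt = build_prompt(chunks, "How many vacation days do employees get?")
```

In the real pipeline the prompt would then go to the LLM, which returns the structured JSON response shown at the bottom of the diagram.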
app.py ADDED
@@ -0,0 +1,198 @@
+ import streamlit as st
+ from src.loader import load_documents
+ from src.chunking import chunk_documents
+ from src.vectorstore import VectorStore
+ from src.rag_pipeline import RAGPipeline
+ from src.utils import ensure_directories
+ from src.evaluation import analyze_confidence_distribution
+ import os
+ import tempfile
+ from pathlib import Path
+
+
+ # Page config
+ st.set_page_config(page_title="Policy RAG Assistant", layout="wide")
+
+ # Initialize
+ ensure_directories()
+
+ # Check API key
+ if not os.getenv("GROQ_API_KEY"):
+     st.error("GROQ_API_KEY not set. Please set it as an environment variable.")
+     st.stop()
+
+ # Initialize session state
+ if "vector_store" not in st.session_state:
+     st.session_state.vector_store = None
+ if "rag_pipeline" not in st.session_state:
+     st.session_state.rag_pipeline = None
+ if "uploaded_files_count" not in st.session_state:
+     st.session_state.uploaded_files_count = 0
+
+ # Title
+ st.title("Policy RAG Assistant")
+ st.markdown("Ask questions about company policies")
+
+ # Sidebar
+ with st.sidebar:
+     st.header("Setup")
+
+     upload_method = st.radio(
+         "Choose upload method:",
+         ["Upload files here", "Load from data/policies/"],
+         key="upload_method"
+     )
+
+     if upload_method == "Upload files here":
+         uploaded_files = st.file_uploader(
+             "Upload policy documents",
+             type=["pdf", "txt", "md"],
+             accept_multiple_files=True,
+         )
+
+         if uploaded_files and st.button("Process Uploaded Files"):
+             with st.spinner("Processing uploaded files..."):
+                 from src.loader import load_pdf, load_text
+
+                 docs = []
+                 for uploaded_file in uploaded_files:
+                     try:
+                         with tempfile.NamedTemporaryFile(delete=False, suffix=Path(uploaded_file.name).suffix) as tmp_file:
+                             tmp_file.write(uploaded_file.getvalue())
+                             tmp_path = Path(tmp_file.name)
+
+                         if tmp_path.suffix.lower() == ".pdf":
+                             text = load_pdf(tmp_path)
+                         elif tmp_path.suffix.lower() in [".txt", ".md"]:
+                             text = load_text(tmp_path)
+                         else:
+                             continue
+
+                         if text.strip():
+                             docs.append({
+                                 "text": text,
+                                 "metadata": {
+                                     "source": uploaded_file.name,
+                                     "type": tmp_path.suffix[1:]
+                                 }
+                             })
+
+                         tmp_path.unlink()
+
+                     except Exception as e:
+                         st.error(f"Error processing {uploaded_file.name}: {e}")
+
+                 if docs:
+                     chunked = chunk_documents(docs, chunk_size=500, overlap=100)
+
+                     vector_store = VectorStore()
+                     vector_store.reset()
+                     vector_store.add_documents(chunked)
+
+                     st.session_state.vector_store = vector_store
+                     st.session_state.rag_pipeline = RAGPipeline(vector_store)
+                     st.session_state.uploaded_files_count = len(docs)
+
+                     st.success(f"Processed {len(docs)} documents, {len(chunked)} chunks")
+                 else:
+                     st.warning("No valid documents were processed")
+
+     else:
+         if st.button("Load Documents from Folder"):
+             with st.spinner("Loading documents..."):
+                 docs = load_documents()
+                 if docs:
+                     chunked = chunk_documents(docs, chunk_size=500, overlap=100)
+
+                     vector_store = VectorStore()
+                     vector_store.reset()
+                     vector_store.add_documents(chunked)
+
+                     st.session_state.vector_store = vector_store
+                     st.session_state.rag_pipeline = RAGPipeline(vector_store)
+                     st.session_state.uploaded_files_count = len(docs)
+
+                     st.success(f"Loaded {len(docs)} documents, {len(chunked)} chunks")
+                 else:
+                     st.warning("No documents found in data/policies/")
+
+     if st.session_state.vector_store:
+         st.divider()
+         col1, col2 = st.columns(2)
+         with col1:
+             st.metric("Documents", st.session_state.uploaded_files_count)
+         with col2:
+             st.metric("Total Chunks", st.session_state.vector_store.count())
+
+     st.divider()
+
+     st.header("Analytics")
+     if st.button("View Stats"):
+         stats = analyze_confidence_distribution()
+         st.json(stats)
+
+ # Main area
+ if st.session_state.rag_pipeline is None:
+     st.info("Upload documents or load from folder in the sidebar to get started")
+ else:
+     col1, col2 = st.columns([3, 1])
+
+     with col1:
+         question = st.text_input("Ask a question:", placeholder="e.g., What is the vacation policy?")
+
+     with col2:
+         prompt_type = st.selectbox("Prompt:", ["improved", "initial", "compare"])
+
+     if question:
+         if prompt_type == "compare":
+             colA, colB = st.columns(2)
+
+             with colA:
+                 st.subheader("Initial Prompt Result")
+                 result_initial = st.session_state.rag_pipeline.query(question, prompt_type="initial")
+                 st.write(result_initial["answer"])
+                 st.metric("Confidence", result_initial.get("confidence", "N/A"))
+                 if result_initial.get("evaluation"):
+                     st.json(result_initial["evaluation"])
+
+             with colB:
+                 st.subheader("Improved Prompt Result")
+                 result_improved = st.session_state.rag_pipeline.query(question, prompt_type="improved")
+                 st.write(result_improved["answer"])
+                 st.metric("Confidence", result_improved.get("confidence", "N/A"))
+                 if result_improved.get("evaluation"):
+                     st.json(result_improved["evaluation"])
+
+             display_chunks = result_improved["retrieved_chunks"]
+
+         else:
+             with st.spinner("Searching..."):
+                 response = st.session_state.rag_pipeline.query(question, prompt_type=prompt_type)
+
+             st.markdown("### Answer")
+             st.write(response["answer"])
+
+             col1, col2 = st.columns(2)
+             with col1:
+                 st.metric("Confidence", response.get("confidence", "N/A"))
+             with col2:
+                 st.metric("Sources Used", len(response["retrieved_chunks"]))
+
+             if response.get("evaluation"):
+                 st.subheader("Evaluation")
+                 st.json(response["evaluation"])
+
+             if response.get("evidence"):
+                 with st.expander("Evidence"):
+                     for i, ev in enumerate(response["evidence"], 1):
+                         st.markdown(f"{i}. {ev}")
+
+             display_chunks = response["retrieved_chunks"]
+
+         with st.expander("Retrieved Chunks"):
+             for i, chunk in enumerate(display_chunks, 1):
+                 st.markdown(f"Chunk {i} (score: {chunk.get('score', 0):.4f})")
+                 st.markdown(f"Source: {chunk.get('metadata', {}).get('source', 'Unknown')}")
+                 st.text(chunk["text"][:300] + "..." if len(chunk["text"]) > 300 else chunk["text"])
+                 st.divider()
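The chunk preview on the last line of `app.py` relies on Python's conditional-expression precedence: `a + "..." if cond else b` parses as `(a + "...") if cond else b`, so the ellipsis is only appended when the text is actually truncated. A standalone check of the same expression shape (the helper name `preview` is hypothetical):

```python
def preview(text, limit=300):
    # Concatenation binds tighter than the conditional, so this is
    # equivalent to: (text[:limit] + "...") if len(text) > limit else text
    return text[:limit] + "..." if len(text) > limit else text

assert preview("short") == "short"              # untouched below the limit
assert preview("x" * 400).endswith("...")       # truncated above the limit
assert len(preview("x" * 400)) == 303           # 300 chars + "..."
```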
main.py ADDED
@@ -0,0 +1,106 @@
+ import sys
+ import os
+ from dotenv import load_dotenv
+
+ from src.loader import load_documents
+ from src.chunking import chunk_documents
+ from src.vectorstore import VectorStore
+ from src.rag_pipeline import RAGPipeline
+ from src.utils import ensure_directories
+
+ # Load environment variables
+ load_dotenv()
+
+
+ def setup_vector_store():
+     """Initialize and populate vector store."""
+     print("Loading documents...")
+     docs = load_documents()
+
+     if not docs:
+         print("No documents found in data/policies/")
+         sys.exit(1)
+
+     print(f"Loaded {len(docs)} documents")
+
+     print("Chunking documents...")
+     chunked = chunk_documents(docs, chunk_size=500, overlap=100)
+     print(f"Created {len(chunked)} chunks")
+
+     print("Initializing vector store...")
+     vector_store = VectorStore()
+     vector_store.reset()
+     vector_store.add_documents(chunked)
+
+     print("Setup complete!")
+     return vector_store
+
+
+ def main():
+     """CLI interface for RAG pipeline."""
+     ensure_directories()
+
+     # ------------------------------------------------
+     # Check API key
+     # ------------------------------------------------
+     if not os.getenv("GROQ_API_KEY"):
+         print("Error: GROQ_API_KEY environment variable not set")
+         sys.exit(1)
+
+     # ------------------------------------------------
+     # Get question from command line
+     # ------------------------------------------------
+     if len(sys.argv) < 2:
+         print("Usage: python main.py 'Your question here'")
+         sys.exit(1)
+
+     question = " ".join(sys.argv[1:])
+
+     # ------------------------------------------------
+     # Setup RAG pipeline
+     # ------------------------------------------------
+     vector_store = setup_vector_store()
+     rag_pipeline = RAGPipeline(vector_store)
+
+     # ------------------------------------------------
+     # Query
+     # ------------------------------------------------
+     print(f"\nQuestion: {question}\n")
+
+     response = rag_pipeline.query(question, prompt_type="improved")
+
+     # ------------------------------------------------
+     # Display Results
+     # ------------------------------------------------
+     print("=" * 80)
+     print("ANSWER:")
+     print(response["answer"])
+
+     print("\n" + "=" * 80)
+     print(f"Confidence: {response.get('confidence', 'N/A')}")
+     print(f"Sources Retrieved: {len(response['retrieved_chunks'])}")
+
+     # Show retrieved chunk preview
+     if response.get("retrieved_chunks"):
+         print("\nRETRIEVED CONTEXT PREVIEW:")
+         for i, chunk in enumerate(response["retrieved_chunks"], 1):
+             preview = chunk["text"][:120].replace("\n", " ")
+             print(f"{i}. {preview}...")
+
+     if response.get("evidence"):
+         print("\nEVIDENCE:")
+         for i, ev in enumerate(response["evidence"], 1):
+             print(f"{i}. {ev}")
+
+     # Evaluation metrics
+     if response.get("evaluation"):
+         print("\n" + "=" * 80)
+         print("EVALUATION:")
+         for k, v in response["evaluation"].items():
+             print(f"{k}: {v}")
+
+     print("\n" + "=" * 80)
+
+
+ if __name__ == "__main__":
+     main()
requirements.txt ADDED
@@ -0,0 +1,6 @@
+ streamlit
+ chromadb
+ sentence-transformers
+ groq
+ python-dotenv
+ PyPDF2
src/__pycache__/chunking.cpython-313.pyc ADDED
Binary file (1.97 kB)
src/__pycache__/evaluation.cpython-313.pyc ADDED
Binary file (2.03 kB)
src/__pycache__/loader.cpython-313.pyc ADDED
Binary file (2.84 kB)
src/__pycache__/prompts.cpython-313.pyc ADDED
Binary file (2.05 kB)
src/__pycache__/rag_pipeline.cpython-313.pyc ADDED
Binary file (5.08 kB)
src/__pycache__/utils.cpython-313.pyc ADDED
Binary file (3.76 kB)
src/__pycache__/vectorstore.cpython-313.pyc ADDED
Binary file (4.37 kB)
src/chunking.py ADDED
@@ -0,0 +1,61 @@
+ from typing import List
+
+
+ def chunk_text(text: str, chunk_size: int = 500, overlap: int = 100) -> List[str]:
+     """
+     Split text into overlapping chunks based on word count.
+
+     Args:
+         text: Input text to chunk
+         chunk_size: Number of words per chunk
+         overlap: Number of overlapping words between chunks
+
+     Returns:
+         List of text chunks
+     """
+     words = text.split()
+     chunks = []
+
+     if len(words) <= chunk_size:
+         return [text]
+
+     start = 0
+     while start < len(words):
+         end = start + chunk_size
+         chunk_words = words[start:end]
+         chunks.append(" ".join(chunk_words))
+
+         if end >= len(words):
+             break
+
+         start = end - overlap
+
+     return chunks
+
+
+ def chunk_documents(documents: List[dict], chunk_size: int = 500, overlap: int = 100) -> List[dict]:
+     """
+     Chunk multiple documents while preserving metadata.
+
+     Returns:
+         List of dicts with 'text' and 'metadata' keys
+     """
+     chunked_docs = []
+
+     for doc in documents:
+         text = doc["text"]
+         metadata = doc.get("metadata", {})
+
+         chunks = chunk_text(text, chunk_size, overlap)
+
+         for i, chunk in enumerate(chunks):
+             chunked_docs.append({
+                 "text": chunk,
+                 "metadata": {
+                     **metadata,
+                     "chunk_id": i,
+                     "total_chunks": len(chunks)
+                 }
+             })
+
+     return chunked_docs
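With `chunk_size=500` and `overlap=100`, each new chunk starts `chunk_size - overlap = 400` words after the previous one, so a 1,200-word document yields chunks starting at words 0, 400 and 800, with adjacent chunks sharing 100 words. A self-contained restatement of the same loop (inlined here so the example runs without importing `src.chunking`):

```python
def chunk_text(text, chunk_size=500, overlap=100):
    # Same algorithm as src/chunking.py: fixed-size word windows
    # that step forward by (chunk_size - overlap) words each time.
    words = text.split()
    if len(words) <= chunk_size:
        return [text]
    chunks, start = [], 0
    while start < len(words):
        end = start + chunk_size
        chunks.append(" ".join(words[start:end]))
        if end >= len(words):
            break
        start = end - overlap
    return chunks

doc = " ".join(f"w{i}" for i in range(1200))
chunks = chunk_text(doc)
# Starts at words 0, 400, 800 -> 3 chunks; chunks 0 and 1 share words w400..w499.
```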
src/evaluation.py ADDED
@@ -0,0 +1,44 @@
+ import json
+ from pathlib import Path
+ from typing import List, Dict
+
+
+ def load_queries_log(log_file: str = "logs/queries.jsonl") -> List[Dict]:
+     """Load all logged queries."""
+     queries = []
+     if not Path(log_file).exists():
+         return queries
+
+     with open(log_file, "r") as f:
+         for line in f:
+             queries.append(json.loads(line))
+
+     return queries
+
+
+ def analyze_confidence_distribution(log_file: str = "logs/queries.jsonl") -> Dict:
+     """Analyze confidence score distribution from logs."""
+     queries = load_queries_log(log_file)
+
+     confidence_counts = {"High": 0, "Medium": 0, "Low": 0, "N/A": 0}
+
+     for query in queries:
+         confidence = query.get("response", {}).get("confidence", "N/A")
+         confidence_counts[confidence] = confidence_counts.get(confidence, 0) + 1
+
+     return {
+         "total_queries": len(queries),
+         "confidence_distribution": confidence_counts
+     }
+
+
+ def compare_prompts(question: str, rag_pipeline) -> Dict:
+     """Compare initial vs improved prompt responses."""
+     initial_response = rag_pipeline.query(question, prompt_type="initial")
+     improved_response = rag_pipeline.query(question, prompt_type="improved")
+
+     return {
+         "question": question,
+         "initial": initial_response,
+         "improved": improved_response
+     }
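The log format `analyze_confidence_distribution` expects is one JSON object per line, with the confidence label nested under `response`. A quick self-contained run against a temporary log (the tally logic is restated inline rather than imported from `src.evaluation`):

```python
import json
import os
import tempfile

# Write a tiny queries.jsonl in the expected shape: one object per line.
tmp = tempfile.NamedTemporaryFile("w", suffix=".jsonl", delete=False)
for conf in ["High", "High", "Low"]:
    tmp.write(json.dumps({"question": "q", "response": {"confidence": conf}}) + "\n")
tmp.close()

# Same tally as analyze_confidence_distribution, inlined.
counts = {"High": 0, "Medium": 0, "Low": 0, "N/A": 0}
with open(tmp.name) as f:
    queries = [json.loads(line) for line in f]
for q in queries:
    c = q.get("response", {}).get("confidence", "N/A")
    counts[c] = counts.get(c, 0) + 1

stats = {"total_queries": len(queries), "confidence_distribution": counts}
os.unlink(tmp.name)
```

Note that `counts.get(c, 0) + 1` also tolerates labels outside the four seeded keys, which keeps the tally robust if the LLM ever emits an unexpected confidence value.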
src/loader.py ADDED
@@ -0,0 +1,60 @@
+ import os
+ from pathlib import Path
+ from typing import List
+ import PyPDF2
+
+
+ def load_documents(directory: str = "data/policies") -> List[dict]:
+     """
+     Load all documents from the policies directory.
+     Supports PDF, TXT, and MD files.
+
+     Returns:
+         List of dicts with 'text' and 'metadata' keys
+     """
+     documents = []
+     policy_dir = Path(directory)
+
+     if not policy_dir.exists():
+         print(f"Warning: {directory} does not exist")
+         return documents
+
+     for file_path in policy_dir.iterdir():
+         if file_path.is_file():
+             try:
+                 if file_path.suffix.lower() == ".pdf":
+                     text = load_pdf(file_path)
+                 elif file_path.suffix.lower() in [".txt", ".md"]:
+                     text = load_text(file_path)
+                 else:
+                     continue
+
+                 if text.strip():
+                     documents.append({
+                         "text": text,
+                         "metadata": {
+                             "source": file_path.name,
+                             "type": file_path.suffix[1:]
+                         }
+                     })
+                     print(f"Loaded: {file_path.name}")
+             except Exception as e:
+                 print(f"Error loading {file_path.name}: {e}")
+
+     return documents
+
+
+ def load_pdf(file_path: Path) -> str:
+     """Extract text from PDF file."""
+     text = []
+     with open(file_path, "rb") as f:
+         reader = PyPDF2.PdfReader(f)
+         for page in reader.pages:
+             text.append(page.extract_text())
+     return "\n".join(text)
+
+
+ def load_text(file_path: Path) -> str:
+     """Load text from TXT or MD file."""
+     with open(file_path, "r", encoding="utf-8") as f:
+         return f.read()
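The dispatch on `file_path.suffix.lower()` is what decides which loader runs; lowercasing means `handbook.PDF` and `handbook.pdf` are treated alike, and unknown extensions are silently skipped. A standalone check of that logic (the helper `pick_loader` is illustrative, not part of `src/loader.py`):

```python
from pathlib import Path

def pick_loader(path: Path) -> str:
    # Mirrors the suffix dispatch in load_documents.
    suffix = path.suffix.lower()
    if suffix == ".pdf":
        return "pdf"
    if suffix in [".txt", ".md"]:
        return "text"
    return "skip"

assert pick_loader(Path("handbook.PDF")) == "pdf"   # case-insensitive
assert pick_loader(Path("notes.md")) == "text"
assert pick_loader(Path("image.png")) == "skip"     # unsupported type ignored
```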
src/prompts.py ADDED
@@ -0,0 +1,69 @@
+ INITIAL_PROMPT = """You are a helpful assistant that answers questions about company policies.
+
+ Context:
+ {context}
+
+ Question: {question}
+
+ Answer the question based on the context provided above."""
+
+
+ IMPROVED_PROMPT = """You are a RETRIEVAL-GROUNDED Policy Question Answering Assistant.
+
+ Your job is to answer strictly using the provided CONTEXT.
+ You are NOT allowed to use outside knowledge.
+
+ Follow these steps internally:
+ 1. Read the context carefully.
+ 2. Identify exact sentences that answer the question.
+ 3. If no supporting sentences exist, reply:
+    "I don't know based on the provided documents."
+
+ STRICT RULES:
+ - Do NOT guess.
+ - Do NOT add new information.
+ - Every claim MUST be supported by a quote from CONTEXT.
+ - Evidence MUST be SHORT DIRECT QUOTES copied exactly from the context.
+ - If evidence is missing → answer must be "I don't know based on the provided documents."
+
+ CONTEXT:
+ {context}
+
+ QUESTION:
+ {question}
+
+ Return ONLY valid JSON:
+
+ {{
+     "answer": "Grounded answer or 'I don't know based on the provided documents.'",
+     "evidence": ["exact short quote 1", "exact short quote 2"],
+     "confidence": "High|Medium|Low"
+ }}
+
+ Confidence Guidelines:
+ - High → Answer explicitly stated in one place
+ - Medium → Requires combining multiple context sections
+ - Low → Weak or partial support
+
+ JSON Response:"""
+
+
+ def get_prompt(prompt_type: str, context: str, question: str) -> str:
+     """
+     Get formatted prompt.
+
+     Args:
+         prompt_type: "initial" or "improved"
+         context: Retrieved document context
+         question: User question
+
+     Returns:
+         Formatted prompt string
+     """
+     if prompt_type == "initial":
+         template = INITIAL_PROMPT
+     else:
+         template = IMPROVED_PROMPT
+
+     return template.format(context=context, question=question)
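One subtlety in `IMPROVED_PROMPT`: the JSON schema is written with doubled braces (`{{` / `}}`) so that `str.format` emits literal braces instead of treating them as placeholders, while `{context}` and `{question}` are still substituted. A minimal demonstration with a shortened template:

```python
# Doubled braces survive str.format as single literal braces;
# single-brace fields are substituted as usual.
template = 'CONTEXT: {context}\nReturn JSON: {{"answer": "...", "confidence": "High|Medium|Low"}}'
rendered = template.format(context="Vacation is 20 days.")
```

Without the doubling, `format` would raise `KeyError` on the JSON keys, since it would read `{"answer": ...}` as a replacement field.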
src/rag_pipeline.py ADDED
@@ -0,0 +1,171 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
+ from groq import Groq
+ from typing import List, Dict
+ from src.vectorstore import VectorStore
+ from src.prompts import get_prompt
+ from src.utils import safe_json_parse, log_query, get_groq_api_key, evaluate_response
+ 
+ from dotenv import load_dotenv
+ 
+ load_dotenv()
+ 
+ 
+ class RAGPipeline:
+     """Main RAG pipeline for question answering."""
+ 
+     def __init__(self, vector_store: VectorStore, model: str = "llama-3.1-8b-instant"):
+         """Initialize RAG pipeline."""
+         self.vector_store = vector_store
+         self.model = model
+         self.client = Groq(api_key=get_groq_api_key())
+ 
+     def query(self, question: str, prompt_type: str = "improved", top_k: int = 5) -> Dict:
+         """
+         Answer a question using RAG.
+         """
+         # ------------------------------------------------
+         # 1️⃣ Retrieve relevant documents
+         # ------------------------------------------------
+         retrieved_chunks = self.vector_store.search(question, top_k=top_k)
+ 
+         # Apply simple reranking (BONUS FEATURE)
+         if retrieved_chunks:
+             retrieved_chunks = self.rerank_simple(retrieved_chunks, question)
+ 
+         # ------------------------------------------------
+         # 2️⃣ Handle the case where nothing was retrieved
+         # ------------------------------------------------
+         if not retrieved_chunks:
+             response = {
+                 "answer": "I don't know based on the provided documents.",
+                 "evidence": [],
+                 "confidence": "Low",
+                 "retrieved_chunks": []
+             }
+ 
+             # Add evaluation metrics
+             evaluation = evaluate_response(question, response, prompt_type)
+             response["evaluation"] = evaluation
+ 
+             log_query(question, [], response, prompt_type)
+             return response
+ 
+         # ------------------------------------------------
+         # 3️⃣ Build context
+         # ------------------------------------------------
+         context = self._build_context(retrieved_chunks)
+ 
+         # (Optional safety) Truncate overly long context
+         context = context[:4000]
+ 
+         # ------------------------------------------------
+         # 4️⃣ Create prompt
+         # ------------------------------------------------
+         prompt = get_prompt(prompt_type, context, question)
+ 
+         # ------------------------------------------------
+         # 5️⃣ Call the Groq API
+         # ------------------------------------------------
+         try:
+             completion = self.client.chat.completions.create(
+                 model=self.model,
+                 messages=[{"role": "user", "content": prompt}],
+                 temperature=0.0,  # deterministic output for RAG
+                 max_tokens=1024
+             )
+ 
+             response_text = completion.choices[0].message.content
+ 
+             # ------------------------------------------------
+             # 6️⃣ Parse response
+             # ------------------------------------------------
+             if prompt_type == "improved":
+                 parsed = safe_json_parse(response_text)
+ 
+                 if parsed:
+                     response = {
+                         "answer": parsed.get("answer", response_text),
+                         "evidence": parsed.get("evidence", []),
+                         "confidence": parsed.get("confidence", "Medium"),
+                         "retrieved_chunks": retrieved_chunks
+                     }
+                 else:
+                     # Fallback if JSON parsing fails
+                     response = {
+                         "answer": response_text,
+                         "evidence": [],
+                         "confidence": "Medium",
+                         "retrieved_chunks": retrieved_chunks
+                     }
+             else:
+                 response = {
+                     "answer": response_text,
+                     "evidence": [],
+                     "confidence": "N/A",
+                     "retrieved_chunks": retrieved_chunks
+                 }
+ 
+             # ------------------------------------------------
+             # 7️⃣ Add evaluation metrics
+             # ------------------------------------------------
+             evaluation = evaluate_response(question, response, prompt_type)
+             response["evaluation"] = evaluation
+ 
+             # ------------------------------------------------
+             # 8️⃣ Log query
+             # ------------------------------------------------
+             log_query(question, retrieved_chunks, response, prompt_type)
+ 
+             return response
+ 
+         except Exception as e:
+             print(f"Error calling LLM: {e}")
+ 
+             response = {
+                 "answer": "The system encountered an error while generating a response.",
+                 "evidence": [],
+                 "confidence": "Low",
+                 "retrieved_chunks": retrieved_chunks
+             }
+ 
+             evaluation = evaluate_response(question, response, prompt_type)
+             response["evaluation"] = evaluation
+ 
+             # Log the failed query as well so errors show up in logs/queries.jsonl
+             log_query(question, retrieved_chunks, response, prompt_type)
+ 
+             return response
+ 
+     # ------------------------------------------------
+     # Helper: Build context
+     # ------------------------------------------------
+     def _build_context(self, chunks: List[Dict]) -> str:
+         """Build context string from retrieved chunks."""
+         context_parts = []
+ 
+         for i, chunk in enumerate(chunks, 1):
+             source = chunk.get("metadata", {}).get("source", "Unknown")
+             text = chunk["text"]
+             context_parts.append(f"[Document {i} - {source}]\n{text}\n")
+ 
+         return "\n".join(context_parts)
+ 
+     # ------------------------------------------------
+     # BONUS: Simple reranker
+     # ------------------------------------------------
+     def rerank_simple(self, chunks: List[Dict], question: str) -> List[Dict]:
+         """
+         Simple reranking based on keyword overlap.
+         """
+         question_words = set(question.lower().split())
+ 
+         for chunk in chunks:
+             text_words = set(chunk["text"].lower().split())
+             overlap = len(question_words & text_words)
+             chunk["keyword_score"] = overlap
+ 
+         # Higher keyword overlap first; ties broken by vector score, where
+         # 'score' is a distance (smaller is better), hence the negation.
+         reranked = sorted(
+             chunks,
+             key=lambda x: (x.get("keyword_score", 0), -x.get("score", 0)),
+             reverse=True
+         )
+ 
+         return reranked
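The keyword-overlap reranking step can be exercised in isolation. This self-contained toy version (sample chunks and the function name are made up) mirrors the sort key used above: more overlapping query words first, ties broken by vector distance, with smaller distance winning:

```python
# Toy version of keyword-overlap reranking: count how many question words
# appear in each chunk, then sort by that count, breaking ties with the
# vector score, where a smaller distance ranks higher (hence the negation).
from typing import Dict, List

def rerank_by_overlap(chunks: List[Dict], question: str) -> List[Dict]:
    question_words = set(question.lower().split())
    for chunk in chunks:
        chunk["keyword_score"] = len(question_words & set(chunk["text"].lower().split()))
    return sorted(
        chunks,
        key=lambda c: (c["keyword_score"], -c.get("score", 0.0)),
        reverse=True,
    )

# Made-up sample chunks with ChromaDB-style distance scores
sample = [
    {"text": "Vacation days accrue monthly for all employees", "score": 0.10},
    {"text": "The remote work policy applies to full-time staff", "score": 0.30},
]
ranked = rerank_by_overlap(sample, "What is the remote work policy?")
```

Note that plain `split()` leaves punctuation attached ("policy?" does not match "policy"), so this heuristic undercounts overlap slightly; stripping punctuation would tighten it.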
src/utils.py ADDED
@@ -0,0 +1,106 @@
+ import os
+ import json
+ from datetime import datetime
+ from pathlib import Path
+ 
+ 
+ def ensure_directories():
+     """Create necessary directories if they don't exist."""
+     Path("data/policies").mkdir(parents=True, exist_ok=True)
+     Path("logs").mkdir(parents=True, exist_ok=True)
+     Path("chroma_db").mkdir(parents=True, exist_ok=True)
+ 
+ 
+ def log_query(question, retrieved_chunks, response, prompt_type="improved"):
+     """Log query details to JSONL file."""
+     log_entry = {
+         "timestamp": datetime.now().isoformat(),
+         "question": question,
+         "prompt_type": prompt_type,
+         "num_chunks_retrieved": len(retrieved_chunks),
+         "chunks": [
+             {
+                 "text": chunk["text"][:200] + "..." if len(chunk["text"]) > 200 else chunk["text"],
+                 "metadata": chunk.get("metadata", {})
+             }
+             for chunk in retrieved_chunks
+         ],
+         "response": response
+     }
+ 
+     log_file = "logs/queries.jsonl"
+     with open(log_file, "a", encoding="utf-8") as f:
+         f.write(json.dumps(log_entry, ensure_ascii=False) + "\n")
+ 
+ 
+ def get_groq_api_key():
+     """Get Groq API key from environment."""
+     api_key = os.getenv("GROQ_API_KEY")
+     if not api_key:
+         raise ValueError("GROQ_API_KEY environment variable not set")
+     return api_key
+ 
+ 
+ def safe_json_parse(text):
+     """Safely parse JSON from LLM response."""
+     try:
+         # Try to find JSON in the response
+         start = text.find("{")
+         end = text.rfind("}") + 1
+         if start != -1 and end > start:
+             json_str = text[start:end]
+             return json.loads(json_str)
+         return None
+     except Exception:
+         return None
+ 
+ 
+ # ============================================================
+ # ⭐ NEW: Simple RAG Evaluation Metrics
+ # ============================================================
+ 
+ def evaluate_response(question: str, response: dict, prompt_type: str) -> dict:
+     """
+     Generate simple evaluation metrics for RAG output.
+ 
+     Metrics:
+     - Accuracy (basic heuristic)
+     - Groundedness (based on evidence presence)
+     - Hallucination Risk
+     - Prompt Version
+     """
+     answer = response.get("answer", "")
+     evidence = response.get("evidence", [])
+ 
+     # ---------------------------
+     # Accuracy (simple heuristic)
+     # ---------------------------
+     if isinstance(answer, str) and answer.startswith("I don't know"):
+         accuracy = "⚠️"
+     else:
+         accuracy = "✅"
+ 
+     # ---------------------------
+     # Groundedness
+     # ---------------------------
+     groundedness = "✅" if evidence else "⚠️"
+ 
+     # ---------------------------
+     # Hallucination Risk
+     # ---------------------------
+     if isinstance(answer, str) and answer.startswith("I don't know"):
+         hallucination = "LOW"
+     elif evidence:
+         hallucination = "LOW"
+     else:
+         hallucination = "MEDIUM"
+ 
+     evaluation = {
+         "Accuracy": accuracy,
+         "Groundedness": groundedness,
+         "Hallucination Risk": hallucination,
+         "Prompt Version": prompt_type
+     }
+ 
+     return evaluation
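The slice-and-parse strategy of `safe_json_parse`, recovering a JSON object even when the model wraps it in prose, can be demonstrated standalone (the function name here is illustrative):

```python
# Same approach as safe_json_parse: slice from the first '{' to the last '}'
# and attempt to parse that slice, returning None on any failure.
import json
from typing import Optional

def extract_json_object(text: str) -> Optional[dict]:
    start = text.find("{")
    end = text.rfind("}") + 1
    if start == -1 or end <= start:
        return None
    try:
        return json.loads(text[start:end])
    except json.JSONDecodeError:
        return None

reply = 'Sure! Here is the JSON: {"answer": "20 days", "confidence": "High"} Hope that helps.'
parsed = extract_json_object(reply)
```

This tolerates chatty preambles and trailing text, though it would still fail on malformed JSON or multiple unrelated objects in one reply.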
src/vectorstore.py ADDED
@@ -0,0 +1,93 @@
+ import chromadb
+ from chromadb.config import Settings
+ from sentence_transformers import SentenceTransformer
+ from typing import List
+ 
+ 
+ class VectorStore:
+     """Simple ChromaDB wrapper for document storage and retrieval."""
+ 
+     def __init__(self, collection_name: str = "policy_docs", persist_directory: str = "./chroma_db"):
+         """Initialize ChromaDB and embedding model."""
+         self.client = chromadb.PersistentClient(
+             path=persist_directory,
+             settings=Settings(anonymized_telemetry=False)
+         )
+ 
+         self.embedding_model = SentenceTransformer("all-MiniLM-L6-v2")
+         self.collection_name = collection_name
+ 
+         # Get or create collection
+         self.collection = self.client.get_or_create_collection(
+             name=collection_name,
+             metadata={"hnsw:space": "cosine"}
+         )
+ 
+     def add_documents(self, documents: List[dict]):
+         """
+         Add documents to the vector store.
+ 
+         Args:
+             documents: List of dicts with 'text' and 'metadata' keys
+         """
+         if not documents:
+             print("No documents to add")
+             return
+ 
+         texts = [doc["text"] for doc in documents]
+         metadatas = [doc.get("metadata", {}) for doc in documents]
+ 
+         # Offset IDs by the current count so repeated calls don't produce
+         # colliding IDs like "doc_0"
+         start = self.collection.count()
+         ids = [f"doc_{start + i}" for i in range(len(documents))]
+ 
+         # Generate embeddings
+         embeddings = self.embedding_model.encode(texts).tolist()
+ 
+         # Add to ChromaDB
+         self.collection.add(
+             embeddings=embeddings,
+             documents=texts,
+             metadatas=metadatas,
+             ids=ids
+         )
+ 
+         print(f"Added {len(documents)} chunks to vector store")
+ 
+     def search(self, query: str, top_k: int = 5) -> List[dict]:
+         """
+         Search for relevant documents.
+ 
+         Returns:
+             List of dicts with 'text', 'metadata', and 'score' keys,
+             where 'score' is a cosine distance (smaller = more similar)
+         """
+         # Generate query embedding
+         query_embedding = self.embedding_model.encode([query]).tolist()
+ 
+         # Search
+         results = self.collection.query(
+             query_embeddings=query_embedding,
+             n_results=top_k
+         )
+ 
+         # Format results
+         documents = []
+         if results["documents"] and results["documents"][0]:
+             for i, doc in enumerate(results["documents"][0]):
+                 documents.append({
+                     "text": doc,
+                     "metadata": results["metadatas"][0][i] if results["metadatas"] else {},
+                     "score": results["distances"][0][i] if results["distances"] else 0
+                 })
+ 
+         return documents
+ 
+     def reset(self):
+         """Delete and recreate the collection."""
+         self.client.delete_collection(self.collection_name)
+         self.collection = self.client.create_collection(
+             name=self.collection_name,
+             metadata={"hnsw:space": "cosine"}
+         )
+         print("Vector store reset")
+ 
+     def count(self) -> int:
+         """Get count of documents in collection."""
+         return self.collection.count()
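Because the collection is created with `{"hnsw:space": "cosine"}`, the `score` returned by `search` is a cosine distance, i.e. 1 - cosine similarity, so smaller values mean closer matches. A pure-Python sketch of that metric:

```python
# 1 - cosine similarity: the metric ChromaDB uses when a collection is
# configured with hnsw:space='cosine'. Distance 0 means identical direction,
# distance 1 means orthogonal (unrelated) vectors.
import math
from typing import Sequence

def cosine_distance(a: Sequence[float], b: Sequence[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

d_parallel = cosine_distance([1.0, 0.0], [2.0, 0.0])    # same direction
d_orthogonal = cosine_distance([1.0, 0.0], [0.0, 1.0])  # unrelated direction
```

This is why the reranker in `rag_pipeline.py` negates `score` when sorting in descending order: it puts the smallest distances first.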