danjung9 commited on
Commit aa4411c · verified · 1 Parent(s): e7cf495

Upload 16 files

.gitattributes CHANGED
@@ -33,3 +33,5 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ assets/requirementsassistant.png filter=lfs diff=lfs merge=lfs -text
+ FIA[[:space:]]2025[[:space:]]Formula[[:space:]]1[[:space:]]Sporting[[:space:]]Regulations[[:space:]]-[[:space:]]Issue[[:space:]]5[[:space:]]-[[:space:]]2025-04-30.pdf filter=lfs diff=lfs merge=lfs -text
FIA 2025 Formula 1 Sporting Regulations - Issue 5 - 2025-04-30.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:525eef22a60f0755a4468281dd7c78c5ea5bd0eec38dc1b09cc31a0c0132854e
+ size 1294826
README.md CHANGED
@@ -1,13 +1,38 @@
- ---
- title: Requirements Management
- emoji: 💻
- colorFrom: green
- colorTo: green
- sdk: gradio
- sdk_version: 6.0.1
- app_file: app.py
- pinned: false
- short_description: Automating Requirements Management
- ---
- 
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ # Requirements Assistant (Gradio + RAG)
+ 
+ Interactive requirements assistant built with Gradio. Upload a requirements/spec document, ask questions, and the app retrieves context and either drafts a Jira-style ticket or produces a compliance matrix using a Qwen model on OpenRouter. Local sentence-transformer embeddings plus ChromaDB keep everything in memory for fast, lightweight retrieval.
+ 
+ ## Features
+ - Gradio UI with multi-conversation history.
+ - File upload (.txt/.md/.json/.csv/.pdf) with on-the-fly chunking into an in-memory ChromaDB collection.
+ - Simple RAG summarizer that routes to one of two agents:
+   - Jira ticket generator (JSON shape).
+   - Compliance matrix generator (Markdown table).
+ - OpenRouter model calls (defaults to `qwen/qwen3-4b:free`) with streaming responses.
+ 
+ ## Prerequisites
+ - Python 3.10+ recommended.
+ - pip for installing dependencies.
+ 
+ ## Setup
+ 1. (Optional) Create and activate a virtual environment.
+ 2. Install dependencies:
+    ```bash
+    pip install -r requirements.txt
+    ```
+ 3. Provide an API key (preferred: OpenRouter):
+    - Create a `.env` alongside `app.py` (auto-loaded on startup):
+      ```
+      OPENROUTER_API_KEY=sk-or-...
+      ```
+    - `OPENAI_API_KEY` is also accepted as a fallback.
+ 
+ ## Run the app
+ ```bash
+ python app.py
+ ```
+ Gradio will print a local URL. Open it, start a new conversation, optionally upload a file, and ask your question.
+ 
+ ## Notes
+ - Embeddings use `zacCMU/miniLM2-ENG3` and store data in an in-memory Chroma collection; restarting clears uploaded context.
+ - If you see 429 rate limits from OpenRouter's free pool, add your own key or switch to a different model in `pipelines/requirements_pipe.py`.
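
The "on-the-fly chunking" mentioned under Features can be pictured with a minimal sketch. The helper below is hypothetical (the app's actual chunking lives in `app.py`); it shows the general idea of splitting an uploaded document into overlapping character windows before embedding them into the Chroma collection:

```python
def chunk_text(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping character windows for embedding."""
    step = size - overlap
    chunks = []
    for start in range(0, len(text), step):
        window = text[start:start + size].strip()
        if window:
            chunks.append(window)
    return chunks

chunks = chunk_text("x" * 1200)
print(len(chunks), [len(c) for c in chunks])  # 3 [500, 500, 300]
```

Overlap keeps a requirement that straddles a chunk boundary retrievable from at least one window.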
app.py ADDED
@@ -0,0 +1,1017 @@
+ from __future__ import annotations
+ import uuid
+ import time
+ import os
+ import gradio as gr
+ import modelscope_studio.components.antd as antd
+ import modelscope_studio.components.antdx as antdx
+ import modelscope_studio.components.base as ms
+ import modelscope_studio.components.pro as pro
+ from config import DEFAULT_LOCALE, DEFAULT_SETTINGS, DEFAULT_THEME, DEFAULT_SUGGESTIONS, save_history, user_config, bot_config, welcome_config, api_key
+ from ui_components.logo import Logo
+ from ui_components.settings_header import SettingsHeader
+ from ui_components.thinking_button import ThinkingButton
+ from pipelines.requirements_pipe import (
+     RAGModel as RequirementsRAGModel,
+     Router as RequirementsRouter,
+     RequirementsPipeline,
+     JiraAgent,
+     ComplianceMatrixAgent,
+ )
+ from pypdf import PdfReader
+ 
+ # RAG dependencies
+ import chromadb
+ from sentence_transformers import SentenceTransformer
+ 
+ # Global RAG variables (defined before Gradio_Events)
+ RAG_COLLECTION = None
+ RAG_EMBEDDER = None
+ RAG_N_RESULTS = 3
+ RAG_MODEL_ID = "zacCMU/miniLM2-ENG3"
+ client = None
+ REQUIREMENTS_PIPELINE = None
+ 
+ 
+ def load_env_file(env_path: str | None = None):
+     """
+     Lightweight .env loader to populate os.environ if keys are missing.
+     Falls back to the .env that lives next to this file so launching from
+     another working directory still picks up keys.
+     """
+     candidate_paths = []
+     if env_path:
+         candidate_paths.append(env_path)
+     else:
+         base_dir = os.path.dirname(os.path.abspath(__file__))
+         candidate_paths.append(os.path.join(base_dir, ".env"))
+         candidate_paths.append(".env")
+ 
+     for path in candidate_paths:
+         if not os.path.exists(path):
+             continue
+         try:
+             with open(path, "r", encoding="utf-8") as f:
+                 for line in f:
+                     line = line.strip()
+                     if not line or line.startswith("#") or "=" not in line:
+                         continue
+                     key, value = line.split("=", 1)
+                     if key and key not in os.environ:
+                         os.environ[key] = value
+             print(f"Loaded environment variables from {path}")
+             return
+         except Exception as exc:
+             print(f"Warning: failed to load {path}: {exc}")
+ 
+ 
+ # Load .env early so API keys (e.g., OPENROUTER_API_KEY) are available.
+ load_env_file()
+ # Basic sanity check so missing keys are obvious in the logs.
+ if not (os.getenv("OPENROUTER_API_KEY") or os.getenv("OPENAI_API_KEY")):
+     print("Warning: OPENROUTER_API_KEY / OPENAI_API_KEY not set; OpenRouter calls will fail.")
+ 
+ MAX_CONTEXT_FILE_SIZE = 2 * 1024 * 1024  # 2 MB
+ MAX_CONTEXT_FILE_CHARACTERS = 6000
+ SUPPORTED_CONTEXT_FILE_EXTENSIONS = {".txt", ".md", ".json", ".csv", ".pdf"}
+ 
+ def _extract_uploaded_file_path(file_reference):
+     if not file_reference:
+         return None
+     if isinstance(file_reference, list):
+         if not file_reference:
+             return None
+         return _extract_uploaded_file_path(file_reference[0])
+     if isinstance(file_reference, str):
+         return file_reference
+     if isinstance(file_reference, dict):
+         return file_reference.get("name") or file_reference.get("path")
+     if hasattr(file_reference, "name"):
+         return getattr(file_reference, "name")
+     return None
+ 
+ 
+ def load_context_file(file_reference):
+     file_path = _extract_uploaded_file_path(file_reference)
+     if not file_path or not os.path.exists(file_path):
+         raise gr.Error("Unable to read the uploaded file.")
+ 
+     file_size = os.path.getsize(file_path)
+     if file_size > MAX_CONTEXT_FILE_SIZE:
+         raise gr.Error("File too large. Limit is 2 MB.")
+ 
+     _, ext = os.path.splitext(file_path)
+     if ext and ext.lower() not in SUPPORTED_CONTEXT_FILE_EXTENSIONS:
+         allowed = ", ".join(sorted(SUPPORTED_CONTEXT_FILE_EXTENSIONS))
+         raise gr.Error(f"Unsupported file type. Allowed: {allowed}")
+ 
+     content = ""
+     if ext.lower() == ".pdf":
+         try:
+             reader = PdfReader(file_path)
+             text_parts = []
+             for page in reader.pages:
+                 text_parts.append(page.extract_text() or "")
+             content = "\n".join(text_parts)
+         except Exception as exc:
+             raise gr.Error(f"Unable to read PDF: {exc}")
+     else:
+         with open(file_path, "r", encoding="utf-8", errors="ignore") as f:
+             content = f.read()
+     truncated = len(content) > MAX_CONTEXT_FILE_CHARACTERS
+     content = content[:MAX_CONTEXT_FILE_CHARACTERS].strip()
+     # When a file is uploaded, add its content to the ChromaDB collection too.
+     add_documents_to_collection(collection=RAG_COLLECTION, docs=content)
+ 
+     return {
+         "name": os.path.basename(file_path),
+         "size": file_size,
+         "content": content,
+         "truncated": truncated
+     }
+ 
+ def resolve_uploaded_file(uploaded_file_value, state_value):
+     conversation_id = state_value.get("conversation_id")
+     previous_settings = {}
+     if conversation_id:
+         previous_settings = state_value["conversation_contexts"].get(
+             conversation_id, {}).get("settings", {})
+     if uploaded_file_value:
+         return load_context_file(uploaded_file_value)
+     return previous_settings.get("uploaded_file")
+ 
+ 
+ def format_file_status(uploaded_file):
+     if not uploaded_file:
+         return "No file uploaded"
+     size_kb = uploaded_file.get("size", 0) / 1024
+     size_suffix = f" (~{size_kb:.1f} KB)" if size_kb else ""
+     status = f"Using file: {uploaded_file.get('name', 'file')}{size_suffix}"
+     if uploaded_file.get("truncated"):
+         status += " (content truncated)"
+     return status
+ 
+ 
+ def format_history(history, sys_prompt, uploaded_file=None):
+     messages = []
+     system_sections = []
+     if sys_prompt:
+         system_sections.append(sys_prompt)
+     if uploaded_file and uploaded_file.get("content"):
+         file_section = (
+             f"Reference file ({uploaded_file.get('name', 'file')}):\n"
+             f"{uploaded_file.get('content', '')}")
+         if uploaded_file.get("truncated"):
+             file_section += (
+                 "\n\n[File content truncated to the first "
+                 f"{MAX_CONTEXT_FILE_CHARACTERS} characters.]")
+         system_sections.append(file_section)
+     if system_sections:
+         messages.append({
+             "role": "system",
+             "content": "\n\n".join(system_sections)
+         })
+     for item in history:
+         if item["role"] == "user":
+             messages.append({"role": "user", "content": item["content"]})
+         elif item["role"] == "assistant":
+             contents = [{
+                 "type": "text",
+                 "text": content["content"]
+             } for content in item["content"] if content["type"] == "text"]
+             messages.append({
+                 "role": "assistant",
+                 "content": contents[0]["text"] if len(contents) > 0 else ""
+             })
+     return messages
+ 
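
`format_history` flattens an assistant turn's rich content list (thinking blocks, text blocks) down to the first text item's content when rebuilding the OpenAI-style message list. A minimal, self-contained restatement of that rule (hypothetical helper name):

```python
def first_text(contents: list[dict]) -> str:
    """Mirror the assistant-turn flattening in format_history:
    keep only 'text' items and return the first one's content."""
    texts = [c["content"] for c in contents if c["type"] == "text"]
    return texts[0] if texts else ""

print(first_text([
    {"type": "tool", "content": "reasoning trace"},
    {"type": "text", "content": "Final answer"},
]))  # Final answer
```

This means reasoning ("tool") blocks are kept in the UI history but never echoed back to the model.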
+ class Gradio_Events:
+ 
+     @staticmethod
+     def submit(state_value):
+         history = state_value["conversation_contexts"][
+             state_value["conversation_id"]]["history"]
+         settings = state_value["conversation_contexts"][
+             state_value["conversation_id"]]["settings"]
+         enable_thinking = state_value["conversation_contexts"][
+             state_value["conversation_id"]]["enable_thinking"]
+         model = settings.get("model")
+         messages = format_history(history,
+                                   sys_prompt=settings.get("sys_prompt", ""),
+                                   uploaded_file=settings.get("uploaded_file"))
+ 
+         history.append({
+             "role": "assistant",
+             "content": [],
+             "key": str(uuid.uuid4()),
+             "header": "Response",
+             "loading": True,
+             "status": "pending"
+         })
+ 
+         yield {
+             chatbot: gr.update(value=history),
+             state: gr.update(value=state_value),
+         }
+         try:
+             pipeline = ensure_pipeline_initialized()
+ 
+             response = pipeline.stream(messages=messages)
+             start_time = time.time()
+             reasoning_content = ""
+             answer_content = ""
+             is_thinking = False
+             is_answering = False
+             contents = [None, None]
+             for chunk in response:
+                 delta = chunk.output.choices[0].message
+                 delta_content = (getattr(delta, "content", None)
+                                  if not isinstance(delta, dict) else delta.get("content"))
+                 delta_reason = (getattr(delta, "reasoning_content", None)
+                                 if not isinstance(delta, dict) else delta.get("reasoning_content"))
+ 
+                 if delta_reason:
+                     if not is_thinking:
+                         contents[0] = {
+                             "type": "tool",
+                             "content": "",
+                             "options": {
+                                 "title": "Thinking...",
+                                 "status": "pending"
+                             },
+                             "copyable": False,
+                             "editable": False
+                         }
+                         is_thinking = True
+                     reasoning_content += delta_reason
+                 if delta_content:
+                     if not is_answering:
+                         thought_cost_time = "{:.2f}".format(time.time() - start_time)
+                         if contents[0]:
+                             contents[0]["options"]["title"] = f"End of Thought ({thought_cost_time}s)"
+                             contents[0]["options"]["status"] = "done"
+                         contents[1] = {
+                             "type": "text",
+                             "content": "",
+                         }
+                         is_answering = True
+                     answer_content += delta_content
+ 
+                 if contents[0]:
+                     contents[0]["content"] = reasoning_content
+                 if contents[1]:
+                     contents[1]["content"] = answer_content
+                 history[-1]["content"] = [
+                     content for content in contents if content
+                 ]
+ 
+                 history[-1]["loading"] = False
+                 yield {
+                     chatbot: gr.update(value=history),
+                     state: gr.update(value=state_value)
+                 }
+             print("model: ", model, "-", "reasoning_content: ",
+                   reasoning_content, "\n", "content: ", answer_content)
+             history[-1]["status"] = "done"
+             cost_time = "{:.2f}".format(time.time() - start_time)
+             history[-1]["footer"] = f"{cost_time}s"
+             yield {
+                 chatbot: gr.update(value=history),
+                 state: gr.update(value=state_value),
+             }
+         except Exception as e:
+             print("model: ", model, "-", "Error: ", e)
+             history[-1]["loading"] = False
+             history[-1]["status"] = "done"
+             history[-1]["content"] += [{
+                 "type": "text",
+                 "content": f'<span style="color: var(--color-red-500)">{str(e)}</span>'
+             }]
+             yield {
+                 chatbot: gr.update(value=history),
+                 state: gr.update(value=state_value)
+             }
+         return
+ 
+     @staticmethod
+     def add_message(input_value, settings_form_value, thinking_btn_state_value,
+                     uploaded_file_value, state_value):
+         if not state_value["conversation_id"]:
+             random_id = str(uuid.uuid4())
+             history = []
+             state_value["conversation_id"] = random_id
+             state_value["conversation_contexts"][
+                 state_value["conversation_id"]] = {
+                     "history": history
+                 }
+             state_value["conversations"].append({
+                 "label": input_value,
+                 "key": random_id
+             })
+ 
+         history = state_value["conversation_contexts"][
+             state_value["conversation_id"]]["history"]
+ 
+         uploaded_file = resolve_uploaded_file(uploaded_file_value, state_value)
+ 
+         state_value["conversation_contexts"][
+             state_value["conversation_id"]] = {
+                 "history": history,
+                 "settings": {
+                     **settings_form_value,
+                     "uploaded_file": uploaded_file
+                 },
+                 "enable_thinking": thinking_btn_state_value["enable_thinking"]
+             }
+         history.append({
+             "role": "user",
+             "content": input_value,
+             "key": str(uuid.uuid4())
+         })
+         yield Gradio_Events.preprocess_submit(clear_input=True)(state_value)
+ 
+         try:
+             for chunk in Gradio_Events.submit(state_value):
+                 yield chunk
+         finally:
+             yield Gradio_Events.postprocess_submit(state_value)
+ 
+     @staticmethod
+     def preprocess_submit(clear_input=True):
+ 
+         def preprocess_submit_handler(state_value):
+             history = state_value["conversation_contexts"][
+                 state_value["conversation_id"]]["history"]
+             return {
+                 input:
+                 gr.update(value=None, loading=True)
+                 if clear_input else gr.update(loading=True),
+                 conversations:
+                 gr.update(active_key=state_value["conversation_id"],
+                           items=list(
+                               map(
+                                   lambda item: {
+                                       **item,
+                                       "disabled":
+                                       item["key"] != state_value["conversation_id"],
+                                   }, state_value["conversations"]))),
+                 add_conversation_btn: gr.update(disabled=True),
+                 clear_btn: gr.update(disabled=True),
+                 conversation_delete_menu_item: gr.update(disabled=True),
+                 chatbot:
+                 gr.update(value=history,
+                           bot_config=bot_config(
+                               disabled_actions=['edit', 'retry', 'delete']),
+                           user_config=user_config(
+                               disabled_actions=['edit', 'delete'])),
+                 state: gr.update(value=state_value),
+             }
+ 
+         return preprocess_submit_handler
+ 
+     @staticmethod
+     def postprocess_submit(state_value):
+         history = state_value["conversation_contexts"][
+             state_value["conversation_id"]]["history"]
+         return {
+             input: gr.update(loading=False),
+             conversation_delete_menu_item: gr.update(disabled=False),
+             clear_btn: gr.update(disabled=False),
+             conversations: gr.update(items=state_value["conversations"]),
+             add_conversation_btn: gr.update(disabled=False),
+             chatbot:
+             gr.update(value=history,
+                       bot_config=bot_config(),
+                       user_config=user_config()),
+             state: gr.update(value=state_value),
+         }
+ 
+     @staticmethod
+     def cancel(state_value):
+         history = state_value["conversation_contexts"][
+             state_value["conversation_id"]]["history"]
+         history[-1]["loading"] = False
+         history[-1]["status"] = "done"
+         history[-1]["footer"] = "Chat completion paused"
+         return Gradio_Events.postprocess_submit(state_value)
+ 
+     @staticmethod
+     def delete_message(state_value, e: gr.EventData):
+         index = e._data["payload"][0]["index"]
+         history = state_value["conversation_contexts"][
+             state_value["conversation_id"]]["history"]
+         history = history[:index] + history[index + 1:]
+         state_value["conversation_contexts"][
+             state_value["conversation_id"]]["history"] = history
+         return gr.update(value=state_value)
+ 
+     @staticmethod
+     def edit_message(state_value, chatbot_value, e: gr.EventData):
+         index = e._data["payload"][0]["index"]
+         history = state_value["conversation_contexts"][
+             state_value["conversation_id"]]["history"]
+         history[index]["content"] = chatbot_value[index]["content"]
+         return gr.update(value=state_value)
+ 
+     @staticmethod
+     def regenerate_message(settings_form_value, thinking_btn_state_value,
+                            uploaded_file_value, state_value, e: gr.EventData):
+         index = e._data["payload"][0]["index"]
+         history = state_value["conversation_contexts"][
+             state_value["conversation_id"]]["history"]
+         history = history[:index]
+ 
+         uploaded_file = resolve_uploaded_file(uploaded_file_value, state_value)
+ 
+         state_value["conversation_contexts"][
+             state_value["conversation_id"]] = {
+                 "history": history,
+                 "settings": {
+                     **settings_form_value,
+                     "uploaded_file": uploaded_file
+                 },
+                 "enable_thinking": thinking_btn_state_value["enable_thinking"]
+             }
+ 
+         yield Gradio_Events.preprocess_submit()(state_value)
+         try:
+             for chunk in Gradio_Events.submit(state_value):
+                 yield chunk
+         finally:
+             yield Gradio_Events.postprocess_submit(state_value)
+ 
+     @staticmethod
+     def select_suggestion(input_value, e: gr.EventData):
+         input_value = input_value[:-1] + e._data["payload"][0]
+         return gr.update(value=input_value)
+ 
+     @staticmethod
+     def apply_prompt(e: gr.EventData):
+         return gr.update(value=e._data["payload"][0]["value"]["description"])
+ 
+     @staticmethod
+     def new_chat(thinking_btn_state, state_value):
+         if not state_value["conversation_id"]:
+             return gr.skip()
+         state_value["conversation_id"] = ""
+         thinking_btn_state["enable_thinking"] = True
+         return (
+             gr.update(active_key=state_value["conversation_id"]),
+             gr.update(value=None),
+             gr.update(value={**DEFAULT_SETTINGS}),
+             gr.update(value=None),
+             gr.update(value=format_file_status(None)),
+             gr.update(value=thinking_btn_state),
+             gr.update(value=state_value),
+         )
+ 
+     @staticmethod
+     def select_conversation(thinking_btn_state_value, state_value,
+                             e: gr.EventData):
+         active_key = e._data["payload"][0]
+         if state_value["conversation_id"] == active_key or (
+                 active_key not in state_value["conversation_contexts"]):
+             return gr.skip()
+         state_value["conversation_id"] = active_key
+         conversation = state_value["conversation_contexts"][active_key]
+         thinking_btn_state_value["enable_thinking"] = conversation[
+             "enable_thinking"]
+         settings = conversation.get("settings") or {**DEFAULT_SETTINGS}
+         return (
+             gr.update(active_key=active_key),
+             gr.update(value=conversation["history"]),
+             gr.update(value=settings),
+             gr.update(value=None),
+             gr.update(value=format_file_status(settings.get("uploaded_file"))),
+             gr.update(value=thinking_btn_state_value),
+             gr.update(value=state_value),
+         )
+ 
+     @staticmethod
+     def click_conversation_menu(state_value, e: gr.EventData):
+         conversation_id = e._data["payload"][0]["key"]
+         operation = e._data["payload"][1]["key"]
+         if operation == "delete":
+             del state_value["conversation_contexts"][conversation_id]
+ 
+             state_value["conversations"] = [
+                 item for item in state_value["conversations"]
+                 if item["key"] != conversation_id
+             ]
+ 
+             if state_value["conversation_id"] == conversation_id:
+                 state_value["conversation_id"] = ""
+                 return (
+                     gr.update(items=state_value["conversations"],
+                               active_key=state_value["conversation_id"]),
+                     gr.update(value=None),
+                     gr.update(value=None),
+                     gr.update(value=format_file_status(None)),
+                     gr.update(value=state_value),
+                 )
+             else:
+                 return (
+                     gr.update(items=state_value["conversations"]),
+                     gr.skip(),
+                     gr.skip(),
+                     gr.skip(),
+                     gr.update(value=state_value),
+                 )
+         return gr.skip()
+ 
+     @staticmethod
+     def toggle_settings_header(settings_header_state_value):
+         settings_header_state_value[
+             "open"] = not settings_header_state_value["open"]
+         return gr.update(value=settings_header_state_value)
+ 
+     @staticmethod
+     def clear_conversation_history(state_value):
+         if not state_value["conversation_id"]:
+             return gr.skip()
+         state_value["conversation_contexts"][
+             state_value["conversation_id"]]["history"] = []
+         return gr.update(value=None), gr.update(value=state_value)
+ 
+     @staticmethod
+     def update_browser_state(state_value):
+         return gr.update(value=dict(
+             conversations=state_value["conversations"],
+             conversation_contexts=state_value["conversation_contexts"]))
+ 
+     @staticmethod
+     def apply_browser_state(browser_state_value, state_value):
+         state_value["conversations"] = browser_state_value["conversations"]
+         state_value["conversation_contexts"] = browser_state_value[
+             "conversation_contexts"]
+         return gr.update(
+             items=browser_state_value["conversations"]), gr.update(
+                 value=state_value)
+ 
+     @staticmethod
+     def preview_uploaded_file(uploaded_file_value):
+         if not uploaded_file_value:
+             return gr.update(value=format_file_status(None))
+         uploaded_file = load_context_file(uploaded_file_value)
+         return gr.update(value=format_file_status(uploaded_file))
+ 
+     @staticmethod
+     def remove_uploaded_file(state_value):
+         conversation_id = state_value.get("conversation_id")
+         if conversation_id and conversation_id in state_value[
+                 "conversation_contexts"]:
+             state_value["conversation_contexts"][conversation_id].setdefault(
+                 "settings", {**DEFAULT_SETTINGS})
+             state_value["conversation_contexts"][conversation_id]["settings"][
+                 "uploaded_file"] = None
+         return gr.update(value=None), gr.update(
+             value=format_file_status(None)), gr.update(value=state_value)
+ 
+ css = """
+ .gradio-container {
+     padding: 0 !important;
+ }
+ 
+ .gradio-container > main.fillable {
+     padding: 0 !important;
+ }
+ 
+ #chatbot {
+     height: calc(100vh - 21px - 16px);
+     max-height: 1500px;
+ }
+ 
+ #chatbot .chatbot-conversations {
+     height: 100vh;
+     background-color: var(--ms-gr-ant-color-bg-layout);
+     padding-left: 4px;
+     padding-right: 4px;
+ }
+ 
+ #chatbot .chatbot-conversations .chatbot-conversations-list {
+     padding-left: 0;
+     padding-right: 0;
+ }
+ 
+ #chatbot .chatbot-chat {
+     padding: 32px;
+     padding-bottom: 0;
+     height: 100%;
+ }
+ 
+ @media (max-width: 768px) {
+     #chatbot .chatbot-chat {
+         padding: 0;
+     }
+ }
+ 
+ #chatbot .chatbot-chat .chatbot-chat-messages {
+     flex: 1;
+ }
+ 
+ #chatbot .setting-form-thinking-budget .ms-gr-ant-form-item-control-input-content {
+     display: flex;
+     flex-wrap: wrap;
+ }
+ 
+ #chatbot .setting-form-file-upload input[type="file"] {
+     padding: 4px;
+ }
+ 
+ #chatbot .setting-form-file-status {
+     font-size: 12px;
+     color: var(--ms-gr-ant-color-text-tertiary);
+     margin-top: 4px;
+ }
+ """
+ 
+ with gr.Blocks(css=css, fill_width=True) as demo:
+     state = gr.State({
+         "conversation_contexts": {},
+         "conversations": [],
+         "conversation_id": "",
+     })
+ 
+     with ms.Application(), antdx.XProvider(
+             theme=DEFAULT_THEME, locale=DEFAULT_LOCALE), ms.AutoLoading():
+         with antd.Row(gutter=[20, 20], wrap=False, elem_id="chatbot"):
+             # Left Column
+             with antd.Col(md=dict(flex="0 0 260px", span=24, order=0),
+                           span=0,
+                           elem_style=dict(width=0),
+                           order=1):
+                 with ms.Div(elem_classes="chatbot-conversations"):
+                     with antd.Flex(vertical=True,
+                                    gap="small",
+                                    elem_style=dict(height="100%")):
+                         # Logo
+                         Logo()
+ 
+                         # New Conversation Button
+                         with antd.Button(value=None,
+                                          color="primary",
+                                          variant="filled",
+                                          block=True) as add_conversation_btn:
+                             ms.Text("New Conversation")
+                             with ms.Slot("icon"):
+                                 antd.Icon("PlusOutlined")
+ 
+                         # Conversations List
+                         with antdx.Conversations(
+                                 elem_classes="chatbot-conversations-list",
+                         ) as conversations:
+                             with ms.Slot('menu.items'):
+                                 with antd.Menu.Item(
+                                         label="Delete", key="delete",
+                                         danger=True
+                                 ) as conversation_delete_menu_item:
+                                     with ms.Slot("icon"):
+                                         antd.Icon("DeleteOutlined")
+             # Right Column
+             with antd.Col(flex=1, elem_style=dict(height="100%")):
+                 with antd.Flex(vertical=True,
+                                gap="small",
+                                elem_classes="chatbot-chat"):
+                     # Chatbot
+                     chatbot = pro.Chatbot(elem_classes="chatbot-chat-messages",
+                                           height=0,
+                                           welcome_config=welcome_config(),
+                                           user_config=user_config(),
+                                           bot_config=bot_config())
+ 
+                     # Input
+                     with antdx.Suggestion(
+                             items=DEFAULT_SUGGESTIONS,
+                             # onKeyDown handler in JavaScript
+                             should_trigger="""(e, { onTrigger, onKeyDown }) => {
+                               switch(e.key) {
+                                 case '/':
+                                   onTrigger()
+                                   break
+                                 case 'ArrowRight':
+                                 case 'ArrowLeft':
+                                 case 'ArrowUp':
+                                 case 'ArrowDown':
+                                   break;
+                                 default:
+                                   onTrigger(false)
+                               }
+                               onKeyDown(e)
+                             }""") as suggestion:
+                         with ms.Slot("children"):
+                             with antdx.Sender(placeholder="Enter \"/\" to get suggestions") as input:
+                                 with ms.Slot("header"):
+                                     settings_header_state, settings_form, context_file, file_status, remove_file_btn = SettingsHeader()
+                                 with ms.Slot("prefix"):
+                                     with antd.Flex(gap=4,
+                                                    wrap=True,
+                                                    elem_style=dict(maxWidth='40vw')):
+                                         with antd.Button(value=None,
+                                                          type="text") as setting_btn:
+                                             with ms.Slot("icon"):
+                                                 antd.Icon("SettingOutlined")
+                                         with antd.Button(value=None,
+                                                          type="text") as clear_btn:
+                                             with ms.Slot("icon"):
+                                                 antd.Icon("ClearOutlined")
+                                         thinking_btn_state = ThinkingButton()
+ 
+     # Events Handler
+     # Browser State Handler
+     if save_history:
+         browser_state = gr.BrowserState(
+             {
+                 "conversation_contexts": {},
+                 "conversations": [],
+             },
+             storage_key="chat_demo_storage")
+         state.change(fn=Gradio_Events.update_browser_state,
+                      inputs=[state],
+                      outputs=[browser_state])
+ 
+         demo.load(fn=Gradio_Events.apply_browser_state,
+                   inputs=[browser_state, state],
+                   outputs=[conversations, state])
+ 
+     # Conversations Handler
+     add_conversation_btn.click(fn=Gradio_Events.new_chat,
+                                inputs=[thinking_btn_state, state],
+                                outputs=[
+                                    conversations, chatbot, settings_form,
+                                    context_file, file_status,
+                                    thinking_btn_state, state
+                                ])
+     conversations.active_change(fn=Gradio_Events.select_conversation,
+                                 inputs=[thinking_btn_state, state],
+                                 outputs=[
+                                     conversations, chatbot, settings_form,
+                                     context_file, file_status,
+                                     thinking_btn_state, state
+                                 ])
+     conversations.menu_click(fn=Gradio_Events.click_conversation_menu,
+                              inputs=[state],
+                              outputs=[
+                                  conversations, chatbot, context_file,
+                                  file_status, state
+                              ])
+     # Chatbot Handler
+     chatbot.welcome_prompt_select(fn=Gradio_Events.apply_prompt,
+                                   outputs=[input])
+ 
+     chatbot.delete(fn=Gradio_Events.delete_message,
+                    inputs=[state],
+                    outputs=[state])
+     chatbot.edit(fn=Gradio_Events.edit_message,
+                  inputs=[state, chatbot],
+                  outputs=[state])
+ 
+     regenerating_event = chatbot.retry(
+         fn=Gradio_Events.regenerate_message,
+         inputs=[settings_form, thinking_btn_state, context_file, state],
+         outputs=[
+             input, clear_btn, conversation_delete_menu_item,
+             add_conversation_btn, conversations, chatbot, state
+         ])
+ 
+     # Input Handler
+     submit_event = input.submit(
+         fn=Gradio_Events.add_message,
+         inputs=[input, settings_form, thinking_btn_state, context_file, state],
+         outputs=[
+             input, clear_btn, conversation_delete_menu_item,
+             add_conversation_btn, conversations, chatbot, state
+         ])
+     input.cancel(fn=Gradio_Events.cancel,
+                  inputs=[state],
+                  outputs=[
+                      input, conversation_delete_menu_item, clear_btn,
+                      conversations, add_conversation_btn, chatbot, state
+                  ],
+                  cancels=[submit_event, regenerating_event],
+                  queue=False)
+     # Input Actions Handler
+     setting_btn.click(fn=Gradio_Events.toggle_settings_header,
+                       inputs=[settings_header_state],
+                       outputs=[settings_header_state])
+     clear_btn.click(fn=Gradio_Events.clear_conversation_history,
+                     inputs=[state],
+                     outputs=[chatbot, state])
+     context_file.change(fn=Gradio_Events.preview_uploaded_file,
+                         inputs=[context_file],
+                         outputs=[file_status])
+     remove_file_btn.click(fn=Gradio_Events.remove_uploaded_file,
+                           inputs=[state],
+                           outputs=[context_file, file_status, state])
+     suggestion.select(fn=Gradio_Events.select_suggestion,
+                       inputs=[input],
+                       outputs=[input])
+ 
+ 
859
+ class CustomSBERTEmbeddingFunction(chromadb.EmbeddingFunction):
860
+ """
861
+ A custom wrapper to use a SentenceTransformer model as the embedding function
862
+ for ChromaDB, satisfying ChromaDB's interface requirements.
863
+ """
864
+ def __init__(self, model: SentenceTransformer):
865
+ self._model = model
866
+
867
+ def __call__(self, texts: list[str]) -> list[list[float]]:
868
+ # Outputs a list of lists of floats as ChromaDB expects
869
+ embeddings = self._model.encode(texts, convert_to_tensor=False).tolist()
870
+ return embeddings
871
+
872
+ def name(self) -> str:
873
+ return "custom_sbert_wrapper"
874
+
875
+
876
+ class ChromaRetriever:
877
+ """Thin wrapper to fetch top-n docs from ChromaDB."""
878
+
879
+ def __init__(self, collection: chromadb.api.models.Collection | None,
880
+ n_results: int = RAG_N_RESULTS):
881
+ self.collection = collection
882
+ self.n_results = n_results
883
+
884
+ def search(self, query: str) -> list[str]:
885
+ if not self.collection or not query:
886
+ return []
887
+ results = retrieve_documents(self.collection,
888
+ query=query,
889
+ n_results=self.n_results)
890
+ docs = results.get("documents") or []
891
+ if docs and isinstance(docs[0], list):
892
+ docs = docs[0]
893
+ return docs
894
+
895
+
896
+ class LocalSummarizer:
897
+ """Lightweight summarizer using retrieved context without external calls."""
898
+
899
+ def summarize(self, query: str, docs: list[str]) -> str:
900
+ context = "\n\n".join(docs) if docs else "No retrieved context."
901
+ return (
902
+ "Requirements summary (heuristic):\n"
903
+ f"Inquiry: {query}\n"
904
+ f"Context:\n{context}"
905
+ )
906
+
907
+
908
+ def add_documents_to_collection(collection: chromadb.Collection | None, docs: str):
909
+ """
910
+ Chunks a single document string and adds it to the ChromaDB collection.
911
+ """
912
+ if not collection:
913
+ print("RAG Collection is not initialized. Skipping document addition.")
914
+ return
915
+
916
+ chunks = split_document_into_chunks(docs)
917
+ if not chunks:
918
+ return
919
+
920
+ # Create unique IDs for each chunk
921
+ ids = [f"doc_{uuid.uuid4()}" for _ in range(len(chunks))]
922
+
923
+ try:
924
+ collection.add(
925
+ documents=chunks,
926
+ ids=ids,
927
+ # metadata can be added here, e.g., source file name
928
+ )
929
+ print(f"Added {len(chunks)} chunks to ChromaDB.")
930
+ except Exception as e:
931
+ print(f"Failed to add documents to ChromaDB: {e}")
932
+
933
+ def retrieve_documents(collection: chromadb.api.models.Collection | None,
934
+ query: str,
935
+ n_results: int = 5) -> dict:
936
+ """
937
+ Retrieves the top N relevant documents from the ChromaDB collection based on a query.
938
+ """
939
+ if not collection or not query:
940
+ return {"documents": [], "distances": []}
941
+ results = collection.query(
942
+ query_texts=[query],
943
+ n_results=n_results,
944
+ include=['documents', 'distances']
945
+ )
946
+ return results
947
+
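Chroma's `query` returns results keyed per input query, so `documents` arrives as a list of lists; the unwrapping done in `ChromaRetriever.search` above can be sketched against a stubbed result dict (a minimal illustration, no real collection needed):

```python
def unwrap_documents(results: dict) -> list[str]:
    # Chroma returns one list of documents per query text; with a single
    # query we take the first (and only) inner list.
    docs = results.get("documents") or []
    if docs and isinstance(docs[0], list):
        docs = docs[0]
    return docs

# Stubbed shape of a single-query Chroma result.
stub = {"documents": [["chunk A", "chunk B"]], "distances": [[0.1, 0.4]]}
print(unwrap_documents(stub))  # ['chunk A', 'chunk B']
print(unwrap_documents({}))    # []
```

The `isinstance` check keeps the helper tolerant of an already-flattened list.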
+ def split_document_into_chunks(text: str, chunk_size: int = 300,
+                                chunk_overlap: int = 50) -> list[str]:
+     """Simple text splitting for RAG chunking.
+
+     Note: chunk_overlap is accepted for interface compatibility but is not
+     applied by this simple splitter.
+     """
+     if not text:
+         return []
+
+     # Simplified chunking: split on sentence boundaries, then group sentences
+     # until a chunk reaches chunk_size. For robust splitting, consider
+     # libraries such as LangChain's text splitters.
+     sentences = text.split(". ")
+     chunks = []
+     current_chunk = ""
+     for sentence in sentences:
+         if len(current_chunk) + len(sentence) > chunk_size and current_chunk:
+             chunks.append(current_chunk.strip())
+             current_chunk = sentence + ". "
+         else:
+             current_chunk += sentence + ". "
+     if current_chunk:
+         chunks.append(current_chunk.strip())
+
+     return chunks
+
+
+ def init_rag_if_needed():
+     """Initialize the embedder and Chroma collection if not already set."""
+     global RAG_EMBEDDER, RAG_COLLECTION, client
+     if RAG_COLLECTION is not None and RAG_EMBEDDER is not None:
+         return
+     try:
+         RAG_EMBEDDER = SentenceTransformer(RAG_MODEL_ID)
+         custom_ef = CustomSBERTEmbeddingFunction(RAG_EMBEDDER)
+         client = chromadb.Client()
+         RAG_COLLECTION = client.get_or_create_collection(
+             name="engineering_corpus_rag",
+             embedding_function=custom_ef)
+         print("RAG initialized.")
+     except Exception as e:
+         print(f"FATAL RAG SETUP ERROR: {e}")
+         print("RAG functionality disabled.")
+         RAG_COLLECTION = None
+         RAG_EMBEDDER = None
+         client = None
+
+
+ def ensure_pipeline_initialized():
+     """Lazy-init the RAG -> router -> agent pipeline."""
+     global REQUIREMENTS_PIPELINE
+     if REQUIREMENTS_PIPELINE:
+         return REQUIREMENTS_PIPELINE
+     load_env_file()
+     init_rag_if_needed()
+     retriever = ChromaRetriever(RAG_COLLECTION, n_results=RAG_N_RESULTS)
+     summarizer = LocalSummarizer()
+     router = RequirementsRouter()
+     jira_agent = JiraAgent()
+     matrix_agent = ComplianceMatrixAgent()
+     REQUIREMENTS_PIPELINE = RequirementsPipeline(
+         rag_model=RequirementsRAGModel(retriever=retriever, llm=summarizer),
+         router=router,
+         jira_agent=jira_agent,
+         matrix_agent=matrix_agent,
+     )
+     return REQUIREMENTS_PIPELINE
+
+
+ if __name__ == "__main__":
+     ensure_pipeline_initialized()
+
+     demo.queue(default_concurrency_limit=100,
+                max_size=100).launch(ssr_mode=False, max_threads=100)
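The splitter above groups sentences greedily up to `chunk_size`; its behavior can be exercised standalone (the grouping logic is copied as-is, with a small `chunk_size` chosen only to force a split):

```python
def split_document_into_chunks(text: str, chunk_size: int = 300) -> list[str]:
    # Same greedy sentence-grouping logic as in app.py.
    if not text:
        return []
    sentences = text.split(". ")
    chunks = []
    current_chunk = ""
    for sentence in sentences:
        if len(current_chunk) + len(sentence) > chunk_size and current_chunk:
            chunks.append(current_chunk.strip())
            current_chunk = sentence + ". "
        else:
            current_chunk += sentence + ". "
    if current_chunk:
        chunks.append(current_chunk.strip())
    return chunks

chunks = split_document_into_chunks("Alpha one. Beta two. Gamma three.",
                                    chunk_size=20)
print(chunks)  # ['Alpha one. Beta two.', 'Gamma three..']
```

Note the doubled period on the final chunk: the last sentence keeps its own period after the naive `". "` split and then gets another appended, one of the artifacts a more robust splitter would avoid.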
assets/requirementsassistant.png ADDED

Git LFS Details

  • SHA256: 68cf5b89cd7f27804ca0ea6d8b0fd2b3bad488d1549216b8200dab988e34de67
  • Pointer size: 132 Bytes
  • Size of remote file: 1.24 MB
config.py ADDED
@@ -0,0 +1,127 @@
+ import os
+ from modelscope_studio.components.pro.chatbot import ChatbotActionConfig, ChatbotBotConfig, ChatbotUserConfig, ChatbotWelcomeConfig
+
+ # Env
+ is_cn = os.getenv('MODELSCOPE_ENVIRONMENT') == 'studio'
+ api_key = os.getenv('API_KEY')
+ BASE_DIR = os.path.dirname(os.path.abspath(__file__))
+ ASSETS_DIR = os.path.join(BASE_DIR, "assets")
+ QWEN_LOGO_PATH = os.path.join(ASSETS_DIR, "requirementsassistant.png")
+
+
+ # Save history in the browser
+ save_history = True
+
+
+ # Chatbot Config
+ def user_config(disabled_actions=None):
+     return ChatbotUserConfig(
+         class_names=dict(content="user-message-content"),
+         actions=[
+             "copy", "edit",
+             ChatbotActionConfig(
+                 action="delete",
+                 popconfirm=dict(
+                     title="Delete the message",
+                     description="Are you sure you want to delete this message?",
+                     okButtonProps=dict(danger=True)))
+         ],
+         disabled_actions=disabled_actions)
+
+
+ def bot_config(disabled_actions=None):
+     return ChatbotBotConfig(
+         actions=[
+             "copy", "edit",
+             ChatbotActionConfig(
+                 action="retry",
+                 popconfirm=dict(
+                     title="Regenerate the message",
+                     description="Regenerating this message will also delete all subsequent messages.",
+                     okButtonProps=dict(danger=True))),
+             ChatbotActionConfig(
+                 action="delete",
+                 popconfirm=dict(
+                     title="Delete the message",
+                     description="Are you sure you want to delete this message?",
+                     okButtonProps=dict(danger=True)))
+         ],
+         avatar=QWEN_LOGO_PATH,
+         disabled_actions=disabled_actions)
+
+
+ def welcome_config():
+     return ChatbotWelcomeConfig(
+         variant="borderless",
+         icon=QWEN_LOGO_PATH,
+         title="Hello, I'm Requirements Assistant",
+         description="Upload your requirements document and ask a question. I will help surface compliance information.",
+         prompts=dict(
+             title="How can I help you today?",
+             styles={
+                 "list": {
+                     "width": '100%',
+                 },
+                 "item": {
+                     "flex": 1,
+                 },
+             },
+             items=[{
+                 "label": "Check Requirements",
+                 "children": [{
+                     "description": "What are the lighting requirements when using intermediate or wet-weather tyres?",
+                 }, {
+                     "description": "When using intermediate or wet-weather tyres in a race without a safety car, what are the regulations for the lights?",
+                 }, {
+                     "description": "When there is a safety car during a race, when should lapped cars unlap themselves?",
+                 }]
+             }]),
+     )
+
+
+ DEFAULT_SUGGESTIONS = [{
+     "label": 'Make a plan',
+     "value": 'Make a plan',
+     "children": [{
+         "label": "Start a business",
+         "value": "Help me with a plan to start a business"
+     }, {
+         "label": "Achieve my goals",
+         "value": "Help me with a plan to achieve my goals"
+     }, {
+         "label": "Successful interview",
+         "value": "Help me with a plan for a successful interview"
+     }]
+ }, {
+     "label": 'Help me write',
+     "value": "Help me write",
+     "children": [{
+         "label": "Story with a twist ending",
+         "value": "Help me write a story with a twist ending"
+     }, {
+         "label": "Blog post on mental health",
+         "value": "Help me write a blog post on mental health"
+     }, {
+         "label": "Letter to my future self",
+         "value": "Help me write a letter to my future self"
+     }]
+ }]
+
+ DEFAULT_SYS_PROMPT = "You are a helpful and harmless assistant."
+
+ MIN_THINKING_BUDGET = 1
+
+ MAX_THINKING_BUDGET = 38
+
+ DEFAULT_THINKING_BUDGET = 38
+
+ DEFAULT_LOCALE = 'zh_CN' if is_cn else 'en_US'
+
+ DEFAULT_THEME = {
+     "token": {
+         "colorPrimary": "#6A57FF",
+     }
+ }
+
+ DEFAULT_SETTINGS = {
+     "sys_prompt": DEFAULT_SYS_PROMPT,
+     "uploaded_file": None,
+ }
pipelines/__init__.py ADDED
@@ -0,0 +1 @@
+
pipelines/__pycache__/__init__.cpython-312.pyc ADDED
Binary file (197 Bytes). View file
 
pipelines/__pycache__/requirements_pipe.cpython-312.pyc ADDED
Binary file (10.5 kB). View file
 
pipelines/requirements_pipe.py ADDED
@@ -0,0 +1,230 @@
+ """
+ Template pipeline that runs a simple RAG -> router -> agent flow and streams
+ text back in the shape expected by Gradio_Events.submit.
+ """
+ import os
+ from dataclasses import dataclass
+ from typing import Iterator, Iterable, List, Dict, Any
+
+ from openai import OpenAI
+
+
+ # ---- Streaming message shape expected by app.py ----
+ @dataclass
+ class DeltaMessage:
+     content: str | None = None
+     reasoning_content: str | None = None  # leave None when you only stream text
+
+
+ @dataclass
+ class Choice:
+     message: DeltaMessage
+
+
+ @dataclass
+ class Output:
+     choices: list[Choice]
+
+
+ @dataclass
+ class Chunk:
+     output: Output
+
+
+ # ---- Example RAG / Router / Agent stubs ----
+ class RAGModel:
+     """Handles retrieval + requirement extraction."""
+
+     def __init__(self, retriever, llm):
+         self.retriever = retriever
+         self.llm = llm
+
+     def extract_requirements(self, query: str) -> dict:
+         docs = self.retriever.search(query)
+         # Replace with your own synthesis and compliance assessment.
+         requirements = self.llm.summarize(query=query, docs=docs)
+         compliant = self._assess_compliance(requirements)
+         return {
+             "requirements": requirements,
+             "compliant": compliant,
+         }
+
+     def _assess_compliance(self, requirements: str) -> bool:
+         """
+         Placeholder compliance check. Replace with your actual evaluator
+         (e.g., rule-based, classifier, or LLM judge).
+         """
+         text = requirements.lower()
+         non_compliant_markers = ["gap", "missing", "non-compliant",
+                                  "not compliant", "fail"]
+         return not any(marker in text for marker in non_compliant_markers)
+
+
+ class Router:
+     """Chooses a target pipeline for the extracted requirements."""
+
+     def route(self, *, compliant: bool, requirements: str) -> str:
+         """
+         Route to Jira when non-compliant, otherwise to the compliance matrix.
+         """
+         return "matrix" if compliant else "jira"
+
+
+ class JiraAgent:
+     """Generates Jira ticket content using a Qwen model on OpenRouter and streams text."""
+
+     def __init__(self,
+                  model: str = "qwen/qwen3-4b:free",
+                  api_key: str | None = None):
+         resolved_key = api_key or os.getenv("OPENROUTER_API_KEY") \
+             or os.getenv("OPENAI_API_KEY")
+         if not resolved_key:
+             raise ValueError(
+                 "Missing OpenRouter API key: set OPENROUTER_API_KEY (preferred) "
+                 "or OPENAI_API_KEY in the environment, or pass api_key to JiraAgent")
+
+         self.model = model
+         self.client = OpenAI(
+             base_url="https://openrouter.ai/api/v1",
+             api_key=resolved_key,
+         )
+
+     def stream(self, requirements: str) -> Iterable[str]:
+         system_prompt = (
+             "You are a Jira assistant. Respond ONLY with compact JSON in this exact shape:\n"
+             '{"summary": "one-line goal", "description": "concise context and expected behavior", '
+             '"acceptance_criteria": ["bullet 1", "bullet 2", "bullet 3"]}\n'
+             "No prose, no markdown, no extra keys.")
+         user_prompt = (
+             "Create a Jira ticket for these requirements:\n"
+             f"{requirements}")
+
+         stream = self.client.chat.completions.create(
+             model=self.model,
+             messages=[{
+                 "role": "system",
+                 "content": system_prompt
+             }, {
+                 "role": "user",
+                 "content": user_prompt
+             }],
+             max_tokens=512,
+             temperature=0.3,
+             stream=True,
+         )
+
+         for chunk in stream:
+             delta = chunk.choices[0].delta
+             if delta and delta.content:
+                 # Yield raw text increments so the frontend can stream.
+                 yield delta.content
+
+
+ class ComplianceMatrixAgent:
+     """Creates a compliance matrix (markdown table) using a Qwen model on OpenRouter and streams it."""
+
+     def __init__(self,
+                  model: str = "qwen/qwen3-4b:free",
+                  api_key: str | None = None):
+         resolved_key = api_key or os.getenv("OPENROUTER_API_KEY") \
+             or os.getenv("OPENAI_API_KEY")
+         if not resolved_key:
+             raise ValueError(
+                 "Missing OpenRouter API key: set OPENROUTER_API_KEY (preferred) "
+                 "or OPENAI_API_KEY in the environment, or pass api_key to ComplianceMatrixAgent")
+
+         self.model = model
+         self.client = OpenAI(
+             base_url="https://openrouter.ai/api/v1",
+             api_key=resolved_key,
+         )
+
+     def stream(self, requirements: str) -> Iterable[str]:
+         system_prompt = (
+             "You are a compliance analyst. Produce ONLY a markdown table with headers:\n"
+             "| Requirement | Control | Status | Notes |\n"
+             "Map the given requirements to likely controls; set Status to Pending; "
+             "keep Notes concise. No prose before or after the table.")
+         user_prompt = (
+             "Create a compliance matrix for these requirements:\n"
+             f"{requirements}")
+
+         stream = self.client.chat.completions.create(
+             model=self.model,
+             messages=[{
+                 "role": "system",
+                 "content": system_prompt
+             }, {
+                 "role": "user",
+                 "content": user_prompt
+             }],
+             max_tokens=512,
+             temperature=0.3,
+             stream=True,
+         )
+
+         for chunk in stream:
+             delta = chunk.choices[0].delta
+             if delta and delta.content:
+                 # Yield table text increments so the frontend can stream.
+                 yield delta.content
+
+
+ # ---- Pipeline wrapper ----
+ class RequirementsPipeline:
+     """
+     Wraps RAG -> router -> agent into a streaming interface compatible with
+     Gradio_Events.submit.
+     """
+
+     def __init__(self, rag_model: RAGModel, router: Router,
+                  jira_agent: JiraAgent, matrix_agent: ComplianceMatrixAgent):
+         self.rag_model = rag_model
+         self.router = router
+         self.agents = {
+             "jira": jira_agent,
+             "matrix": matrix_agent,
+         }
+
+     def _extract_user_query(self, messages: List[Dict[str, Any]]) -> str:
+         # Grab the last user message; adjust if you need a different strategy.
+         for message in reversed(messages):
+             if message.get("role") == "user":
+                 return message.get("content", "")
+         return ""
+
+     def stream(self, *, messages: list[dict]) -> Iterator[Chunk]:
+         """Run RAG -> route -> agent and stream tokens as Chunk objects."""
+         query = self._extract_user_query(messages)
+         extraction = self.rag_model.extract_requirements(query)
+         requirements = extraction["requirements"]
+         compliant = extraction["compliant"]
+
+         target = self.router.route(compliant=compliant,
+                                    requirements=requirements)
+
+         agent = self.agents.get(target)
+         if not agent:
+             raise ValueError(f"No agent configured for route '{target}'")
+
+         # Console visibility for debugging which agent is used.
+         print(f"[pipeline] routing to '{target}' (compliant={compliant})")
+
+         # Each agent streams plain text; the frontend accumulates it.
+         for token in agent.stream(requirements=requirements):
+             yield Chunk(
+                 output=Output(choices=[
+                     Choice(message=DeltaMessage(
+                         content=token,
+                         reasoning_content=None,
+                     ))
+                 ]))
+
+     def run(self, *, messages: list[dict]) -> str:
+         """Non-streaming helper that collects the full text response."""
+         parts: list[str] = []
+         for chunk in self.stream(messages=messages):
+             parts.append(chunk.output.choices[0].message.content or "")
+         return "".join(parts)
requirements.txt ADDED
@@ -0,0 +1,9 @@
+ gradio
+ modelscope_studio
+ google-genai
+ nltk
+ pypdf
+ sentence-transformers
+ numpy
+ openai
+ chromadb
ui_components/__pycache__/logo.cpython-312.pyc ADDED
Binary file (1.01 kB). View file
 
ui_components/__pycache__/settings_header.cpython-312.pyc ADDED
Binary file (2.62 kB). View file
 
ui_components/__pycache__/thinking_button.cpython-312.pyc ADDED
Binary file (1.71 kB). View file
 
ui_components/logo.py ADDED
@@ -0,0 +1,12 @@
+ import modelscope_studio.components.antd as antd
+ import modelscope_studio.components.base as ms
+
+
+ def Logo():
+     with antd.Typography.Title(level=1,
+                                elem_style=dict(fontSize=24,
+                                                padding=8,
+                                                margin=0)):
+         with antd.Flex(align="center", gap="small", justify="center"):
+             ms.Span("🤖")
+             ms.Span("RequireGPT")
ui_components/settings_header.py ADDED
@@ -0,0 +1,39 @@
+ import gradio as gr
+ import modelscope_studio.components.antd as antd
+ import modelscope_studio.components.antdx as antdx
+ import modelscope_studio.components.base as ms
+
+ from config import DEFAULT_SETTINGS
+
+
+ def SettingsHeader():
+     state = gr.State({"open": True})
+     with antdx.Sender.Header(title="Settings", open=True) as settings_header:
+         with antd.Form(value=DEFAULT_SETTINGS) as settings_form:
+             with antd.Form.Item(label="Knowledge File"):
+                 with antd.Flex(gap="small", align="center", wrap=True):
+                     context_file = gr.File(label=None,
+                                            file_count="single",
+                                            file_types=[".txt", ".md", ".json", ".csv", ".pdf"],
+                                            type="filepath",
+                                            elem_classes="setting-form-file-upload")
+                     remove_file_btn = antd.Button("Remove",
+                                                   type="text",
+                                                   danger=True)
+                 file_status = gr.Markdown("No file uploaded",
+                                           elem_classes="setting-form-file-status")
+
+     def close_header(state_value):
+         state_value["open"] = False
+         return gr.update(value=state_value)
+
+     state.change(fn=lambda state_value: gr.update(open=state_value["open"]),
+                  inputs=[state],
+                  outputs=[settings_header])
+
+     settings_header.open_change(fn=close_header,
+                                 inputs=[state],
+                                 outputs=[state])
+
+     return state, settings_form, context_file, file_status, remove_file_btn
ui_components/thinking_button.py ADDED
@@ -0,0 +1,27 @@
+ import modelscope_studio.components.antd as antd
+ import modelscope_studio.components.base as ms
+ import gradio as gr
+
+
+ def ThinkingButton():
+     state = gr.State({"enable_thinking": True})
+     with antd.Button("Thinking",
+                      shape="round",
+                      color="primary",
+                      variant="solid") as thinking_btn:
+         with ms.Slot("icon"):
+             antd.Icon("SunOutlined")
+
+     def toggle_thinking(state_value):
+         state_value["enable_thinking"] = not state_value["enable_thinking"]
+         return gr.update(value=state_value)
+
+     def apply_state_change(state_value):
+         return gr.update(
+             variant="solid" if state_value["enable_thinking"] else "")
+
+     state.change(fn=apply_state_change, inputs=[state], outputs=[thinking_btn])
+
+     thinking_btn.click(fn=toggle_thinking, inputs=[state], outputs=[state])
+
+     return state