YazeedBinShihah committed
Commit 1155df6 · verified · 1 Parent(s): 4ad0710

Upload 7 files

Files changed (8)
  1. .env.example +9 -0
  2. .gitattributes +1 -0
  3. README.md +44 -13
  4. app.py +978 -0
  5. architecture.pdf +3 -0
  6. quizzes_db.json +253 -0
  7. requirements.txt +5 -0
  8. smart_tutor_core.py +677 -0
.env.example ADDED
@@ -0,0 +1,9 @@
+ # OpenAI API Key
+ OPENAI_API_KEY=your_openai_api_key_here
+
+ # App Configuration
+ DETERMINISTIC_TEMPERATURE=0.1
+ TOOL_MAX_RETRIES=2
+ MAX_FILE_SIZE_MB=500
+ MAX_PDF_PAGES=2000
+ PDF_EXTRACTION_TIMEOUT=200
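
The values above arrive in the process as strings, so numeric settings need explicit conversion after loading. A minimal sketch (variable names from `.env.example`; in the app itself, `python-dotenv`'s `load_dotenv()` would populate `os.environ` from the `.env` file — here we set the values directly so the sketch is self-contained):

```python
import os

# Simulate what load_dotenv() would do with the .env file above.
os.environ.setdefault("DETERMINISTIC_TEMPERATURE", "0.1")
os.environ.setdefault("TOOL_MAX_RETRIES", "2")
os.environ.setdefault("MAX_FILE_SIZE_MB", "500")

# Environment values are always strings; convert with sensible defaults.
TEMPERATURE = float(os.getenv("DETERMINISTIC_TEMPERATURE", "0.1"))
MAX_RETRIES = int(os.getenv("TOOL_MAX_RETRIES", "2"))
MAX_FILE_SIZE_MB = int(os.getenv("MAX_FILE_SIZE_MB", "500"))

print(TEMPERATURE, MAX_RETRIES, MAX_FILE_SIZE_MB)
```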
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ architecture.pdf filter=lfs diff=lfs merge=lfs -text
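
The new rule routes `architecture.pdf` through Git LFS. A rule like this is typically added with the `git lfs track` helper rather than by editing `.gitattributes` by hand (a sketch, assuming git-lfs is installed):

```shell
git lfs track "architecture.pdf"   # appends the filter=lfs rule to .gitattributes
git add .gitattributes architecture.pdf
```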
README.md CHANGED
@@ -1,13 +1,44 @@
- ---
- title: SMART TUTOR
- emoji: 🐢
- colorFrom: blue
- colorTo: red
- sdk: gradio
- sdk_version: 6.5.1
- app_file: app.py
- pinned: false
- short_description: SmartTutor AI is a multi-agent system built with CrewAI
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ # 🧠 SmartTutor AI
+
+ SmartTutor AI is an intelligent educational assistant built with **CrewAI** and **Gradio**. It helps users summarize documents, generate quizzes, and get detailed explanations for their mistakes.
+
+ ## ✨ Features
+
+ - **Document Summarization**: Get concise summaries from PDF or Text files.
+ - **Quiz Generation**: Automatically create multiple-choice quizzes based on document content.
+ - **Intelligent Grading**: Submit quiz answers and get detailed explanations for errors.
+ - **Quick Actions**: One-click shortcuts for common tasks.
+ - **Persistent Storage**: Quizzes are saved locally to `quizzes_db.json`.
+
+ ## 🛠️ Setup
+
+ ### 1. Requirements
+ Ensure you have Python 3.9+ installed.
+
+ ### 2. Installation
+ Install the dependencies:
+ ```bash
+ pip install -r requirements.txt
+ ```
+
+ ### 3. Environment Variables
+ 1. Create a `.env` file from the example:
+    ```bash
+    cp .env.example .env
+    ```
+ 2. Open `.env` and add your `OPENAI_API_KEY`.
+
+ ## 🚀 Usage
+
+ Run the application:
+ ```bash
+ python app.py
+ ```
+ The interface will be available at `http://localhost:7860`.
+
+ ## 📁 Project Structure
+
+ - `app.py`: Main application and UI.
+ - `quizzes_db.json`: Local storage for generated quizzes.
+ - `requirements.txt`: Python dependencies.
+ - `.env`: Secret configuration (not included in version control).
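
A hedged sketch of the quiz record shape the UI in `app.py` expects from `quizzes_db.json`: the field names (`quiz_id`, `questions`, `qid`, `question`, `options`) come from `render_quiz` and `grade_quiz_ui`, but the concrete values below are made up for illustration.

```python
# Illustrative record; the quiz_id and question text are invented, but the
# field names match what render_quiz / grade_quiz_ui read in app.py.
quiz = {
    "quiz_id": "quiz_001",  # required by grade_quiz_ui for grading
    "questions": [
        {
            "qid": 1,
            "question": "What framework orchestrates the agents?",
            "options": {"A": "CrewAI", "B": "Flask", "C": "Django", "D": "FastAPI"},
        }
    ],
}

# render_quiz builds "Key. Value" radio choices from the sorted options dict:
choices = [f"{k}. {v}" for k, v in sorted(quiz["questions"][0]["options"].items())]
print(choices[0])  # -> "A. CrewAI"
```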
app.py ADDED
@@ -0,0 +1,978 @@
+ import gradio as gr
+ import os
+ import sys
+ import json
+ import re
+
+ # Ensure the current directory is in the path
+ current_dir = os.path.dirname(os.path.abspath(__file__))
+ sys.path.append(current_dir)
+
+ from smart_tutor_core import crew
+
+ # ----------------------------------------------------------------------
+ # Helper: Parse Output
+ # ----------------------------------------------------------------------
+
+
+ def parse_agent_output(raw_output: str):
+     """
+     Tries to parse JSON from the raw string output.
+     Returns (data_dict, is_json).
+     """
+     data = None
+     try:
+         data = json.loads(raw_output)
+         return data, True
+     except json.JSONDecodeError:
+         # Try finding JSON block
+         match = re.search(r"(\{.*\})", raw_output, re.DOTALL)
+         if match:
+             try:
+                 data = json.loads(match.group(1))
+                 return data, True
+             except json.JSONDecodeError:
+                 pass
+     return raw_output, False
+
+
+ def clean_text(text: str) -> str:
+     """
+     Aggressively removes markdown formatting to ensure clean text display.
+     Removes: **bold**, __bold__, *italic*, _italic_, `code`
+     """
+     if not text:
+         return ""
+     text = str(text)
+     # Remove bold/italic markers
+     text = re.sub(r"\*\*|__|`", "", text)
+     text = re.sub(r"^\s*\*\s+", "", text)  # Remove leading list asterisks if any
+     return text.strip()
+
+
+ # ----------------------------------------------------------------------
+ # Helper: Format Text Output for Display
+ # ----------------------------------------------------------------------
+
+
+ def format_text_output(raw_text):
+     """
+     Converts raw agent text (markdown-ish) into
+     beautifully styled HTML inside a summary-box.
+     """
+     if not raw_text:
+         return ""
+     text = str(raw_text).strip()
+
+     # Convert markdown headings to HTML
+     text = re.sub(r"^### (.+)$", r"<h3>\1</h3>", text, flags=re.MULTILINE)
+     text = re.sub(r"^## (.+)$", r"<h2>\1</h2>", text, flags=re.MULTILINE)
+     text = re.sub(r"^# (.+)$", r"<h2>\1</h2>", text, flags=re.MULTILINE)
+
+     # Convert **bold** to <strong>
+     text = re.sub(r"\*\*(.+?)\*\*", r"<strong>\1</strong>", text)
+
+     # Convert bullet lists (- item or * item)
+     lines = text.split("\n")
+     result = []
+     in_list = False
+
+     for line in lines:
+         stripped = line.strip()
+         is_bullet = (
+             stripped.startswith("- ")
+             or stripped.startswith("* ")
+             or re.match(r"^\d+\.\s", stripped)
+         )
+
+         if is_bullet:
+             if not in_list:
+                 tag = "ul"  # Always use bullets as requested
+                 result.append(f"<{tag}>")
+                 in_list = tag
+             # Remove both -/* and 1. from the start of the line
+             content = re.sub(r"^[-*]\s+|^\d+\.\s+", "", stripped)
+             result.append(f"<li>{content}</li>")
+         else:
+             if in_list:
+                 result.append(f"</{in_list}>")
+                 in_list = False
+             if stripped.startswith("<h"):
+                 result.append(stripped)
+             elif stripped:
+                 result.append(f"<p>{stripped}</p>")
+
+     if in_list:
+         result.append(f"</{in_list}>")
+
+     html = "\n".join(result)
+     return f"<div class='summary-box'>{html}</div>"
+
+
+ # ----------------------------------------------------------------------
+ # Logic: Run Agent
+ # ----------------------------------------------------------------------
+
+
+ def run_agent(file, user_text):
+     if not user_text and not file:
+         return (
+             gr.update(
+                 visible=True,
+                 value="<div class='error-box'>⚠️ Please enter a request or upload a file.</div>",
+             ),
+             gr.update(visible=False),  # Quiz Group
+             None,  # State
+         )
+
+     full_request = user_text
+
+     # Check if user wants a quiz but didn't upload a file (common error)
+     if "quiz" in user_text.lower() and not file:
+         return (
+             gr.update(
+                 visible=True,
+                 value="<div class='error-box'>⚠️ To generate a quiz, please upload a document first.</div>",
+             ),
+             gr.update(visible=False),
+             None,
+         )
+
+     if file:
+         # file is a filepath string because type='filepath'
+         full_request = f"""USER REQUEST: {user_text}
+
+ IMPORTANT: The file to process is located at this EXACT path:
+ {file}
+
+ You MUST use this exact path when calling tools (process_file, store_quiz, etc.)."""
+
+     # SYSTEM PROMPT INJECTION to force JSON format from the agent
+     system_instruction = "\n\n(SYSTEM NOTE: If generating a quiz, you MUST call the store_quiz tool and return its VALID JSON output including 'quiz_id'. Do NOT return just the questions text.)"
+
+     try:
+         inputs = {"user_request": full_request + system_instruction}
+         result = crew.kickoff(inputs=inputs)
+         raw_output = str(result)
+
+         print(f"\n{'='*60}")
+         print("[DEBUG] raw_output (first 500 chars):")
+         print(raw_output[:500])
+         print(f"{'='*60}")
+
+         data, is_json = parse_agent_output(raw_output)
+
+         print(f"[DEBUG] is_json={is_json}")
+         if is_json:
+             print(
+                 f"[DEBUG] keys={list(data.keys()) if isinstance(data, dict) else 'not a dict'}"
+             )
+             if isinstance(data, dict) and "questions" in data:
+                 print(f"[DEBUG] num questions={len(data['questions'])}")
+
+         # Case 1: Quiz Output (Success)
+         if is_json and isinstance(data, dict) and "questions" in data:
+             # We accept it even if quiz_id is missing, but grading might fail.
+             return (
+                 gr.update(visible=False),  # Hide Summary
+                 gr.update(visible=True),  # Show Quiz
+                 data,  # Store Data
+             )
+
+         # Case 2: Grade Result (Standard JSON from grade_quiz) - Handled nicely
+         if is_json and isinstance(data, dict) and "score" in data:
+             markdown = format_grade_result(data)
+             return (
+                 gr.update(visible=True, value=markdown),
+                 gr.update(visible=False),
+                 None,
+             )
+
+         # Case 3: Normal Text / Summary / Explanation
+         html_content = format_text_output(raw_output)
+         return (
+             gr.update(visible=True, value=html_content),
+             gr.update(visible=False),
+             None,
+         )
+
+     except Exception as e:
+         error_msg = f"<div class='error-box'>❌ Error: {str(e)}</div>"
+         return (
+             gr.update(visible=True, value=error_msg),
+             gr.update(visible=False),
+             None,
+         )
+
+
+ # ----------------------------------------------------------------------
+ # Logic: Quiz Render & Grading
+ # ----------------------------------------------------------------------
+
+
+ def render_quiz(quiz_data):
+     """
+     Renders the quiz questions dynamically.
+     Returns updates for: [Radios x10] + [Feedbacks x10] + [CheckBtn] (Total 21)
+     """
+     updates = []
+
+     if not quiz_data:
+         # Hide everything
+         return [gr.update(visible=False)] * 21
+
+     questions = quiz_data.get("questions", [])
+
+     # 1. Update Radios (10 slots)
+     for i in range(10):
+         if i < len(questions):
+             q = questions[i]
+             q_txt = clean_text(q.get("question", "Question text missing"))
+             question_text = f"{i+1}. {q_txt}"
+
+             # Ensure options are a dict and sorted
+             raw_options = q.get("options", {})
+             if not isinstance(raw_options, dict):
+                 # Fallback if options came as a list or string
+                 raw_options = {"A": "Error loading options"}
+
+             # Sort by key A, B, C, D...
+             # We strictly enforce the "Key. Value" format
+             choices = []
+             for key in sorted(raw_options.keys()):
+                 val = clean_text(raw_options[key])
+                 choices.append(f"{key}. {val}")
+
+             updates.append(
+                 gr.update(
+                     visible=True,
+                     label=question_text,
+                     choices=choices,
+                     value=None,
+                     interactive=True,
+                 )
+             )
+         else:
+             updates.append(gr.update(visible=False, choices=[], value=None))
+
+     # 2. Update Feedbacks (10 slots) - Hide them initially
+     for i in range(10):
+         updates.append(gr.update(visible=False, value=""))
+
+     # 3. Show Grid/Check Button
+     updates.append(gr.update(visible=True))
+
+     return updates
+
+
+ def grade_quiz_ui(quiz_data, *args):
+     """
+     Collects answers, calls agent (or tool), and returns graded results designed for UI.
+     Input args: [Radio1_Val, Radio2_Val, ..., Radio10_Val] (Length 10)
+     Output: [Radios x10] + [Feedbacks x10] + [ResultMsg] (Total 21)
+     """
+     # args tuple contains the values of the 10 radios
+     answers_list = args[0:10]
+
+     updates = []
+
+     # Validation
+     if not quiz_data or "quiz_id" not in quiz_data:
+         # Fallback if ID is missing
+         error_updates = [gr.update()] * 10 + [gr.update()] * 10
+         error_updates.append(
+             gr.update(
+                 visible=True,
+                 value="<div class='error-box'>⚠️ Error: Quiz ID not found. Cannot grade this quiz.</div>",
+             )
+         )
+         return error_updates
+
+     quiz_id = quiz_data["quiz_id"]
+
+     # Construct answer map {"1": "A", ...}
+     user_answers = {}
+     for i, ans in enumerate(answers_list):
+         if ans:
+             # ans is like "A. Option Text" -> extract "A"
+             selected_opt = ans.split(".")[0]
+             # Use qid from data if available, else i+1
+             qid = str(i + 1)
+             # Try to match qid from quiz_data if possible
+             if i < len(quiz_data.get("questions", [])):
+                 q = quiz_data["questions"][i]
+                 qid = str(q.get("qid", i + 1))
+
+             user_answers[qid] = selected_opt
+
+     # Construct the JSON for the agent
+     answers_json = json.dumps(user_answers)
+     grading_request = f"Grade quiz {quiz_id} with answers {answers_json}\n(SYSTEM: Return valid JSON matching GradeQuizResult schema.)"
+
+     try:
+         inputs = {"user_request": grading_request}
+         result = crew.kickoff(inputs=inputs)
+         raw_output = str(result)
+         data, is_json = parse_agent_output(raw_output)
+
+         if is_json and "score" in data:
+             return format_grade_result_interactive(data, answers_list)
+         else:
+             # Fallback error in result box
+             error_updates = [gr.update()] * 10 + [gr.update()] * 10
+             error_updates.append(
+                 gr.update(
+                     visible=True,
+                     value=f"<div class='error-box'>Error parsing grading result: {raw_output}</div>",
+                 )
+             )
+             return error_updates
+
+     except Exception as e:
+         error_updates = [gr.update()] * 10 + [gr.update()] * 10
+         error_updates.append(
+             gr.update(
+                 visible=True, value=f"<div class='error-box'>Error: {str(e)}</div>"
+             )
+         )
+         return error_updates
+
+
+ def format_grade_result_interactive(data, user_answers_list):
+     """
+     Updates the UI with colors and correctness.
+     Returns 21 updates.
+     """
+     details = data.get("details", [])
+     # Map details by QID or index for safety
+     details_map = {}
+     for det in details:
+         details_map[str(det.get("qid"))] = det
+
+     radio_updates = []
+     feedback_updates = []
+
+     # Iterate 10 slots
+     for i in range(10):
+         # Find corresponding detail
+         # We assume strict ordering i=0 -> Q1
+         # But let's try to be smart with QID if possible
+         qid = (
+             str(data.get("details", [])[i].get("qid"))
+             if i < len(data.get("details", []))
+             else str(i + 1)
+         )
+         det = details_map.get(qid)
+
+         if det:
+             # Clean feedback text
+             correct_raw = det.get("correct_answer", "?")
+             correct = clean_text(correct_raw)
+
+             explanation_raw = det.get("explanation", "")
+             explanation = clean_text(explanation_raw)
+
+             is_correct = det.get("is_correct", False)
+
+             # 1. Lock Radio
+             radio_updates.append(gr.update(interactive=False))
+
+             # 2. Show Feedback Box
+             css_class = (
+                 "feedback-box-correct" if is_correct else "feedback-box-incorrect"
+             )
+
+             # Title
+             title_text = "Correct Answer!" if is_correct else "Incorrect Answer."
+             title_icon = "✅" if is_correct else "❌"
+
+             html_content = f"""
+             <div class='{css_class}'>
+                 <div class='feedback-header'>
+                     <span class='feedback-icon'>{title_icon}</span>
+                     <span class='feedback-title'>{title_text}</span>
+                 </div>
+                 <div class='feedback-body'>
+                     <div class='feedback-correct-answer'><strong>Correct Answer:</strong> {correct}</div>
+                     {'<div class="feedback-explanation"><strong>Explanation:</strong> ' + explanation + '</div>' if explanation else ''}
+                 </div>
+             </div>
+             """
+
+             feedback_updates.append(gr.update(visible=True, value=html_content))
+         else:
+             # No detail (maybe question didn't exist)
+             radio_updates.append(gr.update(visible=False))
+             feedback_updates.append(gr.update(visible=False))
+
+     # 3. Final Score Msg
+     percentage = data.get("percentage", 0)
+     emoji = "🏆" if percentage >= 80 else "📊"
+
+     # Create a nice result card
+     score_html = f"""
+     <div class='result-card'>
+         <div class='result-header'>{emoji} Quiz Completed!</div>
+         <div class='result-score'>Your Score: {data.get('score')} / {data.get('total')}</div>
+         <div class='result-percentage'>({percentage}%)</div>
+     </div>
+     """
+
+     return (
+         radio_updates + feedback_updates + [gr.update(visible=True, value=score_html)]
+     )
+
+
+ def format_grade_result(data):
+     """Standard markdown formatter for standalone grade result"""
+     score = data.get("percentage", 0)
+     emoji = "🎉" if score > 70 else "📚"
+     md = f"# {emoji} Score: {data.get('score')}/{data.get('total')}\n\n"
+     for cx in data.get("details", []):
+         md += f"- **Q{cx['qid']}**: {cx['is_correct'] and '✅' or '❌'} (Correct: {cx.get('correct_answer')})\n"
+     return md
+
+
+ # ----------------------------------------------------------------------
+ # CSS Styling
+ # ----------------------------------------------------------------------
+
+ custom_css = """
+ @import url('https://fonts.googleapis.com/css2?family=Poppins:wght@300;400;500;600;700&display=swap');
+
+ body {
+     font-family: 'Poppins', sans-serif !important;
+     background: #f8fafc; /* Lighter background */
+     color: #334155;
+     font-weight: 400; /* Regular weight by default */
+ }
+
+ .gradio-container {
+     max-width: 900px !important;
+     margin: 40px auto !important;
+     background: #ffffff;
+     border-radius: 24px;
+     box-shadow: 0 20px 40px -10px rgba(0,0,0,0.1);
+     padding: 0 !important;
+     overflow: hidden;
+     border: 1px solid rgba(255,255,255,0.8);
+ }
+
+ /* ================= HEADER ================= */
+ .header-box {
+     background: linear-gradient(135deg, #4f46e5 0%, #7c3aed 100%);
+     color: white;
+     padding: 60px 40px;
+     text-align: center;
+     position: relative;
+     overflow: hidden;
+     margin-bottom: 30px;
+ }
+
+ .header-box::before {
+     content: '';
+     position: absolute;
+     top: -50%;
+     left: -50%;
+     width: 200%;
+     height: 200%;
+     background: radial-gradient(circle, rgba(255,255,255,0.1) 0%, transparent 60%);
+     animation: rotate 20s linear infinite;
+ }
+
+ .header-box h1 {
+     color: white !important;
+     margin: 0;
+     font-size: 3em !important;
+     font-weight: 700;
+     letter-spacing: -1px;
+     text-shadow: 0 4px 10px rgba(0,0,0,0.2);
+     position: relative;
+     z-index: 1;
+ }
+
+ .header-box p {
+     color: #e0e7ff !important;
+     font-size: 1.25em !important;
+     margin-top: 15px;
+     font-weight: 300;
+     position: relative;
+     z-index: 1;
+ }
+
+ @keyframes rotate {
+     from { transform: rotate(0deg); }
+     to { transform: rotate(360deg); }
+ }
+
+ /* ================= INPUT PANEL ================= */
+ .gradio-row {
+     gap: 30px !important;
+     padding: 0 40px 40px 40px;
+ }
+
+ /* Logic to remove padding from internal rows if needed, simplified here */
+
+ /* Buttons */
+ button.primary {
+     background: linear-gradient(90deg, #4f46e5 0%, #6366f1 100%) !important;
+     border: none !important;
+     color: white !important;
+     font-weight: 600 !important;
+     padding: 12px 24px !important;
+     border-radius: 12px !important;
+     box-shadow: 0 4px 15px rgba(79, 70, 229, 0.4) !important;
+     transition: all 0.3s ease !important;
+ }
+
+ button.primary:hover {
+     transform: translateY(-2px);
+     box-shadow: 0 8px 25px rgba(79, 70, 229, 0.5) !important;
+ }
+
+ button.secondary {
+     background: #f3f4f6 !important;
+     color: #4b5563 !important;
+     border: 1px solid #e5e7eb !important;
+     border-radius: 12px !important;
+ }
+
+ button.secondary:hover {
+     background: #e5e7eb !important;
+ }
+
+ /* ================= QUIZ CARDS ================= */
+ .quiz-question {
+     background: #ffffff;
+     border-radius: 16px;
+     padding: 25px;
+     margin-bottom: 30px !important;
+     border: 1px solid #e5e7eb;
+     box-shadow: 0 10px 15px -3px rgba(0, 0, 0, 0.03), 0 4px 6px -2px rgba(0, 0, 0, 0.02);
+     transition: transform 0.2s ease, box-shadow 0.2s ease;
+ }
+
+ .quiz-question:hover {
+     transform: translateY(-2px);
+     box-shadow: 0 20px 25px -5px rgba(0, 0, 0, 0.05), 0 10px 10px -5px rgba(0, 0, 0, 0.02);
+ }
+
+ .quiz-question span { /* Label/Title */
+     font-size: 1.15em !important;
+     font-weight: 600 !important;
+     color: #111827;
+     margin-bottom: 20px;
+     display: block;
+     line-height: 1.5;
+ }
+
+ /* Options Wrapper (The Radio Group) */
+ .quiz-question .wrap {
+     display: flex !important;
+     flex-direction: column !important;
+     gap: 12px !important;
+ }
+
+ /* Individual Option Label */
+ .quiz-question .wrap label {
+     display: flex !important;
+     align-items: center !important;
+     background: #f9fafb;
+     border: 2px solid #e5e7eb !important; /* Thick border */
+     padding: 15px 20px !important;
+     border-radius: 12px !important;
+     cursor: pointer;
+     transition: all 0.2s cubic-bezier(0.4, 0, 0.2, 1);
+     font-size: 1.05em;
+     color: #4b5563;
+ }
+
+ .quiz-question .wrap label:hover {
+     background: #f3f4f6;
+     border-color: #6366f1 !important;
+     color: #4f46e5;
+ }
+
+ .quiz-question .wrap label.selected {
+     background: #eef2ff !important;
+     border-color: #4f46e5 !important;
+     color: #4338ca !important;
+     font-weight: 600;
+     box-shadow: 0 4px 6px -1px rgba(79, 70, 229, 0.1);
+ }
+
+ /* Hide default circle if possible, or style it.
+    Gradio's radio inputs are tricky to hide fully without breaking accessibility,
+    but we can style the container enough. */
+
+ /* ================= RESULTS & FEEDBACK ================= */
+
+ /* Success/Error Cards */
+ .feedback-box-correct, .feedback-box-incorrect {
+     margin-top: 20px;
+     padding: 20px;
+     border-radius: 12px;
+     animation: popIn 0.4s cubic-bezier(0.175, 0.885, 0.32, 1.275);
+     position: relative;
+     overflow: hidden;
+ }
+
+ .feedback-box-correct {
+     background: linear-gradient(135deg, #ecfdf5 0%, #d1fae5 100%);
+     border: 1px solid #10b981;
+     color: #065f46;
+ }
+
+ .feedback-box-incorrect {
+     background: linear-gradient(135deg, #fef2f2 0%, #fee2e2 100%);
+     border: 1px solid #ef4444;
+     color: #991b1b;
+ }
+
+ .feedback-header {
+     display: flex;
+     align-items: center;
+     gap: 12px;
+     margin-bottom: 12px;
+     font-size: 1.2em;
+     font-weight: 700;
+ }
+
+ .feedback-icon {
+     font-size: 1.4em;
+     background: rgba(255,255,255,0.5);
+     border-radius: 50%;
+     width: 32px;
+     height: 32px;
+     display: flex;
+     align-items: center;
+     justify-content: center;
+     box-shadow: 0 2px 4px rgba(0,0,0,0.05);
+ }
+
+ .feedback-body {
+     background: rgba(255,255,255,0.4);
+     padding: 15px;
+     border-radius: 8px;
+     font-size: 1em;
+     line-height: 1.6;
+ }
+
+ .feedback-correct-answer {
+     font-weight: 600;
+     margin-bottom: 8px;
+     color: #064e3b; /* darker green */
+ }
+ .feedback-box-incorrect .feedback-correct-answer {
+     color: #7f1d1d; /* darker red */
+ }
+
+ /* Summary / Explanation Box */
+ .summary-box {
+     background: linear-gradient(135deg, #ffffff 0%, #f8faff 100%);
+     border-radius: 20px;
+     padding: 35px 40px;
+     border: 1px solid #e0e7ff;
+     box-shadow: 0 8px 30px rgba(79, 70, 229, 0.06);
+     font-size: 1.05em;
+     line-height: 1.9;
+     color: #374151;
+     position: relative;
+     overflow: hidden;
+ }
+
+ .summary-box::before {
+     content: '';
+     position: absolute;
+     top: 0;
+     left: 0;
+     right: 0;
+     height: 4px;
+     background: linear-gradient(90deg, #4f46e5, #7c3aed, #a78bfa);
+ }
+
+ .summary-box h2 {
+     font-size: 1.4em;
+     font-weight: 700;
+     color: #312e81;
+     margin: 0 0 18px 0;
+     padding-bottom: 12px;
+     border-bottom: 2px solid #e0e7ff;
+     display: flex;
+     align-items: center;
+     gap: 10px;
+ }
+
+ .summary-box h3 {
+     font-size: 1.15em;
+     font-weight: 600;
+     color: #4338ca;
+     margin: 20px 0 10px 0;
+ }
+
+ .summary-box p {
+     margin: 0 0 14px 0;
+     text-align: justify;
+ }
+
+ .summary-box ul, .summary-box ol {
+     margin: 10px 0 16px 0;
+     padding-left: 24px;
+ }
+
+ .summary-box li {
+     margin-bottom: 8px;
+     position: relative;
+ }
+
+ .summary-box strong {
+     color: #312e81;
+     font-weight: 600;
+ }
+
+ .summary-box .summary-footer {
+     margin-top: 20px;
+     padding-top: 14px;
+     border-top: 1px solid #e0e7ff;
+     font-size: 0.85em;
+     color: #9ca3af;
+     text-align: left;
+ }
+
+ /* Example Buttons */
+ #examples-container {
+     margin: 15px 0;
+     padding: 10px;
+     background: #f3f4f6;
+     border-radius: 12px;
+ }
+
+ .example-btn {
+     background: #ffffff !important;
+     border: 1px solid #e5e7eb !important;
+     color: #6366f1 !important; /* Indigo text */
+     font-size: 0.85em !important;
+     padding: 2px 10px !important;
+     border-radius: 20px !important; /* Pill shape */
+     transition: all 0.2s ease !important;
+     font-weight: 500 !important;
+     box-shadow: 0 1px 2px rgba(0,0,0,0.05) !important;
+ }
+
+ .example-btn:hover {
+     background: #f5f7ff !important;
+     border-color: #6366f1 !important;
+     transform: translateY(-1px);
+     box-shadow: 0 4px 6px -1px rgba(99, 102, 241, 0.1) !important;
+ }
+
+ /* Result Card */
+ .result-card {
+     background: linear-gradient(135deg, #4f46e5 0%, #7c3aed 100%);
+     border-radius: 20px;
+     padding: 40px;
+     text-align: center;
+     color: white;
+     box-shadow: 0 20px 25px -5px rgba(79, 70, 229, 0.3);
+     margin-top: 40px;
+     animation: slideUp 0.6s cubic-bezier(0.16, 1, 0.3, 1);
+ }
+
+ .result-header {
+     font-size: 2em;
+     font-weight: 800;
+     margin-bottom: 15px;
+     text-shadow: 0 2px 4px rgba(0,0,0,0.1);
+ }
+
+ .result-score {
+     font-size: 3.5em;
+     font-weight: 800;
+     margin: 10px 0;
+     background: -webkit-linear-gradient(#ffffff, #e0e7ff);
+     -webkit-background-clip: text;
+     -webkit-text-fill-color: transparent;
+ }
+
+ .result-percentage {
+     font-size: 1.5em;
+     opacity: 0.9;
+     font-weight: 500;
+ }
+
+ /* Keyframes */
+ @keyframes popIn {
+     from { opacity: 0; transform: scale(0.95) translateY(-5px); }
+     to { opacity: 1; transform: scale(1) translateY(0); }
+ }
+
+ @keyframes slideUp {
+     from { opacity: 0; transform: translateY(40px); }
+     to { opacity: 1; transform: translateY(0); }
+ }
+
+ /* Hide Gradio Footer */
+ footer { display: none !important; }
+ .gradio-container .prose.footer-content { display: none !important; }
+ """
+
819
+ # ----------------------------------------------------------------------
820
+ # Main App
821
+ # ----------------------------------------------------------------------
822
+
823
+ with gr.Blocks(css=custom_css, title="SmartTutor AI") as demo:
824
+
825
+ # State
826
+ quiz_state = gr.State()
827
+
828
+ with gr.Column(elem_classes="header-box"):
829
+ gr.HTML(
830
+ """
831
+ <div style='color: white;'>
832
+ <h1 style='color: white; font-size: 3em; margin: 0;'>🧠 SmartTutor AI</h1>
833
+ <p style='color: #e0e7ff; font-size: 1.25em;'>Your intelligent companion for learning and assessment</p>
834
+ </div>
835
+ """
836
+ )
837
+
838
+ with gr.Row():
839
+ # Left Panel: Controls
+ with gr.Column(scale=1, variant="panel"):
+ file_input = gr.File(
+ label="📄 Upload Document", file_types=[".pdf", ".txt"], type="filepath"
+ )
+ user_input = gr.Textbox(
+ label="✍️ Request",
+ placeholder="e.g. 'Summarize this' or 'Create a quiz'",
+ lines=3,
+ )
+
+ # Quick Examples
+ with gr.Column(elem_id="examples-container"):
+ gr.Markdown("✨ **Quick Actions:**")
+ with gr.Row():
+ ex_summarize = gr.Button(
+ "📝 Summary (3 lines)", size="sm", elem_classes="example-btn"
+ )
+ ex_quiz = gr.Button(
+ "🧪 3 Questions", size="sm", elem_classes="example-btn"
+ )
+ with gr.Row():
+ ex_explain = gr.Button(
+ "💡 Main Concepts", size="sm", elem_classes="example-btn"
+ )
+
+ with gr.Row():
+ submit_btn = gr.Button("🚀 Run", variant="primary")
+ clear_btn = gr.Button("🧹 Clear")
+
+ # Right Panel: Results
+ with gr.Column(scale=2):
+
+ # 1. Summary / Text Output
+ summary_output = gr.HTML(visible=True)
+
+ # 2. Quiz Group (Hidden initially)
+ with gr.Group(visible=False) as quiz_group:
+ gr.Markdown("## 📝 Quiz Time")
+ gr.Markdown("Select the correct answer for each question.")
+
+ # Create 10 Questions + Feedback slots
+ q_radios = []
+ q_feedbacks = []
+
+ for i in range(10):
+ # Radio
+ r = gr.Radio(
+ label=f"Question {i+1}",
+ visible=False,
+ elem_classes="quiz-question",
+ )
+ q_radios.append(r)
+
+ # Feedback (Markdown/HTML)
+ fb = gr.HTML(visible=False)
+ q_feedbacks.append(fb)
+
+ check_btn = gr.Button(
+ "✅ Check Answers", variant="primary", visible=False
+ )
+
+ # Final Result Message
+ quiz_result_msg = gr.Markdown(visible=False)
+
+ # ------------------------------------------------------------------
+ # Events
+ # ------------------------------------------------------------------
+
+ # 1. Run Agent
+ # Returns: [Summary, QuizGroup, QuizState]
+ submit_btn.click(
+ fn=run_agent,
+ inputs=[file_input, user_input],
+ outputs=[summary_output, quiz_group, quiz_state],
+ ).success(
+ # On success, update the quiz UI components (21 items)
+ fn=render_quiz,
+ inputs=[quiz_state],
+ outputs=q_radios + q_feedbacks + [check_btn],
+ )
+
+ # Example Buttons Handling
+ # These will ONLY fill the text box. User must click 'Run' manually.
+
+ ex_summarize.click(
+ fn=lambda: "Summarize this document strictly in exactly 3 lines.",
+ outputs=[user_input],
+ )
+
+ ex_quiz.click(
+ fn=lambda: "Generate a quiz with exactly 3 multiple-choice questions.",
+ outputs=[user_input],
+ )
+
+ ex_explain.click(
+ fn=lambda: "Explain the 5 most important core concepts in this document clearly.",
+ outputs=[user_input],
+ )
+
+ # 2. Check Answers
+ # Inputs: State + 10 Radios
+ # Outputs: 10 Radios (Lock) + 10 Feedbacks (Show) + ResultMsg
+ check_btn.click(
+ fn=grade_quiz_ui,
+ inputs=[quiz_state] + q_radios,
+ outputs=q_radios + q_feedbacks + [quiz_result_msg],
+ )
+
+ # 3. Clear
+ def reset_ui():
+ # Reset everything to default
+ updates = [
+ gr.update(value=None, interactive=True, visible=False)
+ ] * 10  # Radios
+ fb_updates = [gr.update(value="", visible=False)] * 10  # Feedbacks
+ return (
+ None,
+ "",  # Inputs
+ gr.update(value="", visible=True),  # Summary
+ gr.update(visible=False),  # Quiz Group
+ None,  # State
+ *updates,
+ *fb_updates,  # Radios + Feedbacks
+ gr.update(visible=False),  # CheckBtn
+ gr.update(visible=False),  # ResultMsg
+ )
+
+ clear_btn.click(
+ fn=reset_ui,
+ inputs=[],
+ outputs=[file_input, user_input, summary_output, quiz_group, quiz_state]
+ + q_radios
+ + q_feedbacks
+ + [check_btn, quiz_result_msg],
+ )
+
+ if __name__ == "__main__":
+ print("Starting SmartTutor AI...")
+ demo.launch(share=False)
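The quiz UI above uses a fixed-slot pattern: 10 radio components are built up front, and `render_quiz` fans a variable-length question list out into them, hiding the unused slots. A minimal standalone sketch of that fan-out, with plain dicts standing in for the `gr.update(...)` calls (names here are hypothetical simplifications, not the app's actual helpers):

```python
# Fixed-slot rendering: a dynamic list of questions fills 10 pre-built
# component slots; the leftover slots are marked hidden.
def render_slots(questions, n_slots=10):
    updates = []
    for i in range(n_slots):
        if i < len(questions):
            updates.append({"visible": True, "label": questions[i]})
        else:
            updates.append({"visible": False})
    return updates

slot_updates = render_slots(["Q1", "Q2", "Q3"])
print(len(slot_updates))  # 10
```

This is why the event handlers always wire all 21 outputs (10 radios + 10 feedbacks + 1 button): Gradio event outputs are fixed at build time, so visibility updates emulate a dynamic list.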
architecture.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:979be7256d4949702bc8e598e555162c4c29ef5b22df58fb386963bf8ef745fe
+ size 173342
quizzes_db.json ADDED
@@ -0,0 +1,253 @@
+ {
+ "bc0a0de5-7257-4648-991f-dc148c41c4e2": {
+ "file_path": "C:\\Users\\Yaz00\\AppData\\Local\\Temp\\gradio\\cbb1f0b598874cdd2d33694a480d4a837fc795b1ef2c35bf341accce66d00612\\AISA 3 2.pdf",
+ "questions": [
+ {
+ "qid": "1",
+ "question": "What does AISA stand for?",
+ "options": {
+ "A": "Agentic Intelligence Systems Architecture",
+ "B": "Agentic AI Systems Architecture",
+ "C": "Autonomous Intelligent Systems Architecture",
+ "D": "Advanced Intelligent Systems Architecture"
+ },
+ "correct": "B",
+ "explanation": "",
+ "supporting_context": ""
+ },
+ {
+ "qid": "2",
+ "question": "What is a key feature of agentic AI systems according to the document?",
+ "options": {
+ "A": "Autonomous reasoning",
+ "B": "Limited interaction",
+ "C": "Static planning",
+ "D": "Manual reasoning"
+ },
+ "correct": "A",
+ "explanation": "",
+ "supporting_context": ""
+ },
+ {
+ "qid": "3",
+ "question": "What does the AISA framework aim to provide?",
+ "options": {
+ "A": "A fragmented approach to AI development",
+ "B": "A focus on ad hoc development",
+ "C": "A single-layered AI architecture",
+ "D": "A unified architectural framework for agentic AI systems"
+ },
+ "correct": "D",
+ "explanation": "",
+ "supporting_context": ""
+ }
+ ]
+ },
+ "ee196ded-ed87-45d9-90d7-3a82ca14808e": {
+ "file_path": "C:\\Users\\Yaz00\\AppData\\Local\\Temp\\gradio\\cbb1f0b598874cdd2d33694a480d4a837fc795b1ef2c35bf341accce66d00612\\AISA 3 2.pdf",
+ "questions": [
+ {
+ "qid": "1",
+ "question": "What does AISA stand for?",
+ "options": {
+ "A": "Advanced Intelligent Systems Architecture",
+ "B": "Artificial Intelligence Systems Architecture",
+ "C": "Autonomous Intelligent Systems Architecture",
+ "D": "Agentic AI Systems Architecture"
+ },
+ "correct": "D",
+ "explanation": "",
+ "supporting_context": ""
+ },
+ {
+ "qid": "2",
+ "question": "What is one of the main focuses of the AISA framework?",
+ "options": {
+ "A": "Simplifying AI models",
+ "B": "Reducing AI costs",
+ "C": "Integration of reasoning and infrastructure",
+ "D": "Fragmented development of AI systems"
+ },
+ "correct": "C",
+ "explanation": "",
+ "supporting_context": ""
+ },
+ {
+ "qid": "3",
+ "question": "When was the AISA paper published?",
+ "options": {
+ "A": "December 31, 2025",
+ "B": "January 1, 2025",
+ "C": "January 6, 2026",
+ "D": "February 1, 2026"
+ },
+ "correct": "C",
+ "explanation": "",
+ "supporting_context": ""
+ }
+ ]
+ },
+ "3a62fe3b-63ba-46a0-a0c4-09d261e70551": {
+ "file_path": "C:\\Users\\Yaz00\\AppData\\Local\\Temp\\gradio\\cbb1f0b598874cdd2d33694a480d4a837fc795b1ef2c35bf341accce66d00612\\AISA 3 2.pdf",
+ "questions": [
+ {
+ "qid": "1",
+ "question": "What does AISA stand for?",
+ "options": {
+ "A": "Agentic Intelligent Systems Architecture",
+ "B": "Advanced Intelligent Systems Architecture",
+ "C": "Autonomous Intelligent Systems Architecture",
+ "D": "Agentic AI Systems Architecture"
+ },
+ "correct": "D",
+ "explanation": "",
+ "supporting_context": ""
+ },
+ {
+ "qid": "2",
+ "question": "What is one of the main focuses of the AISA framework?",
+ "options": {
+ "A": "Limited interaction with environments",
+ "B": "Fragmented development of AI systems",
+ "C": "Ethical oversight and governance",
+ "D": "Simplified tool execution"
+ },
+ "correct": "C",
+ "explanation": "",
+ "supporting_context": ""
+ },
+ {
+ "qid": "3",
+ "question": "What does the AISA framework aim to unify?",
+ "options": {
+ "A": "Cognitive agent design and tool execution",
+ "B": "Hardware and software integration",
+ "C": "Data collection and analysis",
+ "D": "User interface design and user experience"
+ },
+ "correct": "A",
+ "explanation": "",
+ "supporting_context": ""
+ }
+ ]
+ },
+ "f652d60d-12fe-4641-95ba-219b6a24fd2b": {
+ "file_path": "C:\\Users\\Yaz00\\AppData\\Local\\Temp\\gradio\\cbb1f0b598874cdd2d33694a480d4a837fc795b1ef2c35bf341accce66d00612\\AISA 3 2.pdf",
+ "questions": [
+ {
+ "qid": "1",
+ "question": "What does AISA stand for?",
+ "options": {
+ "A": "Agentic AI Systems Architecture",
+ "B": "Agentic Intelligent Systems Architecture",
+ "C": "Autonomous Intelligent Systems Architecture",
+ "D": "Advanced Intelligent Systems Architecture"
+ },
+ "correct": "C",
+ "explanation": "",
+ "supporting_context": ""
+ },
+ {
+ "qid": "2",
+ "question": "What is one of the main focuses of the AISA framework?",
+ "options": {
+ "A": "Simplified tool execution",
+ "B": "Fragmented development of AI systems",
+ "C": "Ethical oversight and governance",
+ "D": "Limited interaction with environments"
+ },
+ "correct": "B",
+ "explanation": "",
+ "supporting_context": ""
+ },
+ {
+ "qid": "3",
+ "question": "What does the AISA framework aim to unify?",
+ "options": {
+ "A": "Hardware and software integration",
+ "B": "Cognitive agent design and tool execution",
+ "C": "Data collection and analysis",
+ "D": "User interface design and user experience"
+ },
+ "correct": "B",
+ "explanation": "",
+ "supporting_context": ""
+ }
+ ]
+ },
+ "d39aeec6-d084-4188-b079-ec74771cb31f": {
+ "file_path": "C:\\Users\\Yaz00\\AppData\\Local\\Temp\\gradio\\cbb1f0b598874cdd2d33694a480d4a837fc795b1ef2c35bf341accce66d00612\\AISA 3 2.pdf",
+ "questions": [
+ {
+ "qid": "1",
+ "question": "What is the primary focus of the AISA architecture?",
+ "options": {
+ "A": "Multi-agent systems",
+ "B": "Single-agent systems",
+ "C": "Data analysis",
+ "D": "Network security"
+ },
+ "correct": "A",
+ "explanation": "",
+ "supporting_context": ""
+ },
+ {
+ "qid": "2",
+ "question": "Which layer ensures alignment with human and institutional values in the AISA architecture?",
+ "options": {
+ "A": "Agentic Infrastructure Layer",
+ "B": "Cognitive Agent Layer",
+ "C": "Evaluation & Feedback Layer",
+ "D": "LLM Foundation Layer"
+ },
+ "correct": "C",
+ "explanation": "",
+ "supporting_context": ""
+ }
+ ]
+ },
+ "4dd3612d-9e7e-489a-8ba2-f53f9f6320d3": {
+ "file_path": "C:\\Users\\Yaz00\\AppData\\Local\\Temp\\gradio\\cbb1f0b598874cdd2d33694a480d4a837fc795b1ef2c35bf341accce66d00612\\AISA 3 2.pdf",
+ "questions": [
+ {
+ "qid": "1",
+ "question": "What does AISA stand for?",
+ "options": {
+ "A": "Autonomous Intelligent Systems Architecture",
+ "B": "Advanced Intelligent Systems Architecture",
+ "C": "Agentic Intelligent Systems Architecture",
+ "D": "Agentic AI Systems Architecture"
+ },
+ "correct": "D",
+ "explanation": "",
+ "supporting_context": ""
+ },
+ {
+ "qid": "2",
+ "question": "What is one of the main focuses of the AISA framework?",
+ "options": {
+ "A": "Simplified tool execution",
+ "B": "Ethical oversight and governance",
+ "C": "Fragmented development of AI systems",
+ "D": "Reducing AI capabilities"
+ },
+ "correct": "B",
+ "explanation": "",
+ "supporting_context": ""
+ },
+ {
+ "qid": "3",
+ "question": "When was the AISA framework introduced?",
+ "options": {
+ "A": "December 31, 2025",
+ "B": "February 1, 2026",
+ "C": "January 1, 2025",
+ "D": "January 6, 2026"
+ },
+ "correct": "D",
+ "explanation": "",
+ "supporting_context": ""
+ }
+ ]
+ }
+ }
requirements.txt ADDED
@@ -0,0 +1,5 @@
+ crewai>=0.1.0
+ gradio>=4.0.0
+ python-dotenv>=1.0.0
+ pymupdf>=1.23.0
+ pydantic>=2.0.0
smart_tutor_core.py ADDED
@@ -0,0 +1,677 @@
+ import os, json, re, random
+ import uuid
+ import time
+ import logging
+ from typing import Literal, List, Dict, Any, Optional
+
+ from pydantic import BaseModel, Field, ValidationError
+ from crewai import Agent, Task, Crew, Process
+ from crewai.tools import tool
+ from crewai.llm import LLM
+
+ import dotenv
+
+ # Load OPENAI_API_KEY and app settings from a local .env file
+ # (see .env.example); avoid hardcoded, machine-specific paths.
+ dotenv.load_dotenv()
+
+ # ============================================================
+ # Guardrails: logging, retries, deterministic config
+ # ============================================================
+
+ logging.basicConfig(
+ level=logging.INFO,
+ format="%(asctime)s | %(levelname)s | %(message)s",
+ )
+ logger = logging.getLogger("smart_tutor_guardrails")
+
+ DETERMINISTIC_TEMPERATURE = float(os.getenv("DETERMINISTIC_TEMPERATURE", "0.1"))
+ TOOL_MAX_RETRIES = int(os.getenv("TOOL_MAX_RETRIES", "2"))
+
+ # ============================================================
+ # Guardrails: rate limits / timeouts / policies
+ # ============================================================
+
+ MAX_FILE_SIZE_MB = int(os.getenv("MAX_FILE_SIZE_MB", "500"))
+ MAX_PDF_PAGES = int(os.getenv("MAX_PDF_PAGES", "2000"))
+ PDF_EXTRACTION_TIMEOUT = float(os.getenv("PDF_EXTRACTION_TIMEOUT", "200"))  # seconds
+
+ ALLOWED_TOOLS = {"process_file", "store_quiz", "grade_quiz"}
+
+ PROMPT_INJECTION_PATTERNS = [
+ "ignore previous instructions",
+ "ignore all previous instructions",
+ "system:",
+ "assistant:",
+ "developer:",
+ "act as",
+ "you must",
+ "follow these instructions",
+ "override",
+ ]
+
+ # ============================================================
+ # Helpers
+ # ============================================================
+
+
+ def clean_text(text: str) -> str:
+ text = text.replace("\x00", " ")
+ text = re.sub(r"[ \t]+", " ", text)
+ text = re.sub(r"\n{3,}", "\n\n", text)
+ return text.strip()
+
+
+ def detect_prompt_injection(text: str) -> bool:
+ lower = text.lower()
+ return any(p in lower for p in PROMPT_INJECTION_PATTERNS)
+
+
+ def chunk_text(text: str, max_chars: int = 1200, overlap: int = 150) -> List[str]:
+ text = clean_text(text)
+ if not text:
+ return []
+ chunks = []
+ start = 0
+ n = len(text)
+ while start < n:
+ end = min(start + max_chars, n)
+ part = text[start:end].strip()
+ if part:
+ chunks.append(part)
+ if end == n:
+ break
+ start = max(0, end - overlap)
+ return chunks
+
+
+ def keyword_retrieve(chunks: List[str], query: str, top_k: int) -> List[str]:
+ q_terms = [w for w in re.findall(r"\w+", query.lower()) if len(w) > 2]
+
+ def score(c: str) -> int:
+ c_l = c.lower()
+ return sum(1 for t in q_terms if t in c_l)
+
+ ranked = sorted(chunks, key=score, reverse=True)
+ return [c for c in ranked[:top_k] if c]
+
+
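The `chunk_text` helper above slides a fixed-size window with a small overlap, so context is not cut mid-sentence at chunk boundaries. A standalone sketch of that behavior with shrunken defaults (function and sizes here are demo stand-ins for the real `chunk_text(max_chars=1200, overlap=150)`):

```python
# Overlapping chunker, same loop shape as chunk_text above but with
# small sizes so the overlap is easy to see.
def chunk(text, max_chars=20, overlap=5):
    chunks, start, n = [], 0, len(text)
    while start < n:
        end = min(start + max_chars, n)
        part = text[start:end].strip()
        if part:
            chunks.append(part)
        if end == n:
            break
        # Step forward by (max_chars - overlap): each chunk re-reads
        # the last `overlap` characters of the previous one.
        start = max(0, end - overlap)
    return chunks

pieces = chunk("abcdefghij" * 5)  # 50 characters
print(len(pieces))  # 3
print(pieces[0][-5:] == pieces[1][:5])  # True: 5-char overlap
```

`keyword_retrieve` then ranks these chunks by how many query terms they contain, which is a deliberately simple keyword score rather than embedding-based retrieval.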
+ # ============================================================
+ # File extraction with limits + timeout
+ # ============================================================
+
+
+ def extract_text(file_path: str) -> str:
+ if os.path.getsize(file_path) > MAX_FILE_SIZE_MB * 1024 * 1024:
+ raise ValueError(f"File too large (> {MAX_FILE_SIZE_MB} MB)")
+
+ ext = os.path.splitext(file_path)[1].lower()
+
+ if ext == ".txt":
+ with open(file_path, "r", encoding="utf-8", errors="ignore") as f:
+ return f.read()
+
+ if ext == ".pdf":
+ import fitz  # PyMuPDF
+
+ start_time = time.time()
+ doc = fitz.open(file_path)
+
+ if len(doc) > MAX_PDF_PAGES:
+ raise ValueError(f"PDF exceeds max page limit ({MAX_PDF_PAGES})")
+
+ parts = []
+ for i in range(len(doc)):
+ if time.time() - start_time > PDF_EXTRACTION_TIMEOUT:
+ raise TimeoutError("PDF extraction timeout")
+ t = doc.load_page(i).get_text("text") or ""
+ t = clean_text(t)
+ if t:
+ parts.append(t)
+ return "\n\n".join(parts).strip()
+
+ raise ValueError("Unsupported file type (PDF/TXT only).")
+
+
+ # ============================================================
+ # Schemas (Structured Inputs / Outputs)
+ # ============================================================
+
+
+ class ProcessArgs(BaseModel):
+ file_path: str = Field(..., description="Local path to PDF/TXT")
+ query: str = Field(..., description="User question or instruction")
+ mode: Literal["summarize", "quiz", "explain"] = Field(..., description="Task type")
+ top_k: int = Field(6, ge=1, le=15, description="How many chunks to use as context")
+
+
+ class QuizQuestion(BaseModel):
+ qid: str
+ question: str
+ options: Dict[Literal["A", "B", "C", "D"], str]
+ correct: Literal["A", "B", "C", "D"]
+ explanation: str = ""
+ supporting_context: str = ""
+
+
+ class StoreQuizArgs(BaseModel):
+ file_path: str = Field(
+ ..., description="The absolute file path of the document used"
+ )
+ questions: List[QuizQuestion]
+
+
+ class GradeQuizArgs(BaseModel):
+ quiz_id: str
+ answers: Dict[str, Literal["A", "B", "C", "D"]]
+
+
+ class ToolError(BaseModel):
+ error: str
+ details: Optional[Any] = None
+
+
+ class ProcessFileResult(BaseModel):
+ mode: str
+ query: str
+ context_chunks: List[str]
+ stats: Dict[str, Any]
+
+
+ class StoreQuizResult(BaseModel):
+ quiz_id: str
+ questions: List[Dict[str, Any]]  # masked questions
+
+
+ class GradeQuizResult(BaseModel):
+ quiz_id: str
+ score: int
+ total: int
+ percentage: float
+ file_path: Optional[str] = None
+ details: List[Dict[str, Any]]
+
+
+ # ============================================================
+ # Memory/State with Persistence
+ # ============================================================
+
+ QUIZ_FILE = "quizzes_db.json"
+
+
+ def load_quizzes():
+ if os.path.exists(QUIZ_FILE):
+ try:
+ with open(QUIZ_FILE, "r", encoding="utf-8") as f:
+ return json.load(f)
+ except (OSError, json.JSONDecodeError):
+ return {}
+ return {}
+
+
+ def save_quizzes(data):
+ try:
+ with open(QUIZ_FILE, "w", encoding="utf-8") as f:
+ json.dump(data, f, ensure_ascii=False, indent=2)
+ except Exception as e:
+ logger.error(f"Failed to save quizzes: {e}")
+
+
+ QUIZ_STORE: Dict[str, Dict[str, Any]] = load_quizzes()
+
+
+ # ============================================================
+ # Tool wrapper: retries + logs + redaction
+ # ============================================================
+
+
+ def _redact(obj: Any) -> Any:
+ """Redact secrets + quiz answer key in logs."""
+ try:
+ if isinstance(obj, dict):
+ out = {}
+ for k, v in obj.items():
+ lk = str(k).lower()
+ if lk in {"openai_api_key", "api_key", "authorization", "x-api-key"}:
+ out[k] = "***"
+ elif lk == "correct":
+ out[k] = "***"
+ else:
+ out[k] = _redact(v)
+ return out
+ if isinstance(obj, list):
+ return [_redact(x) for x in obj]
+ if isinstance(obj, str):
+ key = os.getenv("OPENAI_API_KEY") or ""
+ if key and key in obj:
+ return obj.replace(key, "***")
+ return obj
+ return obj
+ except Exception:
+ return "<redacted>"
+
+
+ def safe_tool_call(tool_name: str, fn):
+ if tool_name not in ALLOWED_TOOLS:
+ raise RuntimeError("Tool not allowed by policy")
+
+ last_err = None
+ for attempt in range(1, TOOL_MAX_RETRIES + 2):
+ try:
+ logger.info(f"[TOOL_CALL] {tool_name} attempt={attempt}")
+ out = fn()
+ logger.info(
+ f"[TOOL_RESULT] {tool_name} attempt={attempt} out={json.dumps(_redact(out), ensure_ascii=False)[:900]}"
+ )
+ return out
+ except Exception as e:
+ last_err = e
+ logger.warning(
+ f"[TOOL_ERROR] {tool_name} attempt={attempt} err={type(e).__name__}"
+ )
+ time.sleep(0.2 * attempt)
+ raise last_err
+
+
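`safe_tool_call` above implements a simple retry loop: one initial attempt plus `TOOL_MAX_RETRIES` retries with linearly growing sleep, re-raising the last error only if every attempt fails. A standalone sketch of that pattern (names and the zero-delay default are illustrative, not the app's constants):

```python
import time

# Retry with linear backoff, same control flow as safe_tool_call:
# range(1, max_retries + 2) yields the first attempt plus the retries.
def with_retries(fn, max_retries=2, base_delay=0.0):
    last_err = None
    for attempt in range(1, max_retries + 2):
        try:
            return fn()
        except Exception as e:
            last_err = e
            time.sleep(base_delay * attempt)  # back off a bit more each time
    raise last_err

calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient")
    return "ok"

result = with_retries(flaky)
print(result, calls["n"])  # ok 3
```

With `max_retries=2` the wrapper tolerates exactly two transient failures, which matches the default `TOOL_MAX_RETRIES` in the config section.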
+ # ============================================================
+ # Tools
+ # ============================================================
+
+
+ @tool("process_file")
+ def process_file(file_path: str, query: str, mode: str, top_k: int = 6) -> str:
+ """Read PDF/TXT, chunk it, retrieve top_k relevant chunks. Returns structured JSON."""
+ try:
+ args = ProcessArgs(file_path=file_path, query=query, mode=mode, top_k=top_k)
+ except ValidationError as ve:
+ return json.dumps(
+ ToolError(error="Invalid arguments", details=ve.errors()).model_dump(),
+ ensure_ascii=False,
+ )
+
+ def _run():
+ # Clean path: remove quotes and whitespace that agents sometimes add
+ clean_path = args.file_path.strip().strip("'\"").strip()
+ if not os.path.exists(clean_path):
+ return ToolError(error=f"Invalid file path: {clean_path}").model_dump()
+
+ try:
+ # Extract from the cleaned path (the one validated above)
+ raw_text = extract_text(clean_path)
+ except Exception as e:
+ return ToolError(
+ error="Extraction failed", details=type(e).__name__
+ ).model_dump()
+
+ if detect_prompt_injection(raw_text):
+ logger.warning(
+ "[SECURITY] Potential prompt injection detected in document. Treating as data only."
+ )
+
+ text = clean_text(raw_text)
+ if not text:
+ return ToolError(error="Empty or unreadable file text.").model_dump()
+
+ chunks = chunk_text(text)
+ if not chunks:
+ return ToolError(error="No chunks produced.").model_dump()
+
+ context = keyword_retrieve(chunks, args.query, args.top_k)
+
+ return ProcessFileResult(
+ mode=args.mode,
+ query=args.query,
+ context_chunks=context,
+ stats={
+ "chunks_total": len(chunks),
+ "chars_extracted": len(text),
+ "top_k": args.top_k,
+ },
+ ).model_dump()
+
+ try:
+ out = safe_tool_call("process_file", _run)
+ return json.dumps(out, ensure_ascii=False)
+ except Exception as e:
+ return json.dumps(
+ ToolError(
+ error="process_file failed", details=type(e).__name__
+ ).model_dump(),
+ ensure_ascii=False,
+ )
+
+
+ def clean_json_input(text: str) -> str:
+ """Clean markdown code blocks and extract JSON object from string."""
+ text = text.strip()
+
+ # Remove markdown code blocks (flexible):
+ # handles ```json ... ``` even if there is text before/after.
+ pattern = r"```(?:json)?\s*(\{.*?\})\s*```"
+ match = re.search(pattern, text, re.DOTALL)
+ if match:
+ return match.group(1)
+
+ # No complete fenced block matched. If the text still starts with ```
+ # (e.g. an unterminated fence), strip the fences and let the caller's
+ # json.loads handle the remainder.
+ if text.startswith("```"):
+ text = re.sub(r"^```(\w+)?\n?", "", text)
+ text = re.sub(r"\n?```$", "", text)
+
+ # Remove single backticks
+ if text.startswith("`") and text.endswith("`"):
+ text = text.strip("`")
+
+ return text.strip()
+
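The fence-stripping step in `clean_json_input` exists because LLM tool calls often arrive wrapped in markdown. A minimal standalone version of just that regex step (`strip_fences` is a demo name, not the app's function):

```python
import json
import re

# Pull a JSON object out of a ```json ... ``` block if one is present;
# otherwise return the stripped text unchanged.
def strip_fences(text):
    m = re.search(r"```(?:json)?\s*(\{.*?\})\s*```", text.strip(), re.DOTALL)
    return m.group(1) if m else text.strip()

raw = 'Here is the quiz:\n```json\n{"quiz_id": "x", "questions": []}\n```'
qid = json.loads(strip_fences(raw))["quiz_id"]
print(qid)  # x
```

Note the non-greedy `\{.*?\}` stops at the first closing brace, so deeply nested objects can be cut short; the tools above compensate with a second, greedy `(\{.*\})` fallback before giving up.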
+ @tool("store_quiz")
+ def store_quiz(quiz_package_json: str) -> str:
+ """Store quiz with hidden answers; return masked quiz (no correct answers)."""
+
+ def _run():
+ try:
+ cleaned_json = clean_json_input(quiz_package_json)
+ # First try: direct parse
+ pkg_raw = json.loads(cleaned_json)
+ except json.JSONDecodeError:
+ # Second try: liberal regex search for { ... }
+ # Use dotall and greedy to capture nested objects
+ match = re.search(r"(\{.*\})", quiz_package_json, re.DOTALL)
+ if match:
+ try:
+ pkg_raw = json.loads(match.group(1))
+ except json.JSONDecodeError as e:
+ return ToolError(
+ error=f"quiz_package_json is not valid JSON. Parse error: {str(e)}",
+ details=f"Input fragment: {quiz_package_json[:200]}...",
+ ).model_dump()
+ else:
+ return ToolError(
+ error="quiz_package_json is not valid JSON (no braces found)",
+ details=f"Input fragment: {quiz_package_json[:200]}...",
+ ).model_dump()
+
+ try:
+ pkg = StoreQuizArgs(**pkg_raw)
+ except ValidationError as ve:
+ return ToolError(
+ error="Invalid quiz_package_json", details=ve.errors()
+ ).model_dump()
+
+ quiz_id = str(uuid.uuid4())
+
+ # Randomize options for each question
+ final_questions = []
+ for q in pkg.questions:
+ # q is a QuizQuestion object
+ original_options = q.options  # dict e.g. {"A": "...", "B": "..."}
+ original_correct_key = q.correct  # "A"
+ correct_text = original_options[original_correct_key]
+
+ # Extract texts
+ option_texts = list(original_options.values())
+ random.shuffle(option_texts)
+
+ # Re-map to A, B, C, D
+ new_options = {}
+ new_correct_key = ""
+ keys = ["A", "B", "C", "D"]
+
+ # Handle cases with fewer than 4 options just in case
+ for i, text in enumerate(option_texts):
+ if i < len(keys):
+ key = keys[i]
+ new_options[key] = text
+ if text == correct_text:
+ new_correct_key = key
+
+ # Update the question object (create a copy/dict)
+ q_dump = q.model_dump()
+ q_dump["options"] = new_options
+ q_dump["correct"] = new_correct_key
+ final_questions.append(q_dump)
+
+ QUIZ_STORE[quiz_id] = {
+ "file_path": pkg.file_path,
+ "questions": final_questions,
+ }
+ save_quizzes(QUIZ_STORE)
+
+ masked = [
+ {"qid": q["qid"], "question": q["question"], "options": q["options"]}
+ for q in final_questions
+ ]
+ return StoreQuizResult(quiz_id=quiz_id, questions=masked).model_dump()
+
+ try:
+ out = safe_tool_call("store_quiz", _run)
+ return json.dumps(out, ensure_ascii=False)
+ except Exception as e:
+ return json.dumps(
+ ToolError(error="store_quiz failed", details=type(e).__name__).model_dump(),
+ ensure_ascii=False,
+ )
+
+
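The shuffling step inside `store_quiz` randomizes option order and then recomputes which letter holds the correct answer. A standalone sketch of that remapping (`shuffle_options` and the seeded `random.Random` are demo stand-ins for the inline loop above):

```python
import random

# Shuffle the option texts, re-label them A-D, and find the letter
# that now holds the original correct answer's text.
def shuffle_options(options, correct_key, rng):
    correct_text = options[correct_key]
    texts = list(options.values())
    rng.shuffle(texts)
    keys = ["A", "B", "C", "D"]
    new_options = {keys[i]: t for i, t in enumerate(texts) if i < len(keys)}
    new_correct = next(k for k, t in new_options.items() if t == correct_text)
    return new_options, new_correct

opts = {"A": "apple", "B": "banana", "C": "cherry", "D": "date"}
new_opts, new_key = shuffle_options(opts, "B", random.Random(0))
print(new_opts[new_key])  # banana, whatever the shuffle order
```

Like the original loop, this matches the correct answer by its text, so it assumes option texts are distinct; duplicate texts would make the remapped letter ambiguous.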
461
+ @tool("grade_quiz")
462
+ def grade_quiz(quiz_id: str, answers_json: str) -> str:
463
+ """Grade quiz answers by quiz_id and answers_json. Returns score + details as structured JSON.
464
+ Also returns 'file_path' and 'question' text for further processing."""
465
+
466
+ def _run():
467
+ if quiz_id not in QUIZ_STORE:
468
+ return ToolError(error="Unknown quiz_id.").model_dump()
469
+
470
+ try:
471
+ cleaned_json = clean_json_input(answers_json)
472
+ submitted_raw = json.loads(cleaned_json)
473
+ except json.JSONDecodeError:
474
+ # Fallback
475
+ match = re.search(r"(\{.*\})", answers_json, re.DOTALL)
476
+ if match:
477
+ try:
478
+ submitted_raw = json.loads(match.group(1))
479
+ except:
480
+ return ToolError(
481
+ error="answers_json is not valid JSON"
482
+ ).model_dump()
483
+ else:
484
+ return ToolError(error="answers_json is not valid JSON").model_dump()
485
+
486
+ try:
487
+ args = GradeQuizArgs(quiz_id=quiz_id, answers=submitted_raw)
488
+ except ValidationError as ve:
489
+ return ToolError(
490
+ error="Invalid answers_json", details=ve.errors()
491
+ ).model_dump()
492
+
493
+ stored_data = QUIZ_STORE[args.quiz_id]
494
+ questions = stored_data["questions"]
495
+ file_path = stored_data.get("file_path")
496
+
497
+ total = len(questions)
498
+ score = 0
499
+ details = []
500
+
501
+ for q in questions:
502
+ qid = q["qid"]
503
+ correct = q["correct"]
504
+ question_text = q.get("question", "")
505
+
506
+ your = (args.answers.get(qid) or "").strip().upper()
507
+ is_correct = your == correct
508
+ score += 1 if is_correct else 0
509
+
510
+ details.append(
511
+ {
512
+ "qid": qid,
513
+ "question": question_text, # Added for Agent context
514
+ "is_correct": is_correct,
515
+ "your_answer": your,
516
+ "correct_answer": correct, # NOTE: returned to tutor; OK for feedback
517
+ "explanation": q.get("explanation", "") or "",
518
+ "supporting_context": q.get("supporting_context", "") or "",
519
+ }
520
+ )
521
+
522
+ percentage = round((score / total) * 100, 2) if total else 0.0
523
+
524
+ return GradeQuizResult(
525
+ quiz_id=args.quiz_id,
526
+ score=score,
527
+ total=total,
528
+ percentage=percentage,
529
+ file_path=file_path,
530
+ details=details,
531
+ ).model_dump()
532
+
533
+ try:
534
+ out = safe_tool_call("grade_quiz", _run)
535
+ return json.dumps(out, ensure_ascii=False)
536
+ except Exception as e:
537
+ return json.dumps(
538
+ ToolError(error="grade_quiz failed", details=type(e).__name__).model_dump(),
539
+ ensure_ascii=False,
540
+ )
541
+
542
+
543
+ # ============================================================
544
+ # CrewAI setup
545
+ # ============================================================
546
+
547
+ llm = LLM(
548
+ model="gpt-4o-mini",
549
+ api_key=os.getenv("OPENAI_API_KEY"),
550
+ temperature=DETERMINISTIC_TEMPERATURE,
551
+ )
552
+
553
+ manager = Agent(
554
+ role="Manager (Router)",
555
+ goal=(
556
+ "Route user request to the correct specialist co-worker."
557
+ " Pass ALL user constraints (line count, "
558
+ "paragraph count, language, etc.) to the specialist."
559
+ ),
560
+ backstory=(
561
+ "You are a routing agent. You HAVE specialist co-workers: "
562
+ "Summarizer, Quiz Maker, and Tutor. "
563
+ "Your ONLY job is to delegate the task to the right co-worker "
564
+ "using your delegation tool. "
565
+ "NEVER answer the user yourself. NEVER use internal knowledge. "
566
+ "Always forward the FULL user request including any constraints."
567
+ ),
568
+ allow_delegation=True,
569
+ llm=llm,
570
+ verbose=True,
571
+ )
572
+
573
+ summarizer = Agent(
574
+ role="Summarizer",
575
+ goal=(
576
+ "Produce a summary grounded strictly in "
577
+ "context_chunks from process_file. STRICTLY "
578
+ "follow any user constraints on length, "
579
+ "number of lines, paragraphs, or format."
580
+ ),
581
+ backstory=(
582
+ "Call process_file(mode=summarize) first. "
583
+ "Summarize ONLY from context_chunks. "
584
+ "If the user specifies constraints like "
585
+ "'3 lines', '2 paragraphs', 'short', or "
586
+ "'detailed', you MUST follow them exactly. "
587
+ "Use bullet points (- or *) for lists instead of numbering. "
588
+ "No outside knowledge."
589
+ ),
590
+ tools=[process_file],
591
+ llm=llm,
592
+ verbose=True,
593
+ )
594
+
595
+ quizzer = Agent(
596
+ role="Quiz Maker",
597
+ goal="Generate EXACTLY the number of multiple-choice questions requested by the user, grounded strictly in process_file context.",
598
+ backstory=(
599
+ "STEP 1: Extract the EXACT number of questions from user request (e.g., '3 questions' = 3, default = 5).\n"
600
+ "STEP 2: Call process_file(mode=quiz) with file_path. Create ONLY that exact number of MCQs A-D from context_chunks.\n"
601
+ "STEP 3: Build quiz_package_json with absolute 'file_path' and correct answers, call store_quiz.\n"
602
+ 'Ensure VALID JSON: {"file_path": "...", "questions": [...]}. CRITICAL: Match requested count exactly. Never reveal answers.'
603
+ ),
604
+ tools=[process_file, store_quiz],
605
+ llm=llm,
606
+ verbose=True,
607
+ )
+
+ tutor = Agent(
+     role="Tutor",
+     goal="Grade quizzes and provide intelligent explanations for errors.",
+     backstory=(
+         "You are an expert Tutor. When asked to grade a quiz:\n"
+         "1. Call 'grade_quiz' to get the base results.\n"
+         "2. For every INCORRECT answer, you MUST explain WHY it is wrong:\n"
+         "   - Use the 'question' text and 'file_path' from the result to call 'process_file' (mode='explain', query=question).\n"
+         "   - REWRITE the 'explanation' field in the JSON detail for that question with your new explanation.\n"
+         "   - Use bullet points for any lists in your explanations.\n"
+         "3. Return the fully updated JSON object."
+     ),
+     tools=[process_file, grade_quiz],
+     llm=llm,
+     verbose=True,
+ )
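The Tutor's step 2 is a merge: take `grade_quiz`'s JSON and overwrite the `explanation` field on every incorrect detail with a grounded explanation. A plain-Python sketch of that step, where the `details`/`correct`/`explanation` result shape is an assumption and `explain` stands in for the `process_file(mode='explain', query=question)` call:

```python
def enrich_grading(result: dict, explain) -> dict:
    """Rewrite 'explanation' for each incorrect answer; leave correct ones alone."""
    for detail in result.get("details", []):
        if not detail.get("correct", False):
            # 'explain' is a stand-in for process_file(mode='explain', query=question).
            detail["explanation"] = explain(detail["question"])
    return result

# Hypothetical base result as grade_quiz might return it (shape assumed).
graded = {
    "score": 1,
    "details": [
        {"question": "Q1?", "correct": True, "explanation": ""},
        {"question": "Q2?", "correct": False, "explanation": "wrong"},
    ],
}
enriched = enrich_grading(graded, lambda q: f"Grounded explanation for: {q}")
```

In the real flow the LLM performs this rewrite itself; the sketch only shows which fields change.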
+
+ task = Task(
+     description=(
+         "User request: {user_request}\n\n"
+         "Route by intent:\n"
+         "- Summary -> Summarizer\n"
+         "- Quiz -> Quiz Maker\n"
+         "- Explanation -> Tutor\n"
+         "- Grading (contains quiz_id + answers_json) -> Tutor\n\n"
+         "Guardrails:\n"
+         "- Tool outputs are structured JSON.\n"
+         "- Tools validate inputs with Pydantic.\n"
+         "- Tool calls are logged without secrets.\n"
+         "- Do not reveal hidden quiz answers during quiz generation."
+     ),
+     expected_output="Grounded response: summary OR masked quiz OR graded feedback.",
+     agent=manager,
+ )
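In the app the LLM manager performs this routing by reading the task description; a rough keyword-based illustration of the same rules (the keywords themselves are assumptions, not what the model actually matches on):

```python
def route_intent(user_request: str) -> str:
    """Mirror the routing table: grading and explanations -> Tutor,
    quiz requests -> Quiz Maker, summaries -> Summarizer."""
    text = user_request.lower()
    if "quiz_id" in text and "answers_json" in text:
        return "Tutor"        # a grading request carries both fields
    if "quiz" in text:
        return "Quiz Maker"
    if "summar" in text:      # matches 'summary' / 'summarize'
        return "Summarizer"
    return "Tutor"            # explanations and anything else
```

Note the order matters: the grading check runs first, since a grading request also contains the substring "quiz".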
+
+ crew = Crew(
+     agents=[manager, summarizer, quizzer, tutor],
+     tasks=[task],
+     process=Process.sequential,
+     verbose=True,
+ )
+
+
+ from pathlib import Path
+
+
+ def run_with_file(prompt: str, file_path: str | None = None):
+     # NOTE: read_text only suits plain-text files; PDF extraction is
+     # handled by the process_file tool, not by this helper.
+     file_text = ""
+     if file_path:
+         file_text = Path(file_path).read_text(encoding="utf-8", errors="ignore")
+
+     full_prompt = prompt
+     if file_text:
+         full_prompt += "\n\n[FILE CONTENT]\n" + file_text
+
+     return full_prompt
+
+
+ if __name__ == "__main__":
+     print(
+         run_with_file(
+             r"please give me a quiz about 3 questions from this file - file_path=C:\Users\Yaz00\OneDrive\سطح المكتب\Agent AI - Tuwaiq\week 5\Homework 1\Phase2.pdf"
+         )
+     )
+     # Example grading:
+     # print(run_with_file(r"grade this quiz_id=<PUT_ID_HERE> answers_json={\"q1\":\"A\",\"q2\":\"C\",\"q3\":\"B\"}"))