---
title: Quiz Generator V3
emoji: π
colorFrom: blue
colorTo: indigo
sdk: gradio
sdk_version: 5.32.1
app_file: app.py
pinned: false
license: apache-2.0
---
# AI Course Assessment Generator
An AI-powered tool that creates learning objectives and multiple-choice quiz questions from course materials. Supports both **automatic generation** from uploaded content and **manual entry** of learning objectives, producing fully enriched outputs with correct and incorrect answer suggestions ready for quiz generation.
---
## Features
### Tab 1 – Generate Learning Objectives
**Two modes of operation:**
- **Generate from course materials** – Upload course files and let the AI extract and generate learning objectives automatically through a multi-run, multi-stage pipeline.
- **Use my own learning objectives** – Enter your own learning objectives in a text field (one per line). The app searches the uploaded course materials for relevant source references, generates a correct answer for each objective, and produces incorrect answer options – the same full pipeline as automatic generation.
**Always-visible controls:**
- Mode selector (Generate / Use my own)
- Upload Course Materials
- Number of Learning Objectives per Run *(generate mode)* / Learning Objectives text field *(manual mode)*
- Generate Learning Objectives / Process Learning Objectives button
- Generate all button *(works in both modes)*
**Advanced Options** *(collapsible, closed by default):*
- Number of Generation Runs
- Model
- Model for Incorrect Answer Suggestions
- Temperature
**Shared capabilities (both modes):**
- All output in the same JSON format, ready to feed directly into Tab 2
- "Generate all" button runs the full end-to-end pipeline (learning objectives β quiz questions) in a single click, in either mode
### Tab 2 – Generate Questions
- Takes the learning objectives JSON produced in Tab 1 as input
- Generates multiple-choice questions with 4 options, per-option feedback, and source references
- Automatic ranking and grouping of generated questions by quality
- Outputs: ranked best-in-group questions, all grouped questions, and a human-readable formatted quiz
**Always-visible controls:**
- Learning Objectives JSON input
- Number of questions
- Generate Questions button
**Advanced Options** *(collapsible, closed by default):*
- Model
- Temperature
- Number of Question Generation Runs
### Tab 3 – Propose / Edit Question
- Load the formatted quiz from Tab 2 or upload a `.md` / `.yml` quiz file *(file upload is inside a collapsible section)*
- Review and edit questions one at a time with Previous / Accept & Next navigation
- Download the final edited quiz
---
## Generation Pipeline (Learning Objectives)
### Automatic generation mode
1. **Content extraction** – Uploads are parsed (`.vtt`, `.srt`, `.ipynb`, `.md`) and wrapped with XML source tags for full traceability
2. **Multi-run base generation** – Multiple independent runs produce candidate objectives (Bloom's taxonomy aware, one action verb, multiple-choice assessable)
3. **Correct answer generation** – A concise correct answer (~20 words) is generated for each objective from the course content
4. **Grouping & ranking** – Similar objectives are clustered; the best representative in each group is selected
5. **Incorrect answer generation** – Three plausible distractors are generated for each best-in-group objective, matching the correct answer in length, style, and complexity
6. **Iterative improvement** – Each distractor is evaluated and regenerated until it meets quality standards
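The improvement loop in step 6 can be sketched as follows. This is a minimal illustration with hypothetical `evaluate` and `regenerate` callables; the real loop lives in `learning_objective_generator/suggestion_improvement.py` and calls the LLM for both steps:

```python
def improve_distractor(distractor, evaluate, regenerate, max_rounds=3):
    """Regenerate a distractor until it passes the quality check,
    giving up after max_rounds attempts."""
    for _ in range(max_rounds):
        if evaluate(distractor):
            return distractor
        distractor = regenerate(distractor)
    return distractor
```

A round limit keeps the loop bounded even when a distractor never converges to the quality bar.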
### User-provided objectives mode
1. **Objective parsing** – Text is split by newlines; common leading labels are stripped automatically:
   - Numbered: `1.`, `2)`, `3:`
   - Lettered: `a.`, `b)`, `c:`
   - Plain (no label)
2. **Source finding** – For each objective, the LLM searches the uploaded course materials to identify the most relevant source file(s)
3. **Correct answer generation** – Same function as the automatic flow, grounded in the course content
4. **Incorrect answer generation** – Same three-distractor generation as automatic flow
5. **Iterative improvement** – Same quality improvement loop
6. All objectives are treated as best-in-group (the user has already curated them), so no grouping/filtering step is applied
**Example accepted input formats:**
```
Identify key upstream and downstream collaborators for data engineers
Identify the stages of the data engineering lifecycle
Articulate a mental framework for building data engineering solutions
```
```
1. Identify key upstream and downstream collaborators for data engineers
2. Identify the stages of the data engineering lifecycle
3. Articulate a mental framework for building data engineering solutions
```
```
a. Identify key upstream and downstream collaborators for data engineers
b. Identify the stages of the data engineering lifecycle
c. Articulate a mental framework for building data engineering solutions
```
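The label stripping that accepts all three formats above can be sketched with a single regular expression (a simplified sketch; the actual parsing code in the app may differ):

```python
import re

# Matches an optional leading numbered ("1.", "2)", "3:") or
# lettered ("a.", "b)", "c:") label followed by whitespace.
LABEL = re.compile(r"^\s*(?:\d+|[A-Za-z])[.):]\s+")

def parse_objectives(text: str) -> list[str]:
    """Split input by newlines and strip common leading labels."""
    objectives = []
    for line in text.splitlines():
        stripped = LABEL.sub("", line).strip()
        if stripped:
            objectives.append(stripped)
    return objectives
```

Lines with no label pass through unchanged, since the pattern requires a `.`, `)`, or `:` right after the label character.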
---
## Setup
### Prerequisites
- Python 3.12 (recommended) or 3.8+
- An OpenAI API key
### Installation
**Using uv (recommended):**
```bash
uv venv -p 3.12
source .venv/bin/activate # Windows: .venv\Scripts\activate
uv pip install -r requirements.txt
```
**Using pip:**
```bash
pip install -r requirements.txt
```
### Environment variables
Create a `.env` file in the project root:
```
OPENAI_API_KEY=your_api_key_here
```
---
## Running the app
```bash
python app.py
```
Opens the Gradio interface at [http://127.0.0.1:7860](http://127.0.0.1:7860).
---
## Supported file formats
| Format | Description |
|--------|-------------|
| `.vtt` | WebVTT subtitle files (timestamps stripped) |
| `.srt` | SRT subtitle files (timestamps stripped) |
| `.ipynb` | Jupyter notebooks (markdown and code cells extracted) |
| `.md` | Markdown files |
All content is wrapped with XML source tags (`<source file="filename">…</source>`) so every generated objective and question can be traced back to its origin file.
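The timestamp stripping and source tagging can be sketched as below. This is an illustrative simplification (real WebVTT timings may omit the hour field, and the actual logic lives in `ui/content_processor.py`):

```python
import re

# Subtitle cue timing lines look like "00:00:01.000 --> 00:00:04.000"
# (VTT uses "." before milliseconds, SRT uses ",").
TIMESTAMP = re.compile(r"^\d{2}:\d{2}:\d{2}[.,]\d{3}\s*-->.*$", re.MULTILINE)

def strip_timestamps(subtitle_text: str) -> str:
    """Remove cue timing lines, keeping only the spoken text."""
    return TIMESTAMP.sub("", subtitle_text)

def wrap_with_source_tag(filename: str, content: str) -> str:
    """Wrap parsed file content in an XML source tag for traceability."""
    return f'<source file="{filename}">\n{content}\n</source>'
```

Wrapping each file separately lets the model cite the originating file in every `source_reference` it emits.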
---
## Project structure
```
quiz_generator_ECM/
│
├── app.py                           # Entry point – loads .env and launches Gradio
│
├── models/                          # Pydantic data models
│   ├── learning_objectives.py       # BaseLearningObjective → LearningObjective → Grouped*
│   ├── questions.py                 # MultipleChoiceQuestion → Ranked* → Grouped*
│   ├── assessment.py                # Assessment (objectives + questions)
│   └── config.py                    # Model list and temperature availability map
│
├── prompts/                         # Reusable prompt components
│   ├── learning_objectives.py       # Bloom's taxonomy, quality standards, examples
│   ├── incorrect_answers.py         # Distractor guidelines and examples
│   ├── questions.py                 # Question and answer quality standards
│   └── all_quality_standards.py     # General quality standards
│
├── learning_objective_generator/    # Learning objective pipeline
│   ├── generator.py                 # LearningObjectiveGenerator orchestrator
│   ├── base_generation.py           # Base generation, correct answers, source finding
│   ├── enhancement.py               # Incorrect answer generation
│   ├── grouping_and_ranking.py      # Similarity grouping and best-in-group selection
│   └── suggestion_improvement.py    # Iterative distractor quality improvement
│
├── quiz_generator/                  # Question generation pipeline
│   ├── generator.py                 # QuizGenerator orchestrator
│   ├── question_generation.py       # Multiple-choice question generation
│   ├── question_improvement.py      # Question quality assessment and improvement
│   ├── question_ranking.py          # Ranking and grouping of questions
│   ├── feedback_questions.py        # Feedback-based question regeneration
│   └── assessment.py                # Assessment compilation and export
│
└── ui/                              # Gradio interface and handlers
    ├── app.py                       # UI layout, mode toggle, event wiring
    ├── objective_handlers.py        # Handlers for both objective modes + Generate all
    ├── question_handlers.py         # Question generation handler
    ├── content_processor.py         # File parsing and XML source tagging
    ├── edit_handlers.py             # Question editing flow (Tab 3)
    ├── formatting.py                # Quiz formatting for UI display
    ├── state.py                     # Global state (file contents, objectives)
    └── run_manager.py               # Run tracking and output saving
```
---
## Data models
Learning objectives progress through these stages:
```
BaseLearningObjectiveWithoutCorrectAnswer
└─ id, learning_objective, source_reference
        ↓
BaseLearningObjective
└─ + correct_answer
        ↓
LearningObjective (output of Tab 1, input to Tab 2)
└─ + incorrect_answer_options, in_group, group_members, best_in_group
```
Questions follow an equivalent progression:
```
MultipleChoiceQuestion
└─ id, question_text, options (text + is_correct + feedback),
   learning_objective_id, correct_answer, source_reference
        ↓
RankedMultipleChoiceQuestion
└─ + rank, ranking_reasoning, in_group, group_members, best_in_group
```
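The objective stages above map naturally onto inheriting Pydantic models. This is a sketch using only the field names from the diagram; the actual classes in `models/learning_objectives.py` may differ in field types and validators:

```python
from pydantic import BaseModel

class BaseLearningObjectiveWithoutCorrectAnswer(BaseModel):
    id: int
    learning_objective: str
    source_reference: str

class BaseLearningObjective(BaseLearningObjectiveWithoutCorrectAnswer):
    # Adds the ~20-word correct answer generated from course content.
    correct_answer: str

class LearningObjective(BaseLearningObjective):
    # Adds distractors and grouping metadata (output of Tab 1).
    incorrect_answer_options: list[str]
    in_group: int
    group_members: list[int]
    best_in_group: bool
```

Inheritance mirrors the pipeline: each stage only adds fields, so earlier-stage objects validate as prefixes of later ones.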
---
## Model configuration
Default model: `gpt-5.2`
Default temperature: `1.0` (ignored for models that do not support it, such as `o1`, `o3-mini`, `gpt-5`, `gpt-5.1`, `gpt-5.2`)
You can set different models for the main generation step and the incorrect answer suggestion step, e.g. to use a more creative model for distractors.
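The temperature handling can be sketched as a simple availability map. The map below is illustrative (the model names other than those listed above are hypothetical); the real map lives in `models/config.py`:

```python
# Hypothetical availability map: which models accept a temperature parameter.
TEMPERATURE_SUPPORTED = {
    "o1": False,
    "o3-mini": False,
    "gpt-5": False,
    "gpt-5.1": False,
    "gpt-5.2": False,
}

def request_kwargs(model: str, temperature: float) -> dict:
    """Build API call kwargs, dropping temperature for models
    that do not support it."""
    kwargs = {"model": model}
    if TEMPERATURE_SUPPORTED.get(model, True):
        kwargs["temperature"] = temperature
    return kwargs
```

Unknown models default to supporting temperature, so new models work without a config change.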
---
## Requirements
| Package | Version |
|---------|---------|
| Python | 3.8+ (3.12 recommended) |
| gradio | 4.19.2+ |
| pydantic | 2.8.0+ |
| openai | 1.52.0+ |
| nbformat | 5.9.2+ |
| instructor | 1.7.9+ |
| python-dotenv | 1.0.0+ |