Update README.md
tags:
- mcq
- question-generation
- entrance-exam
- cognitive-skills
- bloom-taxonomy
- lora
- transformers
---
# Physics MCQ Generator

A fine-tuned language model that generates high-quality physics multiple-choice questions for university entrance exam preparation, with customizable cognitive skill levels based on Bloom's Taxonomy.

## Model Details

### Model Description

This model is designed to generate competitive physics multiple-choice questions with accurate content, plausible distractors, and appropriate difficulty levels for entrance exam preparation. It supports four cognitive skill levels (Recall, Application, Analysis, Evaluation) and covers the major physics domains, including mechanics, electromagnetism, thermodynamics, optics, and modern physics.

- **Developed by:** [Your Name/Organization]
- **Model type:** Fine-tuned Causal Language Model
### Direct Use

This model is intended for direct use in generating physics multiple-choice questions for:

- University entrance exam preparation with varying cognitive levels
- Differentiated instruction materials
- Bloom's Taxonomy-aligned assessment creation
- Educational content creation across cognitive domains
- Tutoring and teaching assistance with skill-based questioning
### Downstream Use

The model can be integrated into:

- Educational platforms with adaptive learning paths
- Automated question bank generators with cognitive-level filtering
- Physics tutoring applications with skill-based progression
- Exam preparation software with customized difficulty curves
- Teacher tools for creating balanced assessments
### Out-of-Scope Use

- Creating medical or safety-critical content
- Replacing human physics educators entirely
- Generating content outside the physics domain
- Performing psychological or cognitive assessment
## Bias, Risks, and Limitations

- Generated questions should always be reviewed by subject-matter experts
- Limited context length (~512 tokens) may affect complex question generation
- Training data is drawn primarily from international curriculum standards
- Cognitive skill differentiation may not be perfect for all topics
### Risks

- Potential to generate incorrect physics concepts when prompted unusually
- May reflect biases present in the training data
- Should not be used for high-stakes assessment without human oversight
- Cognitive level assignments may not always match the intended complexity

### Recommendations

Users should:
- Use as a tool to assist educators, not replace them
- Disclose AI-generated content when used in educational materials
- Monitor and review outputs for accuracy and appropriateness
- Validate cognitive skill level assignments for important assessments

## How to Get Started with the Model

### Basic Usage with Cognitive Skills
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

# Load the base model, then apply the fine-tuned LoRA adapter
# (replace "base_model_name" with the checkpoint the adapter was trained on)
model = AutoModelForCausalLM.from_pretrained("base_model_name")
model = PeftModel.from_pretrained(model, "your_username/physics-mcq-generator")

tokenizer = AutoTokenizer.from_pretrained("your_username/physics-mcq-generator")
tokenizer.pad_token = tokenizer.eos_token

def generate_physics_mcq(chapter, topic, difficulty="Medium", cognitive_skill="Application"):
    """
    Generate a physics MCQ with a customizable cognitive skill level.

    Cognitive skill levels:
    - 'Recall': basic fact recall and definition questions
    - 'Application': applying concepts to solve problems
    - 'Analysis': analyzing situations and relationships
    - 'Evaluation': complex reasoning and critical evaluation
    """
    prompt = f"""### Instruction:
Generate a multiple-choice question (MCQ) for a university entrance exam in Physics.

### Input:
Subject: Physics | Chapter: {chapter} | Topic: {topic} | Difficulty: {difficulty} | Cognitive_Skill: {cognitive_skill}

### Response:
Question:"""

    # Tokenize the prompt and sample a completion
    # (generation settings below are illustrative defaults)
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        max_new_tokens=256,
        do_sample=True,
        temperature=0.7,
        pad_token_id=tokenizer.eos_token_id,
    )

    return tokenizer.decode(outputs[0], skip_special_tokens=True)

# Examples with different cognitive skills
print("🧠 Recall Question (Basic knowledge):")
mcq = generate_physics_mcq("Mechanics", "Newton's Laws", "Easy", "Recall")
print(mcq)

print("\n⚡ Application Question (Problem solving):")
mcq = generate_physics_mcq("Electromagnetism", "Ohm's Law", "Medium", "Application")
print(mcq)

print("\n🔍 Analysis Question (Complex reasoning):")
mcq = generate_physics_mcq("Thermodynamics", "First Law", "Hard", "Analysis")
print(mcq)

print("\n🎯 Evaluation Question (Critical thinking):")
mcq = generate_physics_mcq("Modern Physics", "Quantum Mechanics", "Hard", "Evaluation")
print(mcq)
```
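Because the decoded output echoes the prompt as well as the completion, downstream code often needs just the question text. A minimal helper for that, sketched under the assumption that the model follows the Alpaca-style template shown above (`extract_mcq` and the sample text are hypothetical, not part of this model's API):

```python
def extract_mcq(generated_text):
    # The decoded output includes the prompt; keep only the text
    # after the last response marker
    marker = "### Response:"
    if marker in generated_text:
        return generated_text.rsplit(marker, 1)[1].strip()
    return generated_text.strip()

# Mock generation illustrating the assumed output structure
sample = """### Instruction:
Generate a multiple-choice question (MCQ) for a university entrance exam in Physics.

### Input:
Subject: Physics | Chapter: Mechanics | Topic: Newton's Laws | Difficulty: Easy | Cognitive_Skill: Recall

### Response:
Question: Which law relates net force to acceleration?
A) First law
B) Second law
C) Third law
D) Law of gravitation"""

print(extract_mcq(sample))  # prints only the question and options
```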