---
base_model: microsoft/phi-2
library_name: peft
pipeline_tag: text-generation
tags:
- physics
- education
- mcq
- question-generation
- entrance-exam
- cognitive-skills
- bloom-taxonomy
- lora
- transformers
---
# Physics MCQ Generator
A fine-tuned language model that generates high-quality physics multiple-choice questions for university entrance exam preparation with customizable cognitive skill levels based on Bloom's Taxonomy.
## Model Details
### Model Description
This model is designed to generate competitive physics multiple-choice questions with accurate content, plausible distractors, and appropriate difficulty levels for entrance exam preparation. It supports four cognitive skill levels (Recall, Application, Analysis, Evaluation) and covers the major physics domains, including mechanics, electromagnetism, thermodynamics, optics, and modern physics.
- **Developed by:** flanara
- **Model type:** Fine-tuned Causal Language Model
- **Language(s) (NLP):** English
- **License:** MIT
- **Finetuned from model:** microsoft/phi-2
### Model Sources
- **Repository:** https://huggingface.co/flanara/physics-mcq-generator
## Uses
### Direct Use
This model is intended for direct use in generating physics multiple-choice questions for:
- University entrance exam preparation with varying cognitive levels
- Differentiated instruction materials
- Bloom's Taxonomy-aligned assessment creation
- Educational content creation across cognitive domains
- Tutoring and teaching assistance with skill-based questioning
### Downstream Use
The model can be integrated into:
- Educational platforms with adaptive learning paths
- Automated question bank generators with cognitive level filtering
- Physics tutoring applications with skill-based progression
- Exam preparation software with customized difficulty curves
- Teacher tools for creating balanced assessments
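As a sketch of the question-bank-filtering idea above (the `MCQRecord` structure and field names are illustrative assumptions, not part of this model's output or API):

```python
from dataclasses import dataclass

@dataclass
class MCQRecord:
    # Hypothetical record layout for a generated question bank;
    # field names are illustrative, not defined by the model.
    question: str
    cognitive_skill: str  # "Recall", "Application", "Analysis", or "Evaluation"
    difficulty: str       # "Easy", "Medium", or "Hard"

def filter_bank(bank, cognitive_skill=None, difficulty=None):
    """Return questions matching the requested cognitive level and difficulty."""
    return [q for q in bank
            if (cognitive_skill is None or q.cognitive_skill == cognitive_skill)
            and (difficulty is None or q.difficulty == difficulty)]

bank = [
    MCQRecord("State Newton's first law.", "Recall", "Easy"),
    MCQRecord("A 2 kg block on a frictionless plane...", "Application", "Medium"),
    MCQRecord("Compare two heat-engine cycles...", "Evaluation", "Hard"),
]
print([q.question for q in filter_bank(bank, cognitive_skill="Recall")])
```

A real question bank would persist these records (e.g. in a database) and tag each generated question with the parameters used to produce it.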
### Out-of-Scope Use
- Generating questions for high-stakes exams without expert validation
- Creating medical or safety-critical content
- Replacing human physics educators entirely
- Generating content outside physics domain
- Using for psychological or cognitive assessment
## Bias, Risks, and Limitations
### Limitations
- Performance is best on classical physics topics; may struggle with advanced quantum mechanics
- Generated questions should always be reviewed by subject matter experts
- Limited context length (~512 tokens) may affect complex question generation
- Training data primarily from international curriculum standards
- Cognitive skill differentiation may not be perfect for all topics
### Risks
- May generate physically incorrect content, particularly for unusual or adversarial prompts
- May reflect biases present in the training data
- Should not be used for high-stakes assessment without human oversight
- Cognitive level assignments may not always match intended complexity
### Recommendations
Users should:
- Always verify generated questions with physics experts
- Use as a tool to assist educators, not replace them
- Disclose AI-generated content when used in educational materials
- Monitor and review outputs for accuracy and appropriateness
- Validate cognitive skill level assignments for important assessments
## How to Get Started with the Model
### Basic Usage with Cognitive Skills
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
import torch

# Load the base model and apply the LoRA adapter
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2",
    device_map="auto",
    torch_dtype=torch.float16,
    trust_remote_code=True
)
model = PeftModel.from_pretrained(model, "flanara/physics-mcq-generator")
tokenizer = AutoTokenizer.from_pretrained("flanara/physics-mcq-generator")
tokenizer.pad_token = tokenizer.eos_token

def generate_physics_mcq(chapter, topic, difficulty="Medium", cognitive_skill="Application"):
    """
    Generate a physics MCQ with a customizable cognitive skill level.

    Cognitive skill levels:
    - 'Recall': basic fact recall and definition questions
    - 'Application': applying concepts to solve problems
    - 'Analysis': analyzing situations and relationships
    - 'Evaluation': complex reasoning and critical evaluation
    """
    prompt = f"""### Instruction:
Generate a multiple-choice question (MCQ) for a university entrance exam in Physics.
### Input:
Subject: Physics | Chapter: {chapter} | Topic: {topic} | Difficulty: {difficulty} | Cognitive_Skill: {cognitive_skill}
### Response:
Question:"""
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        outputs = model.generate(
            **inputs,
            max_new_tokens=250,
            temperature=0.7,
            do_sample=True,
            pad_token_id=tokenizer.eos_token_id
        )
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

# Examples with different cognitive skills
print("🧠 Recall Question (basic knowledge):")
print(generate_physics_mcq("Mechanics", "Newton's Laws", "Easy", "Recall"))

print("\n⚡ Application Question (problem solving):")
print(generate_physics_mcq("Electromagnetism", "Ohm's Law", "Medium", "Application"))

print("\n🔍 Analysis Question (complex reasoning):")
print(generate_physics_mcq("Thermodynamics", "First Law", "Hard", "Analysis"))

print("\n🎯 Evaluation Question (critical thinking):")
print(generate_physics_mcq("Modern Physics", "Quantum Mechanics", "Hard", "Evaluation"))
```
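Downstream code usually needs the question text, options, and answer key separately rather than one raw string. A minimal parsing sketch, assuming the model's completion follows a `Question: ... A) ... B) ... C) ... D) ... Answer: X` layout (the actual format may vary between generations, so validate before relying on it):

```python
import re

def parse_mcq(text):
    """Split raw generated text into question, options, and answer.

    Assumes a 'Question: ... A) ... D) ... Answer: X' layout (an assumption
    about this model's output, not a guarantee); returns None for any field
    it cannot find.
    """
    question = re.search(r"Question:\s*(.*?)(?=\n[A-D]\))", text, re.S)
    options = dict(re.findall(r"\n([A-D])\)\s*(.*)", text))
    answer = re.search(r"Answer:\s*([A-D])", text)
    return {
        "question": question.group(1).strip() if question else None,
        "options": options,
        "answer": answer.group(1) if answer else None,
    }

sample = """Question: Which quantity is conserved in an elastic collision?
A) Only momentum
B) Only kinetic energy
C) Both momentum and kinetic energy
D) Neither
Answer: C"""
print(parse_mcq(sample))
```

Generations that fail to parse can simply be discarded and regenerated, which doubles as a cheap quality filter.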