---
title: Math-MCQ-Generator-v1
colorFrom: blue
colorTo: purple
sdk: gradio
sdk_version: 4.0.0
app_file: app.py
pinned: false
license: mit
tags:
- text-generation
- mathematics
- mcq-generation
- education
- fine-tuned
- deepseek-math
---
# Math-MCQ-Generator-v1
## Model Description
This is a fine-tuned version of `deepseek-ai/deepseek-math-7b-instruct` specialized for generating high-quality mathematics multiple choice questions (MCQs). The model has been trained using QLoRA (Quantized Low-Rank Adaptation) to efficiently adapt the base model for educational content generation.
## Capabilities
- **Subject**: Mathematics
- **Question Types**: Multiple Choice Questions (MCQs)
- **Topics**: Applications of Trigonometry, Conic Sections, and more
- **Difficulty Levels**: Easy, Medium, Hard
- **Cognitive Skills**: Recall, Direct Application, Pattern Recognition, Strategic Reasoning, Trap Aware
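These attributes map directly onto the fields of the prompt format shown in the Usage section. A small helper (illustrative only, not part of the released package) can assemble them:

```python
def build_prompt(chapter, topics, difficulty, skill):
    """Assemble the instruction prompt for the fine-tuned model.

    Field names and ordering mirror the prompt shown in the Usage
    section; the helper itself is a convenience sketch.
    """
    return (
        "### Instruction:\n"
        "Generate a math MCQ similar in style to the provided examples.\n"
        "### Input:\n"
        f"chapter: {chapter}\n"
        f"topics: {topics}\n"
        f"Difficulty: {difficulty}\n"
        f"Cognitive Skill: {skill}\n"
        "### Response:\n"
    )
```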
## Training Information
- **Base Model**: `deepseek-ai/deepseek-math-7b-instruct`
- **Training Method**: QLoRA (4-bit quantization)
- **Dataset Size**: 1519 examples
- **Training Epochs**: 5
- **Final Loss**: ~0.20
- **Training Date**: 2025-09-03
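The QLoRA setup above can be reproduced along the following lines. This is a sketch of the standard 4-bit NF4 recipe; the LoRA rank, alpha, and target modules shown here are illustrative defaults, not the exact values used in training:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit NF4 quantization of the frozen base model (the "Q" in QLoRA)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    "deepseek-ai/deepseek-math-7b-instruct",
    quantization_config=bnb_config,
    device_map="auto",
)

# Trainable low-rank adapter; hyperparameters below are assumptions
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
```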
## Usage
### Via Python API
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

# Load the base model and apply the fine-tuned LoRA adapter
base_model = AutoModelForCausalLM.from_pretrained(
    "deepseek-ai/deepseek-math-7b-instruct",
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, "danxh/math-mcq-generator-v1")
tokenizer = AutoTokenizer.from_pretrained("danxh/math-mcq-generator-v1")

# Generate an MCQ
prompt = '''### Instruction:
Generate a math MCQ similar in style to the provided examples.
### Input:
chapter: Applications of Trigonometry
topics: ['Heights and Distances']
Difficulty: medium
Cognitive Skill: direct_application
### Response:
'''
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=300,
    do_sample=True,  # required for temperature to take effect
    temperature=0.7,
)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
```
## Performance
The model demonstrates strong performance in generating contextually appropriate mathematics MCQs with:
- Proper question formatting
- Relevant multiple choice options
- Appropriate difficulty scaling
- Subject-matter accuracy
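If the generated question is needed as structured data rather than raw text, a small parser can split it into its parts. The `A)`–`D)` option labels and `Answer:` line assumed here are illustrative; verify them against the model's actual output format:

```python
import re

def parse_mcq(text):
    """Split a generated MCQ into question stem, options, and answer key.

    Assumes options labeled A)-D) and a trailing 'Answer: X' line; this
    layout is an assumption about the model's output, not a guarantee.
    """
    answer_match = re.search(r"Answer:\s*([A-D])", text)
    options = dict(re.findall(r"([A-D])\)\s*(.+)", text))
    question = text.split("A)")[0].strip()
    return {
        "question": question,
        "options": options,
        "answer": answer_match.group(1) if answer_match else None,
    }
```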
## License
MIT License - Feel free to use, modify, and distribute.