---
title: Math-MCQ-Generator-v1
emoji: 🧮
colorFrom: blue
colorTo: purple
sdk: gradio
---

# Math-MCQ-Generator-v1

## Model Description

This is a fine-tuned version of `deepseek-ai/deepseek-math-7b-instruct` specialized for generating high-quality mathematics multiple choice questions (MCQs). The model has been trained using QLoRA (Quantized Low-Rank Adaptation) to efficiently adapt the base model for educational content generation.

## Capabilities

- **Subject**: Mathematics
- **Question Types**: Multiple Choice Questions (MCQs)
- **Difficulty Levels**: Easy, Medium, Hard
- **Cognitive Skills**: Recall, Direct Application, Pattern Recognition, Strategic Reasoning, Trap Aware
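
The difficulty levels and cognitive skills listed above can be combined into a generation prompt. The helper below is a hypothetical sketch; the exact instruction template the model was trained on is not shown in this README:

```python
# Hypothetical prompt builder over the documented capability fields.
# The wording of the instruction is illustrative, not the training template.
DIFFICULTIES = {"Easy", "Medium", "Hard"}
SKILLS = {"Recall", "Direct Application", "Pattern Recognition",
          "Strategic Reasoning", "Trap Aware"}

def build_prompt(topic: str, difficulty: str, skill: str) -> str:
    """Build an instruction-style prompt for MCQ generation."""
    if difficulty not in DIFFICULTIES:
        raise ValueError(f"unknown difficulty: {difficulty}")
    if skill not in SKILLS:
        raise ValueError(f"unknown skill: {skill}")
    return (
        "### Instruction:\n"
        f"Generate a {difficulty} mathematics MCQ on {topic} "
        f"targeting the '{skill}' cognitive skill.\n\n"
        "### Response:\n"
    )

print(build_prompt("quadratic equations", "Medium", "Recall"))
```

Validating the two enum-like fields up front keeps malformed requests from reaching the model.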

## Training Information

- **Base Model**: `deepseek-ai/deepseek-math-7b-instruct`
- **Training Method**: QLoRA (4-bit quantization)
- **Final Loss**: ~0.20
- **Training Date**: 2025-09-03
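
The QLoRA recipe summarized above (4-bit quantized frozen base model plus low-rank adapters) is typically configured as in the sketch below. The specific hyperparameters shown (`r`, `lora_alpha`, `target_modules`, dropout) are illustrative assumptions, not the values actually used for this training run:

```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit NF4 quantization for the frozen base model (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Low-rank adapters trained on top of the quantized base. The rank, alpha,
# and target modules below are placeholders for illustration only.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
```

Both configs would then be passed to `from_pretrained` and an SFT trainer respectively; only the small adapter weights are updated during training.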

## Usage

### Via Gradio Interface (Recommended)

Visit the Spaces page to interact with the model through a user-friendly interface.
### Via Python API
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load model
|
| 54 |
base_model = AutoModelForCausalLM.from_pretrained("deepseek-ai/deepseek-math-7b-instruct")
model = PeftModel.from_pretrained(base_model, "danxh/math-mcq-generator-v1")
tokenizer = AutoTokenizer.from_pretrained("danxh/math-mcq-generator-v1")

# Generate MCQ
|
| 59 |
prompt = '''### Instruction:
Generate one multiple choice question on basic algebra.

### Response:
'''

# The instruction text above is illustrative; tokenize and generate:
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=300, temperature=0.7)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
```
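
Once decoded, the `response` string can be split into structured fields for downstream use. The parser below is a minimal sketch that assumes the model emits `Question:`, `A)`–`D)`, and `Answer:` lines; the model's actual output layout is not documented here and may differ:

```python
import re

def parse_mcq(text: str) -> dict:
    """Parse a generated MCQ into question, options, and answer.

    Assumes lines like 'Question: ...', 'A) ...' through 'D) ...',
    and 'Answer: B' (a hypothetical format, for illustration).
    """
    question, options, answer = "", {}, ""
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("Question:"):
            question = line[len("Question:"):].strip()
        elif re.match(r"^[A-D]\)", line):
            options[line[0]] = line[2:].strip()
        elif line.startswith("Answer:"):
            answer = line[len("Answer:"):].strip()
    return {"question": question, "options": options, "answer": answer}

sample = """Question: What is 2 + 2?
A) 3
B) 4
C) 5
D) 6
Answer: B"""
parsed = parse_mcq(sample)
print(parsed["answer"])  # → B
```

Validating the parsed fields (four options present, answer key among them) is a cheap way to reject malformed generations before showing them to students.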

## Performance

The model demonstrates strong performance in generating contextually appropriate mathematics MCQs with:
- Proper question formatting
- Appropriate difficulty scaling
- Subject-matter accuracy
## License

MIT License - Feel free to use, modify, and distribute.