# MCQ_Generator_LLAMA3.2
This model is a fine-tuned version of unsloth/Llama-3.2-3B-Instruct, trained on the SciQ dataset for multiple-choice question generation.
## Model Details
- Base Model: unsloth/Llama-3.2-3B-Instruct
- Training Data: SciQ dataset
- Task: Multiple Choice Question Generation
- Training Framework: Unsloth with LoRA fine-tuning
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the fine-tuned checkpoint from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained("kenzykhaled/MCQ_Generator_LLAMA3.2")
model = AutoModelForCausalLM.from_pretrained("kenzykhaled/MCQ_Generator_LLAMA3.2")
```
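Once the model and tokenizer are loaded, questions can be generated with a standard `generate` call. A minimal sketch is below; the prompt template is an assumption (the exact format used during fine-tuning is not documented on this card), so adjust `build_prompt` to match your training template.

```python
# Hypothetical prompt template -- adjust to the format used during fine-tuning.
def build_prompt(context: str) -> str:
    return (
        "Generate a multiple-choice question with four answer options "
        "about the following passage, and mark the correct answer.\n\n"
        f"Passage: {context}\n\nQuestion:"
    )

def generate_mcq(model, tokenizer, context: str, max_new_tokens: int = 256) -> str:
    """Generate an MCQ for the given passage and return only the new text."""
    inputs = tokenizer(build_prompt(context), return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Drop the prompt tokens; keep only the generated continuation.
    new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```

Example call: `generate_mcq(model, tokenizer, "Photosynthesis converts light energy into chemical energy.")`.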
## Training Details
- LoRA rank: 16
- Training steps: 60
- Learning rate: 2e-4
- Max sequence length: 2048
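The hyperparameters above can be wired into a reproduction sketch with Unsloth. This is a configuration outline under assumptions: only LoRA rank, step count, learning rate, and sequence length come from this card; `lora_alpha`, `target_modules`, batch size, and the use of 4-bit loading are typical Unsloth defaults, not documented values.

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments

# Values from this card: max_seq_length=2048, r=16, max_steps=60, lr=2e-4.
# Everything else below is an assumed, typical Unsloth setup.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Llama-3.2-3B-Instruct",
    max_seq_length=2048,
    load_in_4bit=True,  # assumption: 4-bit base weights to fit consumer GPUs
)

model = FastLanguageModel.get_peft_model(
    model,
    r=16,               # LoRA rank (from this card)
    lora_alpha=16,      # assumption
    lora_dropout=0,
    bias="none",
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=train_dataset,  # SciQ examples, preformatted as prompts
    args=TrainingArguments(
        max_steps=60,             # from this card
        learning_rate=2e-4,       # from this card
        per_device_train_batch_size=2,  # assumption
        output_dir="outputs",
    ),
)
trainer.train()
```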