# MCQ_Generator_LLAMA3.2

This model is a fine-tuned version of unsloth/Llama-3.2-3B-Instruct, trained on the SciQ dataset for multiple-choice question generation.

## Model Details

- Base Model: unsloth/Llama-3.2-3B-Instruct
- Training Data: SciQ dataset
- Task: Multiple-choice question generation
- Training Framework: Unsloth with LoRA fine-tuning

## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("kenzykhaled/MCQ_Generator_LLAMA3.2")
model = AutoModelForCausalLM.from_pretrained("kenzykhaled/MCQ_Generator_LLAMA3.2")
```
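The prompt format used during fine-tuning is not documented in this card, so the sketch below is an assumption: it uses the standard Llama 3 Instruct chat template (which `tokenizer.apply_chat_template` would also produce for this base model) and a hypothetical instruction wording. Adjust both to match how the model was actually trained.

```python
def build_prompt(passage: str) -> str:
    """Build a Llama 3 Instruct-style chat prompt asking for an MCQ.

    The instruction wording is a hypothetical example; the exact
    prompt used during fine-tuning is not documented in this card.
    """
    instruction = (
        "Generate a multiple-choice question with four options "
        "and indicate the correct answer, based on this passage:\n\n"
        + passage
    )
    return (
        "<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\n"
        + instruction
        + "<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    )


def generate_mcq(model, tokenizer, passage: str, max_new_tokens: int = 256) -> str:
    """Generate an MCQ with the model and tokenizer loaded as shown above."""
    inputs = tokenizer(build_prompt(passage), return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the prompt.
    prompt_len = inputs["input_ids"].shape[1]
    return tokenizer.decode(output[0][prompt_len:], skip_special_tokens=True)
```

For example, `generate_mcq(model, tokenizer, "Water boils at 100 °C at sea level.")` would return the generated question text.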
## Training Details

- LoRA rank: 16
- Training steps: 60
- Learning rate: 2e-4
- Max sequence length: 2048