# MCQ_Generator_LLAMA3.2
This model is a fine-tuned version of unsloth/Llama-3.2-3B-Instruct, trained on the SciQ dataset for multiple-choice question (MCQ) generation.
## Model Details
- Base Model: unsloth/Llama-3.2-3B-Instruct
- Training Data: SciQ dataset
- Task: Multiple Choice Question Generation
- Training Framework: Unsloth with LoRA fine-tuning
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("kenzykhaled/MCQ_Generator_LLAMA3.2")
model = AutoModelForCausalLM.from_pretrained("kenzykhaled/MCQ_Generator_LLAMA3.2")
```
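The exact prompt template used during fine-tuning is not documented in this card, so the helper below is only a sketch of how an input might be formatted for SciQ-style MCQ generation (a passage in, one question with four options out). The function name `build_mcq_prompt` and the instruction wording are assumptions, not the template the model was trained on.

```python
# Hypothetical prompt builder for MCQ generation; the actual template used
# during fine-tuning is not documented in this card, so this format is an
# assumption for illustration only.
def build_mcq_prompt(passage: str) -> str:
    """Wrap a source passage in an instruction asking for one MCQ."""
    instruction = (
        "Generate a multiple-choice question from the passage below. "
        "Provide four options (A-D) and mark the correct answer."
    )
    return f"{instruction}\n\nPassage:\n{passage}\n\nQuestion:"

prompt = build_mcq_prompt("Water boils at 100 degrees Celsius at sea level.")
```

The resulting string would then be tokenized and passed to the model in the usual way, e.g. `inputs = tokenizer(prompt, return_tensors="pt")` followed by `model.generate(**inputs, max_new_tokens=256)`.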
## Training Details
- LoRA rank: 16
- Training steps: 60
- Learning rate: 2e-4
- Max sequence length: 2048
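The hyperparameters above fit Unsloth's standard LoRA workflow; the sketch below shows how such a run could be configured. It is an assumption, not the author's actual training script: the `target_modules` list, `lora_alpha`, and 4-bit loading are not stated in this card.

```python
# Hedged sketch of the fine-tuning setup, assuming Unsloth's standard LoRA
# workflow. Values marked "from the card" come from the list above; the rest
# are assumptions. Requires a GPU and the unsloth package.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Llama-3.2-3B-Instruct",
    max_seq_length=2048,  # from the card
    load_in_4bit=True,    # assumption: common Unsloth default
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,             # LoRA rank, from the card
    lora_alpha=16,    # assumption: not stated in the card
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
)
# Training would then run for max_steps=60 with learning_rate=2e-4
# (per the values above), e.g. via trl's SFTTrainer.
```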