# MCQ Generation Model

This model is fine-tuned on the RACE dataset to generate multiple-choice questions. It is based on Mistral-Nemo-Base-2407 and was trained with Unsloth optimizations.
## Model Details
- Base Model: unsloth/Mistral-Nemo-Base-2407
- Task: Multiple Choice Question Generation
- Training Data: RACE dataset
- Optimization: unsloth LoRA fine-tuning
## Usage
```python
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained("kenzykhaled/Question_generator_Mistral")

# Load the LoRA adapter together with its base model
# (4-bit loading requires the bitsandbytes package)
model = AutoPeftModelForCausalLM.from_pretrained(
    "kenzykhaled/Question_generator_Mistral",
    device_map="auto",
    load_in_4bit=True,
)

# Prepare your input
text = """
Generate a multiple-choice question (MCQ) based on the passage, provide options, and indicate the correct option.
Passage: [Your passage here]
"""

# Generate MCQ
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
```
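The prompt wording matters: the model was fine-tuned on this instruction format, so reusing it verbatim gives the best results. A minimal helper for building the prompt (the `build_prompt` name is ours for illustration, not part of the released model's API):

```python
def build_prompt(passage: str) -> str:
    """Wrap a passage in the MCQ-generation instruction format.

    The instruction wording matches the usage snippet above;
    `build_prompt` itself is an illustrative helper only.
    """
    return (
        "Generate a multiple-choice question (MCQ) based on the passage, "
        "provide options, and indicate the correct option.\n"
        f"Passage: {passage}\n"
    )

print(build_prompt("The Nile is the longest river in Africa."))
```

The returned string can be passed straight to the tokenizer in place of the hard-coded `text` variable.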
## Training Details
- LoRA rank: 16
- Target modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
- Training dataset: RACE (all)
- Training framework: unsloth + transformers
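As a rough sanity check on adapter size: LoRA with rank r adds r · (d_in + d_out) trainable parameters per adapted weight matrix. A small sketch of that arithmetic (the 4096×4096 projection shape below is a placeholder, not the actual Mistral-Nemo layer dimensions):

```python
def lora_param_count(d_in: int, d_out: int, rank: int) -> int:
    # LoRA factorizes the weight update as B @ A, with
    # A: (rank, d_in) and B: (d_out, rank), so the trainable
    # parameter count is rank * d_in + d_out * rank.
    return rank * (d_in + d_out)

# Placeholder shape for one square projection matrix at rank 16;
# real per-module shapes in Mistral-Nemo differ.
print(lora_param_count(4096, 4096, rank=16))  # 131072
```

Summing this over all seven target modules in every transformer layer gives the adapter's total trainable parameter count, which is a small fraction of the 12B-parameter base model.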