# MNLP_M3_mcqa_model_optimized

This model is a fine-tuned version of [Qwen/Qwen3-0.6B-Base](https://huggingface.co/Qwen/Qwen3-0.6B-Base) for Multiple-Choice Question Answering (MCQA).
## Training Details
- Base model: Qwen/Qwen3-0.6B-Base
- Task: Multiple Choice Question Answering
- Training dataset: aymanbakiri/MNLP_M2_mcqa_dataset
- Training approach: Multi-stage training with curriculum learning
- LoRA config: r=128, alpha=256
- Training stages:
  - Stage 1: 4 epochs, LR = 8e-5
  - Stage 2: 6 epochs, LR = 3e-5
  - Stage 3: 2 epochs, LR = 1e-5
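For reference, the three-stage curriculum above can be written out as a simple epoch schedule. This is an illustrative sketch only; the actual training script is not part of this card, and the helper name is hypothetical:

```python
# Hypothetical sketch of the staged curriculum described above;
# the real training script is not included in this model card.
STAGES = [
    {"stage": 1, "epochs": 4, "lr": 8e-5},
    {"stage": 2, "epochs": 6, "lr": 3e-5},
    {"stage": 3, "epochs": 2, "lr": 1e-5},
]

def epoch_schedule(stages):
    """Yield (global_epoch, stage, lr) tuples across all stages, in order."""
    global_epoch = 0
    for s in stages:
        for _ in range(s["epochs"]):
            global_epoch += 1
            yield global_epoch, s["stage"], s["lr"]

schedule = list(epoch_schedule(STAGES))
# 12 epochs total: 4 at 8e-5, then 6 at 3e-5, then 2 at 1e-5
```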
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

# Load the base model, then attach the LoRA adapter on top of it.
base_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-0.6B-Base")
model = PeftModel.from_pretrained(base_model, "MNLP_M3_mcqa_model_optimized")
tokenizer = AutoTokenizer.from_pretrained("MNLP_M3_mcqa_model_optimized")

# Example inference
question = "What is the capital of France?"
choices = ["London", "Berlin", "Paris", "Madrid"]
prompt = (
    f"Question: {question}\n"
    f"A) {choices[0]}\nB) {choices[1]}\nC) {choices[2]}\nD) {choices[3]}\n"
    "Answer:"
)
inputs = tokenizer(prompt, return_tensors="pt")
# do_sample=True is required for temperature to take effect
outputs = model.generate(**inputs, max_new_tokens=5, do_sample=True, temperature=0.1)
answer = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
).strip()
```
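The generated completion may contain extra text around the choice letter (e.g. `" C) Paris"`). A small helper can normalize it to a single letter; this function is an illustrative addition, not part of the model's own tooling:

```python
import re

def extract_choice(completion: str):
    """Return the first standalone A-D letter in the generated text, or None."""
    match = re.search(r"\b([ABCD])\b", completion)
    return match.group(1) if match else None

print(extract_choice(" C) Paris"))  # prints C
```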