# LLaMA 3.2 1B - Distractor Generation Model

This model generates distractors (incorrect but plausible answer choices) for multiple-choice questions. It is particularly useful in educational applications where high-quality distractors are needed.
## Model Details

- **Model Name:** `llama3.2_1B_distractors_generation`
- **Architecture:** LLaMA 3.2 1B
- **Developer:** BirendraSharma
- **Use Case:** MCQ distractor generation
- **License:** [More Information Needed]
## How to Use the Model

Load the model and tokenizer with `transformers`, then generate distractors as shown in the example below.

### Installation

Ensure you have `transformers` and `torch` installed:

```bash
pip install transformers torch
```
### Loading the Model & Tokenizer

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig
import torch

# Load the fine-tuned model and tokenizer from the Hugging Face Hub
model_path = "BirendraSharma/llama3.2_1B_distractors_generation"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,
    device_map="auto",
)
```
```python
def generate_distractors(context, question, answer, instruction, model, tokenizer, max_seq_length=1024):
    """Generates distractors for a given question-answer pair."""
    model.eval()
    prompt = f"""{instruction}
Context: '{context}'
Question: '{question}'
Answer: '{answer}'.
Provide only three distractors as a comma-separated list. Do not include explanations, commentary, or additional text."""
    inputs = tokenizer(
        prompt, return_tensors="pt", padding=True, truncation=True, max_length=max_seq_length
    ).to(model.device)  # place inputs on the same device as the model
    generation_config = GenerationConfig(
        max_new_tokens=128,
        use_cache=True,
        do_sample=True,  # sampling must be enabled for temperature to take effect
        temperature=0.7,
    )
    outputs = model.generate(**inputs, generation_config=generation_config)
    # Decode the full sequence, then strip the prompt to keep only the generated text
    distractors_text = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0][len(prompt):].strip()
    distractors = [d.strip() for d in distractors_text.split(",") if d.strip()]
    return ", ".join(distractors[:3])
```
### Example Usage

```python
context = "In physics, the concept of jerk is used to describe the rate of change of acceleration."
question = "What is the physical quantity that jerk is the rate of change of?"
answer = "Acceleration"
instruction = "Generate plausible but incorrect answer choices."

distractors = generate_distractors(context, question, answer, instruction, model, tokenizer)
print("Distractors:", distractors)
# Example output: velocity, momentum, force
```
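Because the model samples with a temperature, the raw output can occasionally echo the correct answer or repeat a distractor. A small, model-independent post-processing step can filter these; `clean_distractors` below is a hypothetical helper sketched for illustration, not part of the model's API:

```python
def clean_distractors(distractors_csv: str, answer: str) -> list[str]:
    """Drop distractors that duplicate the correct answer or each other (case-insensitive)."""
    seen = set()
    cleaned = []
    for d in (part.strip() for part in distractors_csv.split(",")):
        key = d.lower()
        if d and key != answer.strip().lower() and key not in seen:
            seen.add(key)
            cleaned.append(d)
    return cleaned[:3]

# Example: the raw output repeats one distractor and echoes the answer
print(clean_distractors("velocity, Acceleration, velocity, momentum, force", "Acceleration"))
# → ['velocity', 'momentum', 'force']
```

In practice you may also want to reject empty strings or distractors longer than the answer by some margin, depending on your question format.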
## Dataset Used for Training

[More Information Needed]
## Training Details

- **Hardware Used:** T4 GPU (Google Colab)
- **Fine-tuned from:** LLaMA 3.2 1B
- **Training Framework:** `transformers` with TRL's `SFTTrainer`
## Evaluation Metrics

The model was evaluated using BLEU and ROUGE scores, comparing generated distractors against reference distractors.
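The card does not specify which scoring implementation was used; libraries such as Hugging Face `evaluate` or `rouge_score` are common choices. Purely as an illustration, a simplified unigram-overlap score in the spirit of ROUGE-1 recall (set-based, so repeated tokens are not counted with clipping as in the full metric) can be computed in plain Python:

```python
def rouge1_recall(candidate: str, reference: str) -> float:
    """Fraction of reference unigrams that also appear in the candidate (simplified ROUGE-1 recall)."""
    cand_tokens = set(candidate.lower().split())
    ref_tokens = reference.lower().split()
    if not ref_tokens:
        return 0.0
    hits = sum(1 for tok in ref_tokens if tok in cand_tokens)
    return hits / len(ref_tokens)

score = rouge1_recall("velocity momentum force", "velocity momentum speed")
print(f"{score:.2f}")  # 2 of 3 reference tokens matched
```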
## Citation

If you use this model, please cite:

```bibtex
@article{llama3.2_distractor_gen,
  title={Distractor Generation using LLaMA 3.2},
  author={Birendra Sharma},
  year={2025},
  publisher={Hugging Face Model Hub}
}
```

For more details, visit the Hugging Face model page.