# MCQ Generation Model

This model is fine-tuned on the RACE dataset to generate multiple-choice questions. It is based on unsloth/Mistral-Nemo-Base-2407 and was trained with Unsloth's LoRA optimizations.

## Model Details
- Base Model: unsloth/Mistral-Nemo-Base-2407
- Task: Multiple Choice Question Generation
- Training Data: RACE dataset
- Optimization: unsloth LoRA fine-tuning

## Usage
```python
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained("kenzykhaled/Question_generator_Mistral")

# Load model
model = AutoPeftModelForCausalLM.from_pretrained(
    "kenzykhaled/Question_generator_Mistral",
    device_map="auto",
    load_in_4bit=True
)

# Prepare your input
text = """
Generate a multiple-choice question (MCQ) based on the passage, provide options, and indicate the correct option.

Passage: [Your passage here]
"""

# Generate MCQ (move inputs to the same device the model was loaded on)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
```

## Training Details
- LoRA rank: 16
- Target modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
- Training dataset: RACE (all)
- Training framework: unsloth + transformers
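
The hyperparameters above map onto a standard PEFT LoRA configuration. A minimal sketch using `peft.LoraConfig` is shown below; values not listed in this card (such as `lora_alpha` and `lora_dropout`) are assumptions, not the actual training settings:

```python
from peft import LoraConfig

# Sketch of the adapter config implied by the Training Details above.
# r and target_modules come from this card; alpha/dropout are assumed.
lora_config = LoraConfig(
    r=16,  # LoRA rank, as stated above
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    lora_alpha=16,      # assumption: not specified in this card
    lora_dropout=0.0,   # assumption: not specified in this card
    task_type="CAUSAL_LM",
)
```

A config like this would be passed to the trainer (e.g. via `FastLanguageModel.get_peft_model` in Unsloth or `get_peft_model` in PEFT) when reproducing the fine-tune.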