---
library_name: transformers
license: mit
language:
- en
base_model:
- microsoft/phi-2
---

# Model Card for ShAIkespear/Phi-2_DPO_M3_Quantized_Alt_8bit

A quantized (8-bit) LoRA-finetuned variant of **microsoft/phi-2** targeting STEM multiple-choice question answering (MCQA). The model was first trained with SFT on mixed STEM MCQA datasets, then aligned via DPO using human preference data (EPFL exam MCQAs). Finally, it was quantized to 8-bit to reduce memory and improve inference speed. 

## Model Details

### Model Description

This model adapts Phi-2 (2.78B params, 2,048 ctx) for MCQA, especially STEM. Training used LoRA adapters (rank=16, α=16, dropout=0.05) and the TRL library for SFT and DPO; checkpoints focus on adapter weights for compactness. An 8-bit quantized deployment configuration (BitsAndBytes) is provided. 
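
As a concrete reference, here is a minimal sketch of the adapter setup under the hyperparameters stated above, assuming PEFT's `LoraConfig`; the `target_modules` list is an assumption (typical attention/MLP projections for Phi-2), not confirmed by the repository.

```python
from peft import LoraConfig

# Adapter hyperparameters as stated in this card; target_modules is an
# assumption and may differ from the actual training configuration.
lora_config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "dense"],  # assumed
)
```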

* **Developed by:** ShAIkespear team
* **Shared by:** ShAIkespear team
* **Model type:** Causal decoder-only LM (Phi-2) with LoRA adapters; DPO-aligned MCQA assistant 
* **Language(s) (NLP):** English (training/eval datasets are primarily EN) 
* **License:** MIT (per repository)
* **Finetuned from model:** microsoft/phi-2 

### Model Sources 

* **Repository:** [2.8B-Phi-2-LLM-QA](https://github.com/EricSaikali/2.8B-Phi-2-LLM-QA)
* **Report:** “ShAIkespear - How to replace TAs: A comprehensive study on letting LLMs answer your questions”

## Uses

### Direct Use

* MCQA answering for STEM and general knowledge benchmarks (e.g., MMLU, OpenBookQA).
* Educational assistants/tutors for multiple-choice reasoning with short chain-of-thought style explanations in prompts. 


### Out-of-Scope Use

* High-stakes domains (medical, legal, safety-critical) without human oversight.
* Generative tasks outside the MCQA chat format may underperform (e.g., long-form reasoning or proofs).
* Any use that violates exam integrity or leaks copyrighted/confidential test content (see ethics notes). 

## Bias, Risks, and Limitations

* **STEM difficulty:** Accuracy on math/science MCQA hovers near random (~0.25, i.e., chance level for four-option questions) on several test sets, indicating limited reliability for harder STEM reasoning.
* **Alignment drift:** DPO applied after SFT can degrade letter-only answer formatting; the model sometimes generates extra content or follow-up questions.
* **Data risk:** EPFL exam-derived prompts and answers may raise confidentiality and fairness issues if reused exam content is included.

### Recommendations

* Keep a human in the loop for grading/teaching.
* Prefer balanced MCQA data; use explicit “Question / Explanation / Answer” formatting to stabilize outputs (a prompt-builder sketch follows this list).
* Apply filtering/guardrails to block harmful or exam-integrity-breaking prompts. 
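
A small helper for the structured prompt format may help keep outputs stable; this is a hypothetical utility based on the header shown in this card, and the letter labels (A, B, C, ...) are an assumption.

```python
def build_mcqa_prompt(question: str, choices: list[str], explanation: str = "") -> str:
    """Format an MCQA item with the Question / Explanation / Answer header
    used during training. Letter labels and field order are assumptions."""
    lettered = "\n".join(f"{chr(65 + i)}. {c}" for i, c in enumerate(choices))
    return (
        f"### Question: {question}\n{lettered}\n"
        f"### Explanation: {explanation}\n"
        f"### Answer:"
    )
```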

## How to Get Started with the Model

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "ShAIkespear/Phi-2_DPO_M3_Quantized_Alt_8bit"

# Load the model in 8-bit via BitsAndBytes to reduce memory use.
bnb_config = BitsAndBytesConfig(load_in_8bit=True)
tok = AutoTokenizer.from_pretrained(model_id, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", quantization_config=bnb_config
)

# Use the structured Question / Explanation / Answer header from training.
prompt = "### Question: What is 2+2?\n### Explanation: Add the integers.\n### Answer:"
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=10)
print(tok.decode(out[0], skip_special_tokens=True))
```

## Training Details

### Training Data

SFT used a mix of MathQA, OpenBookQA, ScienceQA, and TAL-SCQ5K, plus balanced/shuffled merged MCQA sets. DPO used HelpSteer and a student-curated EPFL preference dataset (~20–30k pairs, with separate subsets reserved for SFT and DPO). Items longer than 512 tokens were dropped, and large datasets were clipped to 20k samples. Splits: train 50%, test_overfit 25%, test_comparison 10%, test_quantization 15%.
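
A minimal sketch of the length filtering, clipping, and 50/25/10/15 split described above, using Hugging Face Datasets; OpenBookQA is used as a stand-in source, and the split seed and exact mechanics are assumptions.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("microsoft/phi-2")
ds = load_dataset("allenai/openbookqa", "main", split="train")

# Drop items longer than 512 tokens.
ds = ds.filter(lambda ex: len(tok(ex["question_stem"])["input_ids"]) <= 512)

# Clip large datasets to 20k samples.
ds = ds.select(range(min(20_000, len(ds))))

# train 50%, then 25% / 10% / 15% carved from the remainder.
half = ds.train_test_split(test_size=0.5, seed=0)
train, rest = half["train"], half["test"]
quarter = rest.train_test_split(test_size=0.5, seed=0)
test_overfit = quarter["train"]                        # 25% of total
tail = quarter["test"].train_test_split(test_size=0.6, seed=0)
test_comparison = tail["train"]                        # 10% of total
test_quantization = tail["test"]                       # 15% of total
```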

### Training Procedure

#### Preprocessing

All datasets were converted to a unified MCQA schema:

* **SFT format:** `id`, `subject`, `question`, `answer`/`answer_text`, `choices`.
* **DPO format:** `prompt`, `rejected`, `chosen`.

Prompts used a structured header: `### Question ... ### Explanation ... ### Answer`
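
For illustration, records in each schema might look like the following; the field names come from this card, but the values are invented for demonstration and do not come from the training data.

```python
# Hypothetical SFT record in the unified MCQA schema.
sft_example = {
    "id": "obqa-0001",
    "subject": "physics",
    "question": "What force pulls objects toward Earth?",
    "choices": ["friction", "gravity", "magnetism", "tension"],
    "answer": "B",
    "answer_text": "gravity",
}

# Hypothetical DPO record: a shared prompt with chosen/rejected completions.
dpo_example = {
    "prompt": "### Question: What force pulls objects toward Earth?\n### Explanation:",
    "chosen": " Gravity acts on all masses.\n### Answer: B",
    "rejected": " It is friction.\n### Answer: A",
}
```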

#### Training Hyperparameters

* **Regime:** Mixed precision typical for TRL (not explicitly specified); LoRA rank 16, α 16, dropout 0.05.
* **Batch sizes:** SFT train/eval = 4; DPO = 1 (OOM otherwise).
* **LR:** 1e-5 for public datasets; 1e-4 for EPFL data; cosine schedule with warmup.
* **Frameworks:** Hugging Face TRL + PEFT LoRA. 
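
A sketch of how these settings might map onto Hugging Face `TrainingArguments` (as consumed by TRL's `SFTTrainer`); `output_dir` and `warmup_ratio` are assumptions, since the report gives only the schedule type.

```python
from transformers import TrainingArguments

sft_args = TrainingArguments(
    output_dir="phi2-mcqa-sft",       # assumed
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    learning_rate=1e-5,               # 1e-4 was used for the EPFL data
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,                # warmup used; exact value assumed
)
```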

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

Per-dataset held-out test sets (see splits), plus MMLU formatted to the SFT schema. 

#### Factors

Task domain (math vs. general science vs. open-domain), data balancing, order of SFT/DPO phases. 

#### Metrics

Accuracy for MCQA; DPO choice accuracy for preference pairs. 
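
A hypothetical scorer for the MCQA accuracy metric, assuming the model emits its choice as a single letter after `### Answer:` (the exact extraction rule used in the report is not specified):

```python
import re

def mcqa_accuracy(generations: list[str], gold_letters: list[str]) -> float:
    """Letter-choice accuracy over generated answers."""
    correct = 0
    for gen, gold in zip(generations, gold_letters):
        m = re.search(r"### Answer:\s*([A-D])", gen)
        correct += bool(m) and m.group(1) == gold
    return correct / len(generations)
```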

### Results

Among several training recipes, the **balanced-then-DPO** configuration (model 8) performed best overall.

#### Summary

* Balanced MCQA SFT improved robustness.
* DPO on EPFL preferences improved alignment and EPFL-like accuracy.
* 8-bit quantization shrank memory from roughly 11 GB to 3 GB (per the report’s table), with mixed accuracy effects across tasks.
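
A quick way to check the quantized footprint, assuming the 8-bit model from the quickstart is loaded (`get_memory_footprint()` is a standard Transformers utility):

```python
# With the 8-bit model from the quickstart loaded:
print(f"{model.get_memory_footprint() / 1e9:.2f} GB")  # expect roughly 3 GB
```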



## Technical Specifications

### Model Architecture and Objective

Phi-2 transformer decoder LM (2.78B params) with next-token prediction objective; LoRA adapters for finetuning; DPO for preference alignment; 8-bit quantized runtime. 
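
The parameter count can be sanity-checked against the base model directly; this loads full-precision weights, so it needs more memory than the quantized quickstart.

```python
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("microsoft/phi-2")
print(sum(p.numel() for p in base.parameters()) / 1e9)  # ≈ 2.78
```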


### Software

Hugging Face TRL, PEFT/LoRA, Transformers; BitsAndBytes for quantization. 

## Glossary

* **MCQA:** Multiple-choice question answering.
* **SFT:** Supervised finetuning with gold answers.
* **DPO:** Direct Preference Optimization (pairwise preference alignment); see the objective below.
* **LoRA:** Low-Rank Adaptation for parameter-efficient finetuning.
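
For reference, the standard DPO objective from Rafailov et al. (2023), which TRL's `DPOTrainer` optimizes:

$$
\mathcal{L}_{\text{DPO}}(\pi_\theta; \pi_{\text{ref}}) = -\,\mathbb{E}_{(x,\, y_w,\, y_l)\sim \mathcal{D}}\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\text{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\text{ref}}(y_l \mid x)}\right)\right]
$$

where $y_w$ and $y_l$ are the chosen and rejected responses and $\beta$ controls the strength of the implicit penalty toward the reference policy $\pi_{\text{ref}}$.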