chore: update model card
README.md

---
library_name: transformers
license: mit
language:
- en
base_model:
- microsoft/phi-2
---

# Model Card for ShAIkespear/Phi-2_DPO_M3_Quantized_Alt_8bit

A quantized (8-bit) LoRA-finetuned variant of **microsoft/phi-2** targeting STEM multiple-choice question answering (MCQA). The model was first trained with SFT on mixed STEM MCQA datasets, then aligned via DPO using human preference data (EPFL exam MCQAs). Finally, it was quantized to 8-bit to reduce memory and improve inference speed.

## Model Details

### Model Description

This model adapts Phi-2 (2.78B parameters, 2,048-token context) for MCQA, especially STEM. Training used LoRA adapters (rank=16, α=16, dropout=0.05) and the TRL library for SFT and DPO; checkpoints focus on adapter weights for compactness. An 8-bit quantized deployment configuration (BitsAndBytes) is provided.

* **Developed by:** ShAIkespear team
* **Shared by:** ShAIkespear team
* **Model type:** Causal decoder-only LM (Phi-2) with LoRA adapters; DPO-aligned MCQA assistant
* **Language(s) (NLP):** English (training/eval datasets are primarily English)
* **License:** MIT (per repository)
* **Finetuned from model:** microsoft/phi-2

### Model Sources

* **Repository:** [2.8B-Phi-2-LLM-QA](https://github.com/EricSaikali/2.8B-Phi-2-LLM-QA)
* **Report:** “ShAIkerspear - How to replace TAs: A comprehensive study on letting LLMs answer your questions”

## Uses

### Direct Use

* MCQA answering for STEM and general-knowledge benchmarks (e.g., MMLU, OpenBookQA).
* Educational assistants/tutors for multiple-choice reasoning with short chain-of-thought-style explanations in prompts.

### Out-of-Scope Use

* High-stakes domains (medical, legal, safety-critical) without human oversight.
* Generative tasks outside the MCQA chat format (e.g., long-form reasoning or proofs), where the model may underperform.
* Any use that violates exam integrity or leaks copyrighted/confidential test content (see ethics notes).

## Bias, Risks, and Limitations

* **STEM difficulty:** Performance on math/science MCQA hovers near random (~0.25) on several test sets, indicating limited reliability for harder STEM reasoning.
* **Alignment drift:** DPO after SFT can affect letter-only answer formatting; the model sometimes generates extra content or follow-up questions.
* **Data risk:** EPFL exam-derived prompts/answers may raise confidentiality and fairness issues if reused exams are included.

### Recommendations

* Keep a human in the loop for grading/teaching.
* Prefer balanced MCQA data; include explicit “Question / Explanation / Answer” formatting to stabilize outputs.
* Apply filtering/guardrails to block harmful or exam-integrity-breaking prompts.

## How to Get Started with the Model

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "ShAIkespear/Phi-2_DPO_M3_Quantized_Alt_8bit"  # replace with your Hub ID

# Load the tokenizer and the model in 8-bit.
bnb_config = BitsAndBytesConfig(load_in_8bit=True)
tok = AutoTokenizer.from_pretrained(model_id, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", quantization_config=bnb_config
)

# Query the model with the structured MCQA prompt format used during training.
prompt = "### Question: What is 2+2?\n### Explanation: Add the integers.\n### Answer:"
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=10)
print(tok.decode(out[0], skip_special_tokens=True))
```
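
Note that loading in 8-bit via `BitsAndBytesConfig` additionally requires the `bitsandbytes` and `accelerate` packages (and typically a CUDA GPU) alongside `transformers`.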

## Training Details

### Training Data

Mixed SFT on MathQA, OpenBookQA, ScienceQA, and TAL-SCQ5K, plus balanced/shuffled merged MCQA sets; DPO on HelpSteer and a student-curated EPFL preference dataset (~20–30k pairs, with subsets used for SFT and DPO). Items longer than 512 tokens were dropped, and large datasets were clipped to 20k samples. Splits: train 50%, test_overfit 25%, test_comparison 10%, test_quantization 15%.
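
The exact filtering and split logic lives in the linked repository; as a rough sketch with Hugging Face `datasets` (the dataset ID, whitespace-token length proxy, and seed below are illustrative assumptions, not the actual pipeline):

```python
from datasets import load_dataset

# Illustrative only: dataset ID, length proxy, and seed are assumptions;
# the authoritative preprocessing lives in the linked repository.
ds = load_dataset("openbookqa", "main", split="train")

# Drop items longer than 512 tokens (approximated here with whitespace tokens)
# and clip large datasets to 20k samples.
ds = ds.filter(lambda ex: len(ex["question_stem"].split()) <= 512)
ds = ds.select(range(min(20_000, len(ds))))

# Splits: 50% train, 25% test_overfit, 10% test_comparison, 15% test_quantization.
first = ds.train_test_split(test_size=0.5, seed=0)
train, rest = first["train"], first["test"]
second = rest.train_test_split(test_size=0.5, seed=0)   # 25% / 25% of total
test_overfit, rest = second["train"], second["test"]
third = rest.train_test_split(test_size=0.6, seed=0)    # 10% / 15% of total
test_comparison, test_quantization = third["train"], third["test"]
```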

### Training Procedure

#### Preprocessing

All datasets were mapped to a unified MCQA schema. SFT records carry id, subject, question, answer/answer_text, and choices; DPO records carry prompt, rejected, and chosen. Prompts use a structured header:
`### Question ... ### Explanation ... ### Answer`
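
The exact serialization is defined in the repository; a minimal sketch of this prompt construction, assuming an illustrative nested `choices` layout and a hypothetical `build_prompt` helper:

```python
# Sketch of the prompt format described above; the nested "choices" structure
# and the build_prompt helper are hypothetical, not the repository's exact code.
def build_prompt(record: dict, explanation: str = "") -> str:
    choices = "\n".join(
        f"{label}. {text}"
        for label, text in zip(record["choices"]["label"], record["choices"]["text"])
    )
    return (
        f"### Question: {record['question']}\n{choices}\n"
        f"### Explanation: {explanation}\n"
        f"### Answer:"
    )

record = {
    "id": "demo-1",
    "subject": "math",
    "question": "What is 2+2?",
    "choices": {"label": ["A", "B", "C", "D"], "text": ["3", "4", "5", "6"]},
    "answer": "B",
    "answer_text": "4",
}

# An SFT target appends the gold letter; a DPO pair keeps the same prompt with
# chosen/rejected completions.
sft_example = build_prompt(record) + f" {record['answer']}"
dpo_example = {"prompt": build_prompt(record), "chosen": " B", "rejected": " C"}
```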

#### Training Hyperparameters

* **Regime:** Mixed precision typical for TRL (not explicitly specified); LoRA rank 16, α 16, dropout 0.05.
* **Batch sizes:** SFT train/eval = 4; DPO = 1 (larger batches ran out of memory).
* **Learning rate:** 1e-5 for public datasets; 1e-4 for EPFL data; cosine schedule with warmup.
* **Frameworks:** Hugging Face TRL + PEFT (LoRA).
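
A hedged sketch of how these settings map onto PEFT and `transformers` configuration objects (the warmup ratio and epoch count are not stated in the report and are placeholders; the TRL `SFTTrainer`/`DPOTrainer` wiring is omitted because argument names vary across TRL versions):

```python
from peft import LoraConfig
from transformers import TrainingArguments

# LoRA adapter configuration matching the card: rank 16, alpha 16, dropout 0.05.
peft_config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

# Schedule matching the card: cosine decay with warmup; 1e-5 for the public MCQA
# mix (1e-4 was used for EPFL data); SFT batch size 4 (DPO used batch size 1).
# warmup_ratio and num_train_epochs are placeholders, not values from the report.
training_args = TrainingArguments(
    output_dir="phi2-mcqa-sft",
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    learning_rate=1e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    num_train_epochs=1,
)
```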

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

Per-dataset held-out test sets (see splits above), plus MMLU formatted to the SFT schema.

#### Factors

Task domain (math vs. general science vs. open-domain), data balancing, and the order of SFT/DPO phases.

#### Metrics

Accuracy for MCQA; DPO choice accuracy for preference pairs.
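
For reference, a minimal sketch of the MCQA accuracy computation (the letter-extraction convention here is an assumption, not the report's exact parser):

```python
import re

# Extract the first answer letter from a generation and compare with the gold letter.
def first_letter(text: str) -> str | None:
    match = re.search(r"\b([A-E])\b", text)
    return match.group(1) if match else None

def mcqa_accuracy(generations: list[str], gold_letters: list[str]) -> float:
    hits = sum(first_letter(g) == y for g, y in zip(generations, gold_letters))
    return hits / len(gold_letters)

print(mcqa_accuracy(["Answer: B", "The answer is C."], ["B", "D"]))  # 0.5
```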

### Results

Among several training recipes, the **balanced-then-DPO** configuration (model 8) performed best overall.

#### Summary

* Balanced MCQA SFT improved robustness.
* DPO on EPFL preferences improved alignment and EPFL-like accuracy.
* 8-bit quantization shrank memory (roughly 11 GB → 3 GB in the report’s table) with mixed accuracy effects across tasks.

## Technical Specifications

### Model Architecture and Objective

Phi-2 transformer decoder LM (2.78B parameters) with a next-token prediction objective; LoRA adapters for finetuning; DPO for preference alignment; 8-bit quantized runtime.

### Software

Hugging Face TRL, PEFT/LoRA, and Transformers; BitsAndBytes for quantization.

## Glossary

* **MCQA:** Multiple-choice question answering.
* **SFT:** Supervised finetuning with gold answers.
* **DPO:** Direct Preference Optimization (pairwise preference alignment).
* **LoRA:** Low-Rank Adaptation for parameter-efficient finetuning.