DeryFerd committed on
Commit 5175ed3 · verified · 1 Parent(s): 4d770c4

Update README.md

Files changed (1):
  1. README.md +90 -152
README.md CHANGED
---
library_name: transformers
tags:
- peft
- qlora
- knowledge-distillation
- gsm8k
- metamathqa
- phi-2
- math
datasets:
- meta-math/MetaMathQA
- openai/gsm8k
language:
- en
base_model:
- microsoft/phi-2
pipeline_tag: text-generation
---
# Model Card for Qwen2.5-Math-Instruct-Distill-Phi2-2.5K-Mixed

This model is a version of `microsoft/phi-2` fine-tuned with knowledge distillation. The goal was to teach the compact, efficient Phi-2 "student" model to replicate the step-by-step mathematical reasoning style of the more powerful `Qwen/Qwen2.5-Math-7B-Instruct` "teacher" model.

This is **V2** of the project, trained on a significantly larger and more diverse dataset than V1, resulting in more robust reasoning and the ability to generate correct LaTeX math notation.

## Model Details

### Model Description

This project explores **style distillation**, in which a smaller model is trained not just on correct answers but on the *process* and *format* of a larger, more capable model's output. The primary objective was to transfer the teacher's verbose, step-by-step reasoning methodology, including its use of LaTeX, to the student model.

The model was fine-tuned using the QLoRA method for high memory efficiency, making it possible to train on consumer-grade hardware.

- **Developed by:** Dery Ferd
- **Model type:** Causal Decoder-Only Transformer
- **Language(s) (NLP):** English
- **License:** MIT
- **Finetuned from model:** `microsoft/phi-2`
- **Demo:** [Link to your Gradio Demo if you deploy it on Spaces]
## How to Get Started with the Model

Use the code below to load the fine-tuned adapter on top of the base model and run inference.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repository that hosts the LoRA adapter and tokenizer
repo_id = "DeryFerd/Qwen2.5-Math-Instruct-Distill-Phi2-2.5K-Mixed"
base_model_id = "microsoft/phi-2"

# Load the base model
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,
)

# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained(repo_id)

# Attach the LoRA adapter to the base model (the weights are not merged here)
model = PeftModel.from_pretrained(base_model, repo_id)
model.eval()

# --- Run Inference ---
instruction = "A right-angled triangle has two shorter sides with lengths of 8 cm and 15 cm. What is the length of the longest side (the hypotenuse)? Use the Pythagorean theorem (a^2 + b^2 = c^2) to solve it."
prompt = f"Instruct: {instruction.strip()}\nOutput:"

inputs = tokenizer(prompt, return_tensors="pt", return_attention_mask=False).to(model.device)

with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=300, pad_token_id=tokenizer.eos_token_id)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)

# Everything after "Output:" is the model's answer
final_answer = response.split("Output:")[1].strip()
print(final_answer)
```
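If you prefer a standalone checkpoint without the PEFT wrapper (for export or benchmarking), PEFT can fold the LoRA weights into the base model:

```python
# Optional: bake the adapter into the base weights for adapter-free inference
merged_model = model.merge_and_unload()
```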
## Training Details

### Training Data

The model was trained on a combined dataset of **2,500 examples** created specifically for this distillation task. The dataset mixes two sources to improve robustness and mathematical formatting:

- **2,000 examples from GSM8K:** a popular dataset of grade-school math word problems that require multi-step arithmetic reasoning.
- **500 examples from MetaMathQA:** a high-quality dataset covering a broader range of math topics, including algebra and more complex notation.

The answers (the "response" column) are not the original dataset answers; they were generated by the `Qwen/Qwen2.5-Math-7B-Instruct` teacher model to provide high-quality, step-by-step reasoning examples for the student to learn from.
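The commit does not include the generation script; the sketch below shows one plausible way to produce such teacher responses via the Qwen chat template. The function name, loading options, and sampling settings are illustrative assumptions, not the author's exact pipeline.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

teacher_id = "Qwen/Qwen2.5-Math-7B-Instruct"
teacher_tok = AutoTokenizer.from_pretrained(teacher_id)
teacher = AutoModelForCausalLM.from_pretrained(
    teacher_id, torch_dtype=torch.bfloat16, device_map="auto"
)

def generate_teacher_response(question: str) -> str:
    # Wrap the raw problem in the teacher's chat format
    messages = [{"role": "user", "content": question}]
    prompt = teacher_tok.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    inputs = teacher_tok(prompt, return_tensors="pt").to(teacher.device)
    with torch.no_grad():
        out = teacher.generate(**inputs, max_new_tokens=512)
    # Keep only the newly generated tokens: the step-by-step solution
    new_tokens = out[0][inputs["input_ids"].shape[1]:]
    return teacher_tok.decode(new_tokens, skip_special_tokens=True)
```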
### Training Procedure

The model was fine-tuned with `SFTTrainer` from the TRL library on a single NVIDIA T4 GPU in a Kaggle notebook environment.

#### Preprocessing

Each data sample was formatted into a single string using the following template, which matches Phi-2's instruction format: `Instruct: {instruction}\nOutput: {response}<|endoftext|>`
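For illustration, a formatting function along these lines reproduces that template; the `query`/`response` column names follow MetaMathQA's schema and are an assumption for the mixed dataset.

```python
EOS = "<|endoftext|>"  # Phi-2's end-of-text token

def format_sample(example: dict) -> str:
    # Produces: "Instruct: {instruction}\nOutput: {response}<|endoftext|>"
    return (
        f"Instruct: {example['query'].strip()}\n"
        f"Output: {example['response'].strip()}{EOS}"
    )
```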
#### Training Hyperparameters

- **Framework:** Transformers, PEFT (QLoRA), TRL
- **Quantization:** 4-bit `nf4` with `float16` compute dtype
- **LoRA `r`:** 16
- **LoRA `alpha`:** 32
- **LoRA `target_modules`:** `["q_proj", "k_proj", "v_proj", "dense"]`
- **Batch size:** `per_device_train_batch_size` = 1, `gradient_accumulation_steps` = 8 (effective batch size of 8)
- **Optimizer:** `paged_adamw_8bit`
- **Learning rate:** `2e-4` with a constant scheduler
- **Epochs:** 3
- **Precision:** `fp16`
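For reference, these settings map onto configuration objects roughly as follows. This is a sketch under the hyperparameters listed above, not the author's exact training script: the `output_dir` is a placeholder, and the `SFTTrainer` wiring is omitted because its signature varies across TRL versions.

```python
import torch
from transformers import BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig

# 4-bit NF4 quantization with float16 compute, as listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

# LoRA settings: r=16, alpha=32, attention + dense projections
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "dense"],
    task_type="CAUSAL_LM",
)

# Effective batch size 8 via gradient accumulation, constant LR, fp16
training_args = TrainingArguments(
    output_dir="phi2-math-distill",  # placeholder path
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    optim="paged_adamw_8bit",
    learning_rate=2e-4,
    lr_scheduler_type="constant",
    num_train_epochs=3,
    fp16=True,
)
```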
 
## Evaluation

### Results & Summary

Evaluation was performed qualitatively by comparing the outputs of three models (the base Phi-2, this fine-tuned student model, and the teacher Qwen model) on a variety of math problems.

- **Success in Style Transfer:** The model adopted the verbose, step-by-step reasoning style of the teacher. This is a significant improvement over the base Phi-2, which tends to give short, direct answers without showing its work.
- **Success in LaTeX Generation:** A key failure of the V1 model was its inability to generate correct LaTeX math notation. This V2 model, trained on a more diverse dataset, **successfully generates LaTeX** for equations, fractions, and exponents, mirroring the teacher's output format.
- **Efficiency Gains:** As expected from distillation, the student delivers its detailed, high-quality answers **significantly faster** and with **fewer generated tokens** than the much larger 7B teacher, demonstrating the core benefit of this project.
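No automated benchmark accompanies this card; a minimal harness for this kind of side-by-side inspection could look like the sketch below. It assumes `model` (the PEFT-wrapped student) and `tokenizer` from the quick-start snippet, and the sample problem is illustrative.

```python
import torch

problems = [
    # GSM8K-style sample problem
    "Natalia sold clips to 48 of her friends in April, and then she sold "
    "half as many clips in May. How many clips did Natalia sell altogether?",
]

for q in problems:
    prompt = f"Instruct: {q}\nOutput:"
    inputs = tokenizer(prompt, return_tensors="pt", return_attention_mask=False).to(model.device)
    # Base Phi-2: temporarily switch the LoRA adapter off
    with model.disable_adapter(), torch.no_grad():
        base_out = model.generate(**inputs, max_new_tokens=300, pad_token_id=tokenizer.eos_token_id)
    with torch.no_grad():
        student_out = model.generate(**inputs, max_new_tokens=300, pad_token_id=tokenizer.eos_token_id)
    for name, out in [("base phi-2", base_out), ("student", student_out)]:
        print(f"--- {name} ---")
        print(tokenizer.decode(out[0], skip_special_tokens=True).split("Output:")[1].strip())
```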
## Bias, Risks, and Limitations

This is an experimental model trained for a specific purpose, and it has several limitations:

- It inherits any biases present in the base `microsoft/phi-2` and teacher `Qwen/Qwen2.5-Math-7B-Instruct` models.
- Its primary training objective was **style imitation**, not necessarily improved raw mathematical accuracy. It may produce plausible-sounding but mathematically incorrect reasoning.
- Its knowledge is limited to the patterns in the 2,500 training examples. It may not generalize to math problems far outside the scope of GSM8K and MetaMathQA (e.g., advanced calculus).

This model should **not** be used in production or critical applications. It is intended as a portfolio project demonstrating the effectiveness of knowledge distillation.
## More Information

[My LinkedIn](https://www.linkedin.com/in/deryferdikaoktoriansah/)

## Developer Contact

[My GitHub](https://github.com/DeryFerd)