wildanaziz committed
Commit 162aa7b · verified · 1 Parent(s): 270e900

Update README.md

Files changed (1): README.md (+49, -160)
---
base_model: aitfindonesia/Bakti-8B-Base
library_name: peft
license: apache-2.0
pipeline_tag: text-generation
tags:
- base_model:adapter:aitfindonesia/Bakti-8B-Base
- lora
- sft
- transformers
- unsloth
- multi-turn
- chatbot
- indonesian
---

# Model Card for SFT-Bakti-8B-Base-MultiTurn-Chatbot

## Model Details

### Model Description

This model is a fine-tuned version of [aitfindonesia/Bakti-8B-Base](https://huggingface.co/aitfindonesia/Bakti-8B-Base), built specifically for **multi-turn conversation** in Indonesian. It was trained with the **Unsloth** library for fast, memory-efficient fine-tuning using LoRA (Low-Rank Adaptation).

The model is optimized to retain context across multiple turns of conversation, making it suitable for interview simulations, customer support, and general-purpose Indonesian assistants.

- **Developed by:** DTP Fine Tuning Team
- **Model type:** Causal language model (fine-tuned Qwen2/3 architecture)
- **Language(s) (NLP):** Indonesian
- **License:** Apache 2.0
- **Finetuned from model:** aitfindonesia/Bakti-8B-Base
## Uses

### Direct Use

The model is designed for:
- Multi-turn chat interactions in Indonesian.
- Question answering (QA) that requires context from previous turns.
- Role-play interactions (e.g., interview scenarios).
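
Since the card does not include a quick-start snippet, a minimal multi-turn inference sketch is given below. The adapter repo id is a placeholder inferred from the card title, and the snippet assumes the base tokenizer ships a chat template; adjust both for your setup.

```python
# Minimal multi-turn inference sketch (not from the card itself).
# ADAPTER_ID is a placeholder -- replace it with this adapter's actual Hub id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_ID = "aitfindonesia/Bakti-8B-Base"
ADAPTER_ID = "dtp-fine-tuning/SFT-Bakti-8B-Base-MultiTurn-Chatbot"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(BASE_ID)
model = AutoModelForCausalLM.from_pretrained(
    BASE_ID, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(model, ADAPTER_ID)  # attach the LoRA adapter

# Multi-turn chat: pass the whole history so earlier turns provide context.
messages = [
    {"role": "user", "content": "Halo! Bisa bantu saya latihan wawancara kerja?"},
    {"role": "assistant", "content": "Tentu! Posisi apa yang ingin Anda latih?"},
    {"role": "user", "content": "Software engineer, mulai dari pertanyaan pertama ya."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```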

### Out-of-Scope Use

- The model should not be relied on for factual accuracy without RAG (retrieval-augmented generation), as it can hallucinate.
- It is not intended for code generation tasks.

  ## Training Details

### Training Data

**Dataset:** `dtp-fine-tuning/dtp-multiturn-interview-valid-15k`
- **Split:** train (90%) / test (10%)
- **Format:** multi-turn conversations
- **Max length:** 2048 tokens

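For illustration, the 90/10 split could be reproduced with the `datasets` library roughly as follows; the split method and seed are assumptions, since the card does not state them.

```python
# Hypothetical reconstruction of the 90/10 split; the card does not say
# how (or with which seed) the split was actually made.
from datasets import load_dataset

ds = load_dataset("dtp-fine-tuning/dtp-multiturn-interview-valid-15k", split="train")
splits = ds.train_test_split(test_size=0.1, seed=42)  # seed is an assumption
train_ds, eval_ds = splits["train"], splits["test"]
print(len(train_ds), len(eval_ds))
```
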
  ### Training Procedure

The model was fine-tuned with **Unsloth** on a single NVIDIA A100 (80 GB) GPU, using QLoRA: the base weights are loaded in 4-bit NF4 quantization to reduce memory usage while preserving performance.

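A minimal sketch of that loading step, assuming Unsloth's standard `FastLanguageModel` workflow (the team's actual script may differ):

```python
# Sketch of 4-bit (NF4) base-model loading with Unsloth.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="aitfindonesia/Bakti-8B-Base",
    max_seq_length=2048,   # matches the card's max length
    load_in_4bit=True,     # 4-bit quantization keeps VRAM low (QLoRA)
)
```
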
  #### Training Hyperparameters

- **Training regime:** QLoRA (4-bit quantization with FP16 compute)
- **Optimizer:** AdamW (8-bit)
- **Learning rate:** $2 \times 10^{-5}$
- **Scheduler:** linear with 5% warmup
- **Batch size:** 8 per device (gradient accumulation: 4, i.e., an effective batch size of 32)
- **Epochs:** 2
- **LoRA config:**
  - Rank ($r$): 16
  - Alpha ($\alpha$): 32
  - Dropout: 0.05
  - Target modules: `q_proj`, `k_proj`, `v_proj`, `o_proj`, `gate_proj`, `up_proj`, `down_proj`

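Mapped onto Unsloth and TRL, the configuration above would look roughly like the sketch below. It continues the loading and data sketches from earlier; the exact `SFTTrainer` signature varies across TRL versions, so treat this as a reconstruction rather than the team's actual script.

```python
# Reconstructed training config; the numbers come from the list above,
# everything else (variable names, preprocessing) is assumed.
from unsloth import FastLanguageModel
from transformers import TrainingArguments
from trl import SFTTrainer

# `model`, `tokenizer`, `train_ds`, `eval_ds` come from the earlier sketches.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,                  # LoRA rank
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    # Assumes conversations were pre-rendered into a "text" field
    # via the chat template; the card does not describe this step.
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=8,
        gradient_accumulation_steps=4,   # effective batch size 32
        learning_rate=2e-5,
        lr_scheduler_type="linear",
        warmup_ratio=0.05,               # 5% warmup
        num_train_epochs=2,
        optim="adamw_8bit",
        fp16=True,
        output_dir="outputs",
    ),
)
trainer.train()
```
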

#### Hardware

- **GPU:** NVIDIA A100 80 GB PCIe
- **VRAM usage:** peak allocation of roughly 19 GB (about 23% of the card), thanks to 4-bit loading

  ## Evaluation
  ### Results

The model shows strong convergence on the multi-turn dataset:
- **Final train loss:** $\approx 0.42$
- **Final eval loss:** $\approx 0.41$

*Note: on this Indonesian dataset, the model reaches lower loss values faster than the standard Qwen3-8B baseline.*

  ## Environmental Impact

- **Hardware type:** NVIDIA A100 80 GB
- **Compute region:** asia-east1
- **Carbon emitted:** 0.31 kg CO₂eq

## Framework Versions

- Unsloth
- PEFT
- Transformers
- TRL