yeongbin05 committed
Commit ba1596a · 1 Parent(s): 219137e

Initial release: FinBERT LoRA adapter for operational KPI sentiment

Files changed (1): README.md (+93 -153)

---
base_model: ProsusAI/finbert
library_name: peft
license: apache-2.0
tags:
- sentiment-analysis
- finance
- operational-metrics
- lora
- transformers
- domain-adaptation
- bias-correction
---

# FinBERT LoRA Adapter for Operational Metrics Sentiment

This repository provides a **LoRA adapter** for `ProsusAI/finbert` that mitigates a domain bias commonly observed in financial sentiment models.

## Motivation

Standard FinBERT models are trained heavily on financial news and reports. As a result, phrases containing words such as **"down"**, **"reduced"**, or **"failure"** are often interpreted as **negative signals**, even when they describe improvements in operational or quality-related metrics.

However, in manufacturing, operations, and enterprise contexts, statements like:

> *"Failure rate down 10% QoQ"*

represent **positive operational improvements**.

This adapter reduces that semantic conflict inside the model, without rule-based postprocessing.

---

## What This Adapter Does

✅ Classifies **decreases in adverse operational metrics** as **Positive**

**Quality / Operations KPIs**
- defect rate
- error rate
- failure rate
- scrap rate
- return rate

✅ Preserves **negative sentiment** for genuine financial deterioration

- revenue down
- profit reduced
- sales declined

### Base vs Adapter (sample inference, local run)

| Text | Base FinBERT | + Adapter (LoRA) |
|---|---|---|
| Failure rate down 10% QoQ | **Negative (0.9640)** | **Positive (0.9000)** |
| Defect rate reduced | **Neutral (0.6343)** | **Positive (0.7902)** |
| Revenue reduced by 20% | **Negative (0.9690)** | **Negative (0.9469)** |
| The production line was audited last week. | **Neutral (0.5524)** | **Positive (0.6094)** |

The adapter consistently reclassifies decreases in adverse operational KPIs as **Positive**, while preserving **Negative** sentiment for genuine financial deterioration.

These examples are based on a small set of manually selected sentences and are intended for illustrative comparison.
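
A comparison along these lines can be run locally with the sketch below (not part of the original card). It assumes the adapter id used in the "How to Use" section; exact scores will differ from the table depending on your environment.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel

base_id = "ProsusAI/finbert"
adapter_id = "yahoyaho13/finbert-lora-operational-sentiment"  # adapter repo id (see "How to Use")

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForSequenceClassification.from_pretrained(base_id).eval()
adapted = PeftModel.from_pretrained(
    AutoModelForSequenceClassification.from_pretrained(base_id), adapter_id
).eval()

def top_label(model, text):
    # Return the highest-probability label name and its score.
    inputs = tokenizer(text, return_tensors="pt")
    with torch.inference_mode():
        probs = torch.softmax(model(**inputs).logits, dim=-1)[0]
    idx = int(probs.argmax())
    return base.config.id2label[idx], round(float(probs[idx]), 4)

for text in ["Failure rate down 10% QoQ", "Defect rate reduced", "Revenue reduced by 20%"]:
    print(text, "| base:", top_label(base, text), "| adapter:", top_label(adapted, text))
```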

## Label Mapping

- 0 → positive
- 1 → negative
- 2 → neutral

---

## How to Use

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel
import torch

base = "ProsusAI/finbert"
adapter = "yahoyaho13/finbert-lora-operational-sentiment"

device = "cuda" if torch.cuda.is_available() else "cpu"

# Load the base FinBERT classifier with the label mapping used by this adapter.
tokenizer = AutoTokenizer.from_pretrained(base)
base_model = AutoModelForSequenceClassification.from_pretrained(
    base,
    num_labels=3,
    id2label={0: "positive", 1: "negative", 2: "neutral"},
    label2id={"positive": 0, "negative": 1, "neutral": 2},
)

# Attach the LoRA adapter on top of the base model.
model = PeftModel.from_pretrained(base_model, adapter).to(device).eval()

text = "Failure rate down 10% QoQ"
inputs = tokenizer(text, return_tensors="pt").to(device)

with torch.inference_mode():
    probs = torch.softmax(model(**inputs).logits, dim=-1)

# Per-label probabilities, e.g. {"positive": ..., "negative": ..., "neutral": ...}
print({base_model.config.id2label[i]: round(float(probs[0, i]), 4) for i in range(3)})
```
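
If a standalone checkpoint is preferred for deployment, the LoRA weights can optionally be merged back into the base model so that plain `transformers` loading works without `peft` at inference time. A minimal sketch, continuing from the snippet above (the output directory name is illustrative):

```python
# Optional: fold the adapter into the base weights and save a plain transformers checkpoint.
merged = model.merge_and_unload()
merged.save_pretrained("finbert-operational-merged")      # illustrative output path
tokenizer.save_pretrained("finbert-operational-merged")
```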

## Training Summary

- Base model: ProsusAI/finbert
- Fine-tuning method: LoRA (PEFT)
- Target modules: all-linear
- Dataset size: ~170 short operational / financial statements
- Hardware: NVIDIA GTX 1060 6GB (local training)

LoRA configuration:
- r = 16
- lora_alpha = 64
- dropout = 0.05
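
The training script is not part of this card; a `peft` configuration matching the settings above would look roughly as follows (a sketch, with `task_type` assumed to be sequence classification):

```python
from peft import LoraConfig, TaskType

# Sketch of a LoRA configuration matching the hyperparameters listed above (not the original training script).
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,   # sequence-classification head (assumption)
    r=16,
    lora_alpha=64,
    lora_dropout=0.05,
    target_modules="all-linear",  # adapt every linear layer, as stated above
)
```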

---

## Known Limitations
- Trained on a small, domain-specific dataset.
- Not intended as a general-purpose financial sentiment replacement.
- Best suited for short operational or KPI-style sentences.
- May over-predict **Positive** sentiment for some neutral operational statements due to limited training data.

---

## Intended Use
- Manufacturing and quality reporting
- Enterprise KPI commentary
- Mixed finance/operations text where **rate decreases imply improvement**

---

## License
Apache-2.0 (inherits base model license)

---

## Author
- Hugging Face: **yahoyaho13**