disham993 committed
Commit 881e35d · verified · 1 Parent(s): 882be29

Update README.md

Files changed (1): README.md (+43 -19)
README.md CHANGED
@@ -8,56 +8,80 @@ tags:
  datasets:
  - disham993/ElectricalDeviceFeedbackBalanced
  metrics:
- - epoch: 1.0
- - eval_f1: 0.8665314714124963
- - eval_accuracy: 0.8683431952662722
- - eval_runtime: 2.6138
- - eval_samples_per_second: 517.252
- - eval_steps_per_second: 16.451
+ - epoch: 5.0
+ - eval_f1: 0.8928
+ - eval_accuracy: 0.8897
+ - eval_runtime: 2.5806
+ - eval_samples_per_second: 523.905
+ - eval_steps_per_second: 16.663
  ---
 
- # disham993/electrical-classification-bert-large-uncased
+ # electrical-classification-bert-large-uncased
 
  ## Model description
 
- This model is fine-tuned from [google-bert/bert-large-uncased](https://huggingface.co/google-bert/bert-large-uncased) for text-classification tasks.
+ This model is fine-tuned from [google-bert/bert-large-uncased](https://huggingface.co/google-bert/bert-large-uncased) for text classification tasks, specifically sentiment analysis of customer feedback on electrical devices such as circuit breakers, transformers, smart meters, inverters, solar panels, and power strips. The model classifies sentiment into Positive, Negative, Neutral, and Mixed with high precision and recall, making it well suited for analyzing product reviews, customer surveys, and other feedback to derive actionable insights.
 
  ## Training Data
 
- The model was trained on the disham993/ElectricalDeviceFeedbackBalanced dataset.
+ The model was trained on the [disham993/ElectricalDeviceFeedbackBalanced](https://huggingface.co/datasets/disham993/ElectricalDeviceFeedbackBalanced) dataset.
 
  ## Model Details
- - **Base Model:** google-bert/bert-large-uncased
+ - **Base Model:** [google-bert/bert-large-uncased](https://huggingface.co/google-bert/bert-large-uncased)
  - **Task:** text-classification
  - **Language:** en
- - **Dataset:** disham993/ElectricalDeviceFeedbackBalanced
+ - **Dataset:** [disham993/ElectricalDeviceFeedbackBalanced](https://huggingface.co/datasets/disham993/ElectricalDeviceFeedbackBalanced)
 
  ## Training procedure
 
  ### Training hyperparameters
- [Please add your training hyperparameters here]
+
+ The model was fine-tuned using the following hyperparameters:
+
+ - **Evaluation Strategy:** epoch
+ - **Learning Rate:** 1e-5
+ - **Batch Size:** 32 (for both training and evaluation)
+ - **Number of Epochs:** 5
+ - **Weight Decay:** 0.01
 
  ## Evaluation results
 
- ### Metrics\n- epoch: 1.0\n- eval_f1: 0.8665314714124963\n- eval_accuracy: 0.8683431952662722\n- eval_runtime: 2.6138\n- eval_samples_per_second: 517.252\n- eval_steps_per_second: 16.451
+ The following metrics were achieved during evaluation:
+
+ - **F1 Score:** 0.8928
+ - **Accuracy:** 0.8897
+ - **Eval Runtime:** 2.5806 s
+ - **Eval Samples per Second:** 523.905
+ - **Eval Steps per Second:** 16.663
 
  ## Usage
 
+ You can use this model for sentiment analysis of electrical device feedback as follows:
+
  ```python
- from transformers import AutoTokenizer, AutoModel
-
- tokenizer = AutoTokenizer.from_pretrained("disham993/electrical-classification-bert-large-uncased")
- model = AutoModel.from_pretrained("disham993/electrical-classification-bert-large-uncased")
+ from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
+
+ model_name = "disham993/electrical-classification-bert-large-uncased"
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+ model = AutoModelForSequenceClassification.from_pretrained(model_name)
+ nlp = pipeline("text-classification", model=model, tokenizer=tokenizer)
+
+ text = "The new washing machine is efficient but produces a bit of noise."
+ classification_results = nlp(text)
+ print(classification_results)
  ```
 
  ## Limitations and bias
 
- [Add any known limitations or biases of the model]
+ The dataset includes synthetic data generated using Llama 3.1:8b, and despite careful optimization and prompt engineering, the model is not immune to labeling errors. As LLM technology is still maturing, the generated data may carry inaccuracies or biases that affect the model's performance.
+
+ This model is intended for research and educational purposes only; users are encouraged to validate results before applying them to critical applications.
 
  ## Training Infrastructure
 
- [Add details about training infrastructure used]
+ For a complete guide covering the entire process, from data tokenization to pushing the model to the Hugging Face Hub, please refer to the [GitHub repository](https://github.com/di37/classification-electrical-feedback-finetuning).
 
  ## Last update
 
- 2025-01-05
+ 2025-01-05
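
The hyperparameters listed in the updated README can be collected into a single config sketch. The key names below follow `transformers.TrainingArguments` conventions and are assumptions on my part; only the values come from the README.

```python
# Sketch: the README's hyperparameters as a TrainingArguments-style dict.
# Key names are assumed (TrainingArguments conventions); values are from
# the model card above.
training_config = {
    "eval_strategy": "epoch",           # evaluate at the end of each epoch
    "learning_rate": 1e-5,
    "per_device_train_batch_size": 32,  # same batch size for train and eval
    "per_device_eval_batch_size": 32,
    "num_train_epochs": 5,
    "weight_decay": 0.01,
}
```

Under these assumptions, passing the dict as `TrainingArguments(output_dir=..., **training_config)` would reproduce the described setup.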