logasanjeev committed on
Commit 89e55f2 · verified · 1 Parent(s): 36b83ba

Update README.md

Files changed (1):
  1. README.md +34 -40
README.md CHANGED
@@ -37,7 +37,7 @@ base_model:
  - google-bert/bert-base-uncased
  base_model_relation: finetune
  model-index:
- - name: Emotion Analyzer Bert
  results:
  - task:
  type: multi-label-classification
@@ -156,10 +156,10 @@ model-index:
  source:
  name: Kaggle Evaluation Notebook
  url: >-
- https://www.kaggle.com/code/ravindranlogasanjeev/evaluation-logasanjeev-emotions-analyzer-bert/notebook
  ---
 
- # Emotions Analyzer Bert
 
  Fine-tuned [BERT-base-uncased](https://huggingface.co/bert-base-uncased) on [GoEmotions](https://huggingface.co/datasets/google-research-datasets/go_emotions) for multi-label classification (28 emotions). This updated version includes improved Macro F1, ONNX support for efficient inference, and visualizations for better interpretability.
 
@@ -176,7 +176,7 @@ Fine-tuned [BERT-base-uncased](https://huggingface.co/bert-base-uncased) on [GoE
 
  ## Try It Out
 
- For accurate predictions with optimized thresholds, use the [Gradio demo](https://logasanjeev-emotions-analyzer-bert-demo.hf.space). The demo now includes preprocessed text and the top 5 predicted emotions, in addition to thresholded predictions. Example predictions:
 
  - **Input**: "I’m thrilled to win this award! 😄"
  - **Output**: `excitement: 0.5836, joy: 0.5290`
@@ -194,7 +194,7 @@ For accurate predictions with optimized thresholds, use the [Gradio demo](https:
  - **Hamming Loss**: 0.0377
  - **Avg Positive Predictions**: 1.4789
 
- For a detailed evaluation, including class-wise accuracy, precision, recall, F1, MCC, support, and thresholds, along with visualizations, check out the [Kaggle notebook](https://www.kaggle.com/code/ravindranlogasanjeev/evaluation-logasanjeev-emotions-analyzer-bert/notebook).
 
  ### Class-Wise Performance
 
@@ -256,29 +256,26 @@ The easiest way to use the model with PyTorch is to programmatically fetch and u
  Run the following Python script to download `inference.py` and make predictions:
 
  ```python
- !pip install transformers torch huggingface_hub emoji -q
 
- import shutil
- import os
  from huggingface_hub import hf_hub_download
- from importlib import import_module
 
- repo_id = "logasanjeev/emotions-analyzer-bert"
- local_file = hf_hub_download(repo_id=repo_id, filename="inference.py")
 
- current_dir = os.getcwd()
- destination = os.path.join(current_dir, "inference.py")
- shutil.copy(local_file, destination)
-
- inference_module = import_module("inference")
- predict_emotions = inference_module.predict_emotions
 
  text = "I’m thrilled to win this award! 😄"
- result, processed = predict_emotions(text)
- print(f"Input: {text}")
- print(f"Processed: {processed}")
- print("Predicted Emotions:")
- print(result)
  ```
 
  #### Expected Output:
@@ -323,29 +320,26 @@ For faster and more efficient inference using ONNX, you can use `onnx_inference.
  Run the following Python script to download `onnx_inference.py` and make predictions:
 
  ```python
- !pip install transformers onnxruntime huggingface_hub emoji numpy -q
 
- import shutil
- import os
  from huggingface_hub import hf_hub_download
- from importlib import import_module
 
- repo_id = "logasanjeev/emotions-analyzer-bert"
- local_file = hf_hub_download(repo_id=repo_id, filename="onnx_inference.py")
 
- current_dir = os.getcwd()
- destination = os.path.join(current_dir, "onnx_inference.py")
- shutil.copy(local_file, destination)
-
- onnx_inference_module = import_module("onnx_inference")
- predict_emotions = onnx_inference_module.predict_emotions
 
  text = "I’m thrilled to win this award! 😄"
- result, processed = predict_emotions(text)
- print(f"Input: {text}")
- print(f"Processed: {processed}")
- print("Predicted Emotions:")
- print(result)
  ```
 
  #### Expected Output:
@@ -407,7 +401,7 @@ def preprocess_text(text):
  text = text.lower()
  return text
 
- repo_id = "logasanjeev/emotions-analyzer-bert"
  model = BertForSequenceClassification.from_pretrained(repo_id)
  tokenizer = BertTokenizer.from_pretrained(repo_id)
 
 
  - google-bert/bert-base-uncased
  base_model_relation: finetune
  model-index:
+ - name: Bert Emotion Classifier
  results:
  - task:
  type: multi-label-classification
 
  source:
  name: Kaggle Evaluation Notebook
  url: >-
+ https://www.kaggle.com/code/ravindranlogasanjeev/evaluation-logasanjeev-bert-emotion-classifier/notebook
  ---
 
+ # Bert Emotion Classifier
 
  Fine-tuned [BERT-base-uncased](https://huggingface.co/bert-base-uncased) on [GoEmotions](https://huggingface.co/datasets/google-research-datasets/go_emotions) for multi-label classification (28 emotions). This updated version includes improved Macro F1, ONNX support for efficient inference, and visualizations for better interpretability.
 
 
 
  ## Try It Out
 
+ For accurate predictions with optimized thresholds, use the [Gradio demo](https://logasanjeev-bert-emotion-classifier-demo.hf.space). The demo now includes preprocessed text and the top 5 predicted emotions, in addition to thresholded predictions. Example predictions:
 
  - **Input**: "I’m thrilled to win this award! 😄"
  - **Output**: `excitement: 0.5836, joy: 0.5290`
 
  - **Hamming Loss**: 0.0377
  - **Avg Positive Predictions**: 1.4789
 
+ For a detailed evaluation, including class-wise accuracy, precision, recall, F1, MCC, support, and thresholds, along with visualizations, check out the [Kaggle notebook](https://www.kaggle.com/code/ravindranlogasanjeev/evaluation-logasanjeev-bert-emotion-classifier/notebook).
 
  ### Class-Wise Performance
 
 
 
  Run the following Python script to download `inference.py` and make predictions:
 
  ```python
+ # pip install transformers torch huggingface_hub emoji -q
 
  from huggingface_hub import hf_hub_download
+ import importlib.util
 
+ # download inference script
+ path = hf_hub_download(repo_id="logasanjeev/bert-emotion-classifier", filename="inference.py")
 
+ # load module
+ spec = importlib.util.spec_from_file_location("inference", path)
+ inference = importlib.util.module_from_spec(spec)
+ spec.loader.exec_module(inference)
 
+ # run prediction
  text = "I’m thrilled to win this award! 😄"
+ result, processed = inference.predict_emotions(text)
+
+ print("Input:", text)
+ print("Processed:", processed)
+ print("Predicted Emotions:", result)
  ```
 
  #### Expected Output:
 
  Run the following Python script to download `onnx_inference.py` and make predictions:
 
  ```python
+ # pip install transformers onnxruntime huggingface_hub emoji numpy -q
 
  from huggingface_hub import hf_hub_download
+ import importlib.util
 
+ # download ONNX inference script
+ path = hf_hub_download(repo_id="logasanjeev/bert-emotion-classifier", filename="onnx_inference.py")
 
+ # load module
+ spec = importlib.util.spec_from_file_location("onnx_inference", path)
+ onnx_inference = importlib.util.module_from_spec(spec)
+ spec.loader.exec_module(onnx_inference)
 
+ # run prediction
  text = "I’m thrilled to win this award! 😄"
+ result, processed = onnx_inference.predict_emotions(text)
+
+ print("Input:", text)
+ print("Processed:", processed)
+ print("Predicted Emotions:", result)
  ```
 
  #### Expected Output:
 
  text = text.lower()
  return text
 
+ repo_id = "logasanjeev/bert-emotion-classifier"
  model = BertForSequenceClassification.from_pretrained(repo_id)
  tokenizer = BertTokenizer.from_pretrained(repo_id)
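
The thresholded predictions shown in the README come from comparing the model's 28 sigmoid scores against per-class cutoffs. As a rough illustration of that post-processing step (not the repo's actual code), here is a minimal sketch: the uniform 0.5 threshold is an assumption for demonstration (the repo tunes a separate optimized threshold per class), and the dummy logits stand in for real model output.

```python
import math

# GoEmotions label set (28 emotions), in the dataset's standard order
LABELS = [
    "admiration", "amusement", "anger", "annoyance", "approval", "caring",
    "confusion", "curiosity", "desire", "disappointment", "disapproval",
    "disgust", "embarrassment", "excitement", "fear", "gratitude", "grief",
    "joy", "love", "nervousness", "optimism", "pride", "realization",
    "relief", "remorse", "sadness", "surprise", "neutral",
]

def decode(logits, thresholds=None):
    """Turn raw per-class logits into (label, probability) pairs above threshold."""
    probs = [1.0 / (1.0 + math.exp(-z)) for z in logits]  # sigmoid per class
    if thresholds is None:
        thresholds = [0.5] * len(LABELS)  # illustrative uniform cutoff
    kept = [(label, round(p, 4))
            for label, p, t in zip(LABELS, probs, thresholds) if p >= t]
    return sorted(kept, key=lambda pair: pair[1], reverse=True)

# Dummy logits standing in for model(**inputs).logits[0].tolist()
logits = [-3.0] * 28
logits[LABELS.index("excitement")] = 0.34
logits[LABELS.index("joy")] = 0.12
print(decode(logits))  # excitement and joy clear the cutoff; the rest do not
```

With real per-class thresholds loaded from the repo, `thresholds` would simply be passed in as a 28-element list instead of the uniform default.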