logasanjeev committed · verified
Commit 0d5a1ae · 1 Parent(s): ec8ac23

Fix inference

Files changed (1):
  1. README.md +14 -3
README.md CHANGED
@@ -22,14 +22,16 @@ widget:
     example_title: Nervousness Example
 inference:
   parameters:
-    type: text-classification
-    return_type: list
+    task: text-classification
+    output_type: list
   script: inference.py
+base_model:
+- google-bert/bert-base-uncased
 ---
 
 # GoEmotions BERT Classifier
 
-This is a fine-tuned **BERT-base-uncased** model for multi-label emotion classification on the [GoEmotions dataset](https://huggingface.co/datasets/google-research-datasets/go_emotions), predicting 28 emotions (e.g., admiration, anger, joy, neutral).
+Fine-tuned [BERT-base-uncased](https://huggingface.co/bert-base-uncased) on [go_emotions](https://huggingface.co/datasets/go_emotions) for multi-label classification (28 emotions).
 
 ## Model Details
 
@@ -43,6 +45,15 @@ This is a fine-tuned **BERT-base-uncased** model for multi-label emotion classif
 ## Try It Out
 For accurate predictions with optimized thresholds, use the [Gradio demo](https://logasanjeev-goemotions-bert-demo.hf.space).
 
+## Inference
+
+The widget uses `inference.py` with optimized thresholds (`thresholds.json`, Micro F1: 0.6025). Try the [Gradio demo](https://logasanjeev-goemotions-bert-demo.hf.space) for a full interface.
+
+```python
+from transformers import pipeline
+classifier = pipeline("text-classification", model="logasanjeev/goemotions-bert", top_k=None)
+print(classifier("Don't get hurt"))
+```
 ## Performance
 
 - **Micro F1**: 0.6025 (optimized thresholds)
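The pipeline call added in this commit returns scores for all 28 labels (`top_k=None`), and the commit message notes that accurate results come from applying the optimized per-label thresholds. The repository's `inference.py` and `thresholds.json` are not shown here, so the following is only a sketch of that post-processing step — the function name, the JSON structure, and every threshold value are illustrative assumptions, not the repository's actual contents.

```python
# Sketch of per-label threshold filtering for multi-label emotion output.
# Scores mimic the shape returned by a transformers text-classification
# pipeline with top_k=None; thresholds are made-up example values.

def apply_thresholds(scores, thresholds, default=0.5):
    """Keep only labels whose score clears that label's threshold.

    scores: list of {"label": str, "score": float} dicts.
    thresholds: mapping of label -> minimum score; `default` is used
    for any label missing from the mapping.
    """
    return [s for s in scores if s["score"] >= thresholds.get(s["label"], default)]

# Hypothetical pipeline output for one input text:
scores = [
    {"label": "caring", "score": 0.62},
    {"label": "nervousness", "score": 0.41},
    {"label": "neutral", "score": 0.12},
]
thresholds = {"caring": 0.45, "nervousness": 0.25, "neutral": 0.55}

print(apply_thresholds(scores, thresholds))
# "caring" and "nervousness" clear their thresholds; "neutral" does not.
```

Per-label thresholds matter here because multi-label sigmoid outputs are not comparable across classes — a frequent class like "neutral" may need a higher cutoff than a rare one like "nervousness" to maximize Micro F1.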