included Swedish examples
README.md CHANGED
@@ -20,7 +20,7 @@ Accuracy on a number of experiments on a minimal test set (35 examples) can be f
 
-Please note that these results are not a good representation of the model's actual performance. As previously stated, the test set is tiny and the examples in that test set are
+Please note that these results are not a good representation of the model's actual performance. As previously stated, the test set is tiny, and its examples were chosen to be clear instances of the categories at hand; this is not the case with real-life data. The model will be properly evaluated on real data at a later time.
 
 Id to emotional label schema is as follows:
 
@@ -53,9 +53,11 @@ from setfit import SetFitModel
 # Download from Hub and run inference
 model = SetFitModel.from_pretrained("gilleti/emotional-classification")
 # Run inference
-preds = model(["
+preds = model(["Ingen tech-dystopi slår människans inre mörker", "Ina Lundström: Jag har två Bruce-tatueringar"])
 ```
 
+This outputs the predictions sadness/disappointment for the first headline (roughly, "No tech dystopia beats humanity's inner darkness") and absence of emotion for the second ("Ina Lundström: I have two Bruce tattoos"). Please note that these examples are cherry-picked; headlines (which is what the model is trained on) are rarely this clear.
+
 ## BibTeX entry and citation info
 
 ```bibtex
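As a usage note on the inference snippet added above: calling the SetFit model returns integer class ids, so a small helper is handy for mapping them back to emotion labels. The mapping below is a minimal sketch with hypothetical id numbers — the authoritative id-to-label schema is the one listed in the README itself, and only the two labels mentioned in this commit are shown.

```python
# HYPOTHETICAL id-to-label mapping -- the real schema is listed in the
# model card; the id numbers here are placeholders for illustration only.
ID2LABEL = {
    0: "absence of emotion",
    1: "sadness/disappointment",
}

def ids_to_labels(preds):
    """Translate integer class ids (as returned by the model) into label names."""
    return [ID2LABEL.get(int(p), "unknown id") for p in preds]

# For example, ids [1, 0] map to the two predictions discussed above:
print(ids_to_labels([1, 0]))  # -> ['sadness/disappointment', 'absence of emotion']
```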